Example Configurations

This section includes example configurations that demonstrate how to set up feedback chaining between an upstream Media Server and a remote Media Server.

The following configuration, for the upstream Media Server, runs face detection, crops the detected faces, and sends the resulting records to a Media Server at gpu-mediaserver:14000 for remote face recognition. The recognition results are returned to the upstream Media Server and written out by its XML output engine:

[Ingest]
IngestEngine=AV

[AV]
Type=LibAV

[Analysis]
AnalysisEngine0=FaceDetect
AnalysisEngine1=RemoteAnalysis

[FaceDetect]
Type=FaceDetect
FaceDirection=Front
MinSize=200
SizeUnit=pixel

[RemoteAnalysis]
Type=RemoteAnalysis
Host=gpu-mediaserver
Port=14000
ConfigName=RemoteFaceRecognition
Input=DetectedFaces:Crop.Output
Output=RecognizedFaces:FaceRecognition.Result

[Transform]
TransformEngine0=Crop

[Crop]
Type=Crop
Input=FaceDetect.ResultWithSource

[Output]
OutputEngine0=XML

[XML]
Type=XML
Input=RemoteAnalysis.RecognizedFaces
XMLOutputPath=./output/html/%segment.type%_results_%segment.sequence%.html
XSLTemplate=./xsl/tohtml.xsl
Mode=Time
OutputInterval=30s
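
To use a configuration like the one above, you start processing on the upstream Media Server with the process action. The following sketch is illustrative only: it assumes the upstream configuration has been saved as ChainedFaceRecognition.cfg in the upstream server's configuration directory, and the host name, source path, and use of the Python requests library are placeholders rather than values from this documentation.

import xml.etree.ElementTree as ET
import requests  # third-party HTTP client, assumed to be installed

UPSTREAM = "http://upstream-mediaserver:14000/"  # placeholder host for the upstream Media Server

# Start an asynchronous process action. configName names a configuration file in the
# upstream server's configuration directory; ChainedFaceRecognition is a placeholder
# name for the configuration shown above.
response = requests.get(UPSTREAM, params={
    "action": "process",
    "source": "./video/visitors.mp4",        # placeholder; could also be a stream URL
    "configName": "ChainedFaceRecognition",
})
response.raise_for_status()

# The ACI response contains a token that identifies the asynchronous task.
token = ET.fromstring(response.text).findtext(".//{*}token")
print(token)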

The following configuration, for the remote Media Server, runs face recognition on the records that it receives from the upstream Media Server. To match the upstream configuration above, save this configuration as RemoteFaceRecognition.cfg in the folder specified by the ConfigDirectory parameter on the remote Media Server.

[Ingest]
IngestEngine=RecordsFromUpstream

[RecordsFromUpstream]
Type=Receive
Input=DetectedFaces

[Analysis]
AnalysisEngine0=FaceRecognition

[FaceRecognition]
Type=FaceRecognize
Input=RecordsFromUpstream.DetectedFaces
RecognitionThreshold=60
MaxRecognitionResults=1
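
Because the process action is asynchronous, the recognition results do not appear in the action response; the upstream Media Server writes them to the files configured in its [XML] output engine. One way to monitor the task is to poll it with the QueueInfo action, using the token returned by the process action. The sketch below is an assumption-laden example, not an excerpt from this documentation: the host name, polling interval, and status values are placeholders.

import time
import xml.etree.ElementTree as ET
import requests  # third-party HTTP client, assumed to be installed

UPSTREAM = "http://upstream-mediaserver:14000/"  # placeholder host for the upstream Media Server

def wait_for_task(token, poll_seconds=10):
    # Poll the process queue until the task is no longer queued or running.
    # The status names checked here are assumptions about the ACI response.
    while True:
        response = requests.get(UPSTREAM, params={
            "action": "queueInfo",
            "queueName": "process",
            "queueAction": "getStatus",
            "token": token,
        })
        response.raise_for_status()
        status = ET.fromstring(response.text).findtext(".//{*}status") or ""
        if status.lower() not in ("queued", "processing"):
            return status
        time.sleep(poll_seconds)

# Usage, with the token returned by the process action in the previous sketch:
#     print(wait_for_task(token))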
