Your Media Server may receive many requests from upstream Media Servers. For example, you might have several Media Servers running face detection but use a single Media Server with a GPU to perform face recognition for all of your cameras or video feeds.
The maximum number of processing sessions to run concurrently (as a result of requests from upstream Media Servers) is configured by the MaxProcessingSessions parameter in the [Chaining] section of the Media Server configuration file:

[Chaining]
MaxProcessingSessions=1
QueueTimeout=60
The MaxProcessingSessions parameter only limits sessions requested by upstream Media Servers; it has no effect on process actions sent directly to the Media Server (the number of process actions to run concurrently is controlled by the MaximumThreads parameter, as described in Process Multiple Requests Simultaneously).
The default value of the MaxProcessingSessions parameter is 1, so if you want to run more than one downstream session concurrently, you must increase the value of this parameter.
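For example, to allow a GPU-equipped Media Server to run several face recognition sessions for upstream servers at the same time, you might set a higher value (the value 4 here is illustrative; choose a value appropriate for your hardware):

```
[Chaining]
MaxProcessingSessions=4
QueueTimeout=60
```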
If the Media Server receives more requests than the number specified by MaxProcessingSessions, the additional requests are added to a queue and start only when other sessions finish. The upstream Media Server does not start ingesting the source media until the downstream Media Server is ready to start processing.
The QueueTimeout configuration parameter specifies the maximum amount of time that a request from an upstream Media Server can remain in the queue. If this timeout is exceeded, the request is removed from the queue and an error is returned to the upstream Media Server. If you are processing live streams, you might want to reduce this timeout so that an error is returned to the upstream Media Server quickly, rather than letting requests wait for media that has already passed.
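For example, when serving live streams you might use a short timeout so that upstream servers are notified quickly when the downstream server is busy (the value 5 here is illustrative, not a recommendation):

```
[Chaining]
MaxProcessingSessions=2
QueueTimeout=5
```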