Ordered Event Processing Subsystem
The ordered event processing subsystem ensures that events such as bind and unbind are processed in the proper order. The subsystem provides a facade over a thread pool and has tuning parameters similar to those of other subsystems, such as the Event Processing Subsystem. At this time, this pool services only Presence evaluations that are triggered by bind/unbind events.
The settings described in the following table limit the memory footprint when the system comes under heavy load. If they are exceeded, Things will begin to change isReporting to false regardless of the actual device state (for details, see the discussion below the table).
Setting: Min threads allocated to event processing pool
Description: The minimum number of threads allocated to the pool. This setting is also the initial size of the thread pool. If threads become idle, they are pruned down to this number to conserve resources.

Setting: Max threads allocated to event processing pool
Description: The maximum number of threads that can be allocated. The pool dynamically resizes itself under load, up to this number.

Setting: Max Queue Entries Before Adding New Working Thread
Description: The maximum number of tasks (presence evaluations) that can be waiting for immediate processing before the pool resizes itself.

Setting: Max Blocked Tasks to Guarantee In-Order Execution
Description: The maximum number of tasks (presence evaluations) that can be queued waiting for a previous evaluation on the same device to finish.
Note that the first three settings are shared with the Event Processing Subsystem.
The subsystem distinguishes two reasons that a task might be blocked:
Not enough worker threads are available to process all of the evaluations in real time as they occur. The "Max Queue Entries" setting limits the likelihood of this situation occurring.
Alternatively, the same device might be flickering on and off, requiring numerous evaluations. If an evaluation will not finish before the next bind/unbind event occurs, the second evaluation must wait until the first one finishes. The "Max Blocked Tasks" setting governs how many evaluations can be held waiting this way.
If the "Max Queue Entries" limit is exceeded, the thread pool attempts to resize itself. If it succeeds, the new worker thread helps work through the queue of evaluations.
If the pool cannot resize because of the "Max threads" limit, or whenever the "Max Blocked Tasks" limit is exceeded, the thread pool rejects the evaluation. A rejected evaluation immediately short-circuits to "not reporting" without any further processing.
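The interplay of the two limits can be sketched as a small admission-control model. This is an illustrative sketch only: all class, method, and field names below are invented, and the actual subsystem implementation is not shown in this documentation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the admission logic described above.
// Two independent limits decide a new evaluation's fate:
//   - maxQueueEntries: tasks waiting for any free worker; once this
//     queue is full, the pool grows until maxThreads is reached.
//   - maxBlockedTasks: tasks waiting behind an earlier evaluation on
//     the SAME device; exceeding this limit always rejects.
// A rejected evaluation short-circuits the device to "not reporting".
public class AdmissionControl {
    public enum Decision { RUN, QUEUE, GROW_POOL, REJECT }

    private final int maxThreads;
    private final int maxQueueEntries;
    private final int maxBlockedTasks;
    private int threads;      // current pool size (starts at minThreads)
    private int busy;         // workers currently running an evaluation
    private int readyQueue;   // tasks queued for lack of a free worker
    private int blockedTotal; // tasks queued behind a same-device task
    private final Map<String, Integer> inFlight = new HashMap<>();

    public AdmissionControl(int minThreads, int maxThreads,
                            int maxQueueEntries, int maxBlockedTasks) {
        this.threads = minThreads;
        this.maxThreads = maxThreads;
        this.maxQueueEntries = maxQueueEntries;
        this.maxBlockedTasks = maxBlockedTasks;
    }

    /** Decide what happens to a new presence evaluation for a device. */
    public Decision admit(String device) {
        if (inFlight.getOrDefault(device, 0) > 0) {
            // An earlier evaluation for this device is still pending:
            // this one must wait to preserve in-order execution.
            if (blockedTotal >= maxBlockedTasks) {
                return Decision.REJECT; // device becomes "not reporting"
            }
            blockedTotal++;
            return Decision.QUEUE;
        }
        inFlight.merge(device, 1, Integer::sum);
        if (busy < threads) {
            busy++;
            return Decision.RUN;        // a worker is free right now
        }
        if (readyQueue < maxQueueEntries) {
            readyQueue++;
            return Decision.QUEUE;      // wait for any worker
        }
        if (threads < maxThreads) {
            threads++;                  // queue is full: grow the pool
            busy++;
            return Decision.GROW_POOL;
        }
        inFlight.merge(device, -1, Integer::sum); // undo the reservation
        return Decision.REJECT;         // full queue, pool at max size
    }
}
```

For example, with min threads 1, max threads 2, max queue entries 1, and max blocked tasks 1, a flickering device "A" has its first evaluation run, its second queued behind the first, and its third rejected; meanwhile unrelated devices fill the ready queue, grow the pool to its maximum, and are then rejected as well.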
In High Availability environments, the isReporting property might not behave as expected because AlwaysOn requests are routed round-robin by the Connection Server. As a result, BIND/UNBIND events for the same device can be executed by different nodes or queues.