TY - GEN
T1 - Minimizing communication overhead in window-based parallel complex event processing
AU - Mayer, Ruben
AU - Tariq, Muhammad Adnan
AU - Rothermel, Kurt
N1 - Publisher Copyright:
© 2017 Copyright held by the owner/author(s).
PY - 2017/6/8
Y1 - 2017/6/8
N2 - Distributed Complex Event Processing has emerged as a well-established paradigm to detect situations of interest from basic sensor streams, building an operator graph between sensors and applications. In order to detect event patterns that correspond to situations of interest, each operator correlates events on its incoming streams according to a sliding window mechanism. To increase the throughput of an operator, different windows can be assigned to different operator instances, i.e., identical operator copies, which process them in parallel. This implies that events that are part of multiple overlapping windows are replicated to different operator instances. The communication overhead of replicating these events can be reduced by assigning overlapping windows to the same operator instance. However, this imposes a higher processing load on that single operator instance, possibly overloading it. In this paper, we address the trade-off between processing load and communication overhead when assigning overlapping windows to a single operator instance. Controlling this trade-off is challenging and cannot be solved with traditional reactive methods. To this end, we propose a model-based batch scheduling controller that builds on prediction. Evaluations show that our approach significantly saves bandwidth while keeping a user-defined latency bound in the operator instances.
AB - Distributed Complex Event Processing has emerged as a well-established paradigm to detect situations of interest from basic sensor streams, building an operator graph between sensors and applications. In order to detect event patterns that correspond to situations of interest, each operator correlates events on its incoming streams according to a sliding window mechanism. To increase the throughput of an operator, different windows can be assigned to different operator instances, i.e., identical operator copies, which process them in parallel. This implies that events that are part of multiple overlapping windows are replicated to different operator instances. The communication overhead of replicating these events can be reduced by assigning overlapping windows to the same operator instance. However, this imposes a higher processing load on that single operator instance, possibly overloading it. In this paper, we address the trade-off between processing load and communication overhead when assigning overlapping windows to a single operator instance. Controlling this trade-off is challenging and cannot be solved with traditional reactive methods. To this end, we propose a model-based batch scheduling controller that builds on prediction. Evaluations show that our approach significantly saves bandwidth while keeping a user-defined latency bound in the operator instances.
KW - Communication overhead
KW - Complex event processing
KW - Data parallelization
UR - http://www.scopus.com/inward/record.url?scp=85023184826&partnerID=8YFLogxK
U2 - 10.1145/3093742.3093914
DO - 10.1145/3093742.3093914
M3 - Conference contribution
AN - SCOPUS:85023184826
T3 - DEBS 2017 - Proceedings of the 11th ACM International Conference on Distributed Event-Based Systems
SP - 54
EP - 65
BT - DEBS 2017 - Proceedings of the 11th ACM International Conference on Distributed Event-Based Systems
PB - Association for Computing Machinery, Inc
T2 - 11th ACM International Conference on Distributed Event-Based Systems, DEBS 2017
Y2 - 19 June 2017 through 23 June 2017
ER -