TY - GEN
T1 - Task Allocation in Industrial Edge Networks with Particle Swarm Optimization and Deep Reinforcement Learning
AU - Buschmann, Philippe
AU - Shorim, Mostafa H.M.
AU - Helm, Max
AU - Bröring, Arne
AU - Carle, Georg
N1 - Publisher Copyright:
© 2022 Copyright held by the owner/author(s).
PY - 2022/11/7
Y1 - 2022/11/7
N2 - To avoid the disadvantages of a cloud-centric infrastructure, next-generation industrial scenarios focus on using distributed edge networks. Task allocation in distributed edge networks with regard to minimizing energy consumption is NP-hard and requires considerable computational effort to obtain optimal results with conventional algorithms such as Integer Linear Programming (ILP). We extend an existing ILP formulation and its ILP heuristic to multi-workflow allocation and propose a Particle Swarm Optimization (PSO) and a Deep Reinforcement Learning (DRL) algorithm. PSO and DRL outperform the ILP heuristic with median optimality gaps of 7.7 % and 35.9 %, respectively, against 100.4 %. DRL has the lowest upper bound on the optimality gap and performs better than PSO for problem sizes of more than 25 tasks, while PSO fails to find a feasible solution for more than 60 tasks. DRL also executes significantly faster, with a maximum of 1 s compared to a maximum of 361 s for PSO. In conclusion, our experiments indicate that PSO is more suitable for smaller and DRL for larger task allocation problems.
AB - To avoid the disadvantages of a cloud-centric infrastructure, next-generation industrial scenarios focus on using distributed edge networks. Task allocation in distributed edge networks with regard to minimizing energy consumption is NP-hard and requires considerable computational effort to obtain optimal results with conventional algorithms such as Integer Linear Programming (ILP). We extend an existing ILP formulation and its ILP heuristic to multi-workflow allocation and propose a Particle Swarm Optimization (PSO) and a Deep Reinforcement Learning (DRL) algorithm. PSO and DRL outperform the ILP heuristic with median optimality gaps of 7.7 % and 35.9 %, respectively, against 100.4 %. DRL has the lowest upper bound on the optimality gap and performs better than PSO for problem sizes of more than 25 tasks, while PSO fails to find a feasible solution for more than 60 tasks. DRL also executes significantly faster, with a maximum of 1 s compared to a maximum of 361 s for PSO. In conclusion, our experiments indicate that PSO is more suitable for smaller and DRL for larger task allocation problems.
KW - Deep Reinforcement Learning
KW - Edge Computing
KW - Integer Linear Programming
KW - Internet of Things (IoT)
KW - Particle Swarm Optimization
KW - Task Allocation
UR - http://www.scopus.com/inward/record.url?scp=85146535734&partnerID=8YFLogxK
U2 - 10.1145/3567445.3571114
DO - 10.1145/3567445.3571114
M3 - Conference contribution
AN - SCOPUS:85146535734
T3 - ACM International Conference Proceeding Series
SP - 239
EP - 247
BT - IoT 2022 - Proceedings of the 12th International Conference on the Internet of Things 2022
PB - Association for Computing Machinery
T2 - 12th International Conference on the Internet of Things, IoT 2022
Y2 - 7 November 2022 through 10 November 2022
ER -