TY - GEN
T1 - Distributed Intelligence for Dynamic Task Migration in the 6G User Plane using Deep Reinforcement Learning
AU - Majumdar, Sayantini
AU - Schwarzmann, Susanna
AU - Trivisonno, Riccardo
AU - Carle, Georg
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - In-Network Computing (INC) is an emerging paradigm. Realizing INC in 6G networks could mean that user plane entities (UPEs) carry out computations on packets while transmitting them. These computations may have specific completion time requirements. Under high compute pressure at one UPE, migrating computations to another UPE may be beneficial to avoid exceeding the completion time requirement. Centralized migration approaches suffer from increased signaling and are prone to reacting too slowly. This paper therefore investigates the applicability of distributed intelligence to the problem of compute task migration in the 6G User Plane. Each UPE is equipped with an intelligent agent that autonomously decides whether computations should be migrated to another UPE. To enable the intelligent agents to learn and apply an optimal task migration policy, we investigate and compare two state-of-the-art Deep Reinforcement Learning (DRL) approaches: Advantage Actor-Critic (A2C) and Double Deep Q-Network (DDQN). We show, via simulations, that both solutions perform near-optimally in terms of the percentage of tasks exceeding their completion time requirement, and that training A2C is at least 60% faster than training DDQN.
AB - In-Network Computing (INC) is an emerging paradigm. Realizing INC in 6G networks could mean that user plane entities (UPEs) carry out computations on packets while transmitting them. These computations may have specific completion time requirements. Under high compute pressure at one UPE, migrating computations to another UPE may be beneficial to avoid exceeding the completion time requirement. Centralized migration approaches suffer from increased signaling and are prone to reacting too slowly. This paper therefore investigates the applicability of distributed intelligence to the problem of compute task migration in the 6G User Plane. Each UPE is equipped with an intelligent agent that autonomously decides whether computations should be migrated to another UPE. To enable the intelligent agents to learn and apply an optimal task migration policy, we investigate and compare two state-of-the-art Deep Reinforcement Learning (DRL) approaches: Advantage Actor-Critic (A2C) and Double Deep Q-Network (DDQN). We show, via simulations, that both solutions perform near-optimally in terms of the percentage of tasks exceeding their completion time requirement, and that training A2C is at least 60% faster than training DDQN.
KW - 6G Network Management
KW - Actor-Critic
KW - Distributed Intelligence
KW - Double Deep Q-Network
KW - Task Migration
UR - http://www.scopus.com/inward/record.url?scp=85198349358&partnerID=8YFLogxK
U2 - 10.1109/NOMS59830.2024.10575830
DO - 10.1109/NOMS59830.2024.10575830
M3 - Conference contribution
AN - SCOPUS:85198349358
T3 - Proceedings of IEEE/IFIP Network Operations and Management Symposium 2024, NOMS 2024
BT - Proceedings of IEEE/IFIP Network Operations and Management Symposium 2024, NOMS 2024
A2 - Hong, James Won-Ki
A2 - Seok, Seung-Joon
A2 - Nomura, Yuji
A2 - Wang, You-Chiun
A2 - Choi, Baek-Young
A2 - Kim, Myung-Sup
A2 - Riggio, Roberto
A2 - Tsai, Meng-Hsun
A2 - dos Santos, Carlos Raniery Paula
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE/IFIP Network Operations and Management Symposium, NOMS 2024
Y2 - 6 May 2024 through 10 May 2024
ER -