Improving Scalability of 6G Network Automation with Distributed Deep Q-Networks

Sayantini Majumdar, Leonardo Goratti, Riccardo Trivisonno, Georg Carle

Research output: Contribution to journal › Conference article › peer-review


Abstract

In recent years, owing to the architectural evolution of 6G towards decentralization, distributed intelligence has been studied extensively for 6G network automation. Distributed intelligence based on Reinforcement Learning (RL), particularly Q-Learning (QL), has been proposed as a potential direction. The distributed framework consists of independent QL agents, each attempting to reach its own individual objective. The agents need a sufficient number of training steps before they converge to optimal performance; only after convergence can they take reliable management actions. However, the scalability of QL could be severely hindered, particularly in terms of convergence time, as the number of QL agents increases. To overcome this scalability issue of QL, in this paper we explore the potential of the Deep Q-Network (DQN) algorithm, a function approximation-based method. Results show that DQN outperforms QL by at least 37% in terms of convergence time. In addition, we highlight that DQN is prone to divergence, which, if solved, could rapidly advance distributed intelligence for 6G.
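To illustrate the function-approximation approach the abstract contrasts with tabular QL, below is a minimal DQN-style sketch: a linear Q-function approximator trained with experience replay and a periodically synced target network. The toy two-state environment, hyperparameters, and class names are illustrative assumptions for exposition only, not the paper's setup.

```python
import numpy as np

# Minimal DQN-style sketch: linear Q-approximator + experience replay
# + target network. The toy MDP and hyperparameters are assumptions,
# not taken from the paper.

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 2, 2

def features(s):
    # One-hot state features; a real DQN would use a deep network here.
    x = np.zeros(N_STATES)
    x[s] = 1.0
    return x

class LinearDQN:
    def __init__(self, lr=0.1, gamma=0.5):
        self.W = np.zeros((N_ACTIONS, N_STATES))  # online weights
        self.W_tgt = self.W.copy()                # target weights
        self.lr, self.gamma = lr, gamma
        self.replay = []

    def q(self, s, W=None):
        W = self.W if W is None else W
        return W @ features(s)

    def act(self, s, eps=0.1):
        # Epsilon-greedy exploration.
        if rng.random() < eps:
            return int(rng.integers(N_ACTIONS))
        return int(np.argmax(self.q(s)))

    def store(self, transition):
        self.replay.append(transition)
        self.replay = self.replay[-1000:]  # bounded replay buffer

    def train_step(self, batch_size=8):
        if len(self.replay) < batch_size:
            return
        idx = rng.choice(len(self.replay), batch_size, replace=False)
        for i in idx:
            s, a, r, s2 = self.replay[i]
            # Bootstrap target uses the frozen target weights.
            target = r + self.gamma * np.max(self.q(s2, self.W_tgt))
            td_err = target - self.q(s)[a]
            self.W[a] += self.lr * td_err * features(s)  # SGD on TD error

    def sync_target(self):
        self.W_tgt = self.W.copy()

# Toy MDP: action 1 always yields reward 1, action 0 yields 0;
# the next state alternates regardless of the action taken.
def env_step(s, a):
    return (1.0 if a == 1 else 0.0), (s + 1) % N_STATES

agent = LinearDQN()
s = 0
for t in range(500):
    a = agent.act(s)
    r, s2 = env_step(s, a)
    agent.store((s, a, r, s2))
    agent.train_step()
    if t % 20 == 0:
        agent.sync_target()
    s = s2

# After training, action 1 should dominate in both states.
print([int(np.argmax(agent.q(s))) for s in range(N_STATES)])
```

The replay buffer and target network are the two stabilization tricks that distinguish DQN from plain QL with function approximation; removing them makes the divergence risk the abstract mentions much more pronounced.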

Original language: English
Pages (from-to): 1265-1270
Number of pages: 6
Journal: Proceedings - IEEE Global Communications Conference, GLOBECOM
State: Published - 2022
Event: 2022 IEEE Global Communications Conference, GLOBECOM 2022 - Virtual, Online, Brazil
Duration: 4 Dec 2022 - 8 Dec 2022

Keywords

  • 6G
  • architecture
  • Deep Learning
  • DQN
  • network automation
  • Reinforcement Learning
  • resource management

