Network Slicing via Transfer Learning aided Distributed Deep Reinforcement Learning

Tianlun Hu, Qi Liao, Qiang Liu, Georg Carle

Publication: Contribution to journal › Conference article › Peer-reviewed

4 citations (Scopus)

Abstract

Deep reinforcement learning (DRL) has been increasingly employed to handle dynamic and complex resource management in network slicing. The deployment of DRL policies in real networks, however, is complicated by heterogeneous cell conditions. In this paper, we propose a novel transfer learning (TL) aided multi-agent deep reinforcement learning (MADRL) approach with inter-agent similarity analysis for inter-cell inter-slice resource partitioning. First, we design a coordinated MADRL method with information sharing to intelligently partition resources among slices and manage inter-cell interference. Second, we propose an integrated TL method to transfer the learned DRL policies among different local agents and thereby accelerate policy deployment. The method combines a new domain and task similarity measurement approach with a new knowledge transfer approach, which together resolve the questions of from whom to transfer and how to transfer. We evaluate the proposed solution with extensive simulations in a system-level simulator and show that our approach outperforms state-of-the-art solutions in terms of performance, convergence speed and sample efficiency. Moreover, by applying TL, we achieve an additional gain of over 27% compared with the coordinated MADRL approach without TL.
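The paper does not publish code; purely as an illustration of the transfer idea summarized above, the minimal sketch below uses hypothetical names (SliceAgent, domain_similarity, transfer_policy) and a toy cosine similarity over per-slice traffic statistics as a stand-in for the paper's domain and task similarity analysis. The most similar trained agent is selected as the source ("from whom to transfer") and its policy parameters initialise the new cell's agent ("how to transfer") before local fine-tuning.

```python
import numpy as np

# Hypothetical per-cell agent: a linear policy mapping slice-load observations
# to a softmax resource split over slices (placeholder for a trained DRL policy).
class SliceAgent:
    def __init__(self, n_obs, n_slices, rng):
        self.W = rng.normal(scale=0.1, size=(n_slices, n_obs))  # policy parameters
        self.domain_stats = None  # summary statistics of local traffic ("domain")

    def act(self, obs):
        logits = self.W @ obs
        e = np.exp(logits - logits.max())
        return e / e.sum()  # fraction of cell resources assigned to each slice

    def fit_domain_stats(self, traffic_trace):
        # Summarise the local domain by per-slice traffic mean and std.
        self.domain_stats = np.concatenate([traffic_trace.mean(0), traffic_trace.std(0)])


def domain_similarity(stats_a, stats_b):
    # Illustrative similarity measure: cosine similarity of traffic statistics.
    return stats_a @ stats_b / (np.linalg.norm(stats_a) * np.linalg.norm(stats_b) + 1e-9)


def transfer_policy(new_agent, trained_agents):
    # "From whom to transfer": the trained agent whose domain is most similar;
    # "how to transfer": copy its policy weights as the new agent's initialisation.
    sims = [domain_similarity(new_agent.domain_stats, a.domain_stats) for a in trained_agents]
    source = trained_agents[int(np.argmax(sims))]
    new_agent.W = source.W.copy()
    return source, max(sims)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_obs, n_slices = 6, 3

    # Three cells whose agents are assumed already trained, with different traffic profiles.
    trained = []
    for scale in (1.0, 2.0, 4.0):
        a = SliceAgent(n_obs, n_slices, rng)
        a.fit_domain_stats(rng.exponential(scale, size=(200, n_slices)))
        trained.append(a)

    # Newly deployed cell: measure its traffic, pick the closest source, copy weights.
    new_agent = SliceAgent(n_obs, n_slices, rng)
    new_agent.fit_domain_stats(rng.exponential(1.8, size=(200, n_slices)))
    source, sim = transfer_policy(new_agent, trained)
    print(f"transferred from source agent with similarity {sim:.3f}")
    print("resource split for a sample observation:", new_agent.act(rng.random(n_obs)))
```

In the actual approach, the copied policy would only serve as a warm start for the local agent, which then continues coordinated MADRL training in its own cell.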

Original language: English
Pages (from - to): 2909-2914
Number of pages: 6
Journal: Proceedings - IEEE Global Communications Conference, GLOBECOM
DOIs
Publication status: Published - 2022
Event: 2022 IEEE Global Communications Conference, GLOBECOM 2022 - Virtual, Online, Brazil
Duration: 4 Dec 2022 – 8 Dec 2022
