Nimbus: Towards Latency-Energy Efficient Task Offloading for AR Services

Vittorio Cozzolino, Leonardo Tonetto, Nitinder Mohan, Aaron Yi Ding, Jörg Ott

Research output: Contribution to journal › Article › peer-review


Abstract

Widespread adoption of mobile augmented reality (AR) and virtual reality (VR) applications depends on their smoothness and immersiveness. Modern AR applications applying computationally intensive computer vision algorithms can burden today's mobile devices and cause high energy consumption and/or poor performance. To tackle this challenge, it is possible to offload part of the computation to nearby devices at the edge. However, this calls for smart task placement strategies in order to efficiently use the resources of the edge infrastructure. In this paper, we introduce Nimbus, a task placement and offloading solution for a multi-tier, edge-cloud infrastructure where deep learning tasks are extracted from the AR application pipeline and offloaded to nearby GPU-powered edge devices. Our aim is to minimize the latency experienced by end-users and the energy costs on mobile devices. Our multifaceted evaluation, based on benchmarked performance of AR tasks, shows the efficacy of our solution. Overall, Nimbus reduces the task latency by ∼4× and the energy consumption by ∼77% for real-time object detection in AR applications. We also benchmark three variants of our offloading algorithm, disclosing the trade-off between centralized and distributed execution.
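The abstract frames offloading as a joint latency-energy decision per AR task. The sketch below illustrates one simple way such a decision could be modeled: each candidate placement (on-device, edge GPU, cloud) is scored by a normalized weighted sum of estimated latency and mobile-side energy, and the cheapest option is chosen. This is only an illustrative sketch under assumed names and numbers, not Nimbus's actual placement algorithm or cost model.

```python
# Hypothetical latency-energy placement sketch (not the paper's formulation).
from dataclasses import dataclass

@dataclass
class Placement:
    name: str            # e.g. "local", "edge-gpu", "cloud" (assumed labels)
    latency_ms: float    # estimated end-to-end task latency
    energy_mj: float     # estimated energy drawn from the mobile device

def best_placement(options: list[Placement], alpha: float = 0.5) -> Placement:
    """Pick the option with the lowest normalized latency-energy cost.

    alpha trades off latency (alpha -> 1) against mobile energy (alpha -> 0).
    """
    max_lat = max(p.latency_ms for p in options)
    max_en = max(p.energy_mj for p in options)
    def cost(p: Placement) -> float:
        return alpha * p.latency_ms / max_lat + (1 - alpha) * p.energy_mj / max_en
    return min(options, key=cost)

# Example with made-up estimates: offloading object detection tends to win once
# the network round trip is shorter than on-device inference and the radio
# energy is below the on-device GPU energy.
options = [
    Placement("local", latency_ms=120.0, energy_mj=900.0),
    Placement("edge-gpu", latency_ms=35.0, energy_mj=200.0),
    Placement("cloud", latency_ms=80.0, energy_mj=220.0),
]
print(best_placement(options).name)  # -> "edge-gpu" for alpha = 0.5
```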

Original language: English
Pages (from-to): 1530-1545
Number of pages: 16
Journal: IEEE Transactions on Cloud Computing
Volume: 11
Issue number: 2
DOIs
State: Published - 1 Apr 2023

Keywords

  • Edge computing
  • augmented reality
  • cloud computing
  • optimization
  • resource management
