FaST-GShare: Enabling Efficient Spatio-Temporal GPU Sharing in Serverless Computing for Deep Learning Inference

Jianfeng Gu, Yichao Zhu, Puxuan Wang, Mohak Chadha, Michael Gerndt

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Serverless computing (FaaS) is widely used for deep learning (DL) inference because of its ease of deployment and pay-per-use billing. However, existing FaaS platforms use GPUs for DL inference in a coarse-grained manner, without spatio-temporal resource multiplexing and isolation, which leads to severe GPU under-utilization, high usage costs, and SLO (Service Level Objective) violations. An efficient, SLO-aware GPU-sharing mechanism is therefore needed in serverless computing to enable cost-effective DL inference. In this paper, we propose FaST-GShare, an efficient FaaS-oriented Spatio-Temporal GPU Sharing architecture for DL inference. Within the architecture, the FaST-Manager limits and isolates spatio-temporal GPU resources for multiplexing. To characterize function performance, the automatic and flexible FaST-Profiler measures function throughput under various resource allocations. Based on the profiling data and the isolation mechanism, the FaST-Scheduler applies heuristic auto-scaling and efficient resource allocation to guarantee function SLOs, and places functions via efficient GPU node selection to maximize GPU usage. Furthermore, model sharing is exploited to mitigate memory contention. Our prototype implementation on the OpenFaaS platform and experiments on an MLPerf-based benchmark show that FaST-GShare ensures resource isolation and function SLOs. Compared to the default time-sharing mechanism, FaST-GShare improves throughput by 3.15x, GPU utilization by 1.34x, and SM (Streaming Multiprocessor) occupancy by 3.13x on average.
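The abstract describes the profiling-then-allocation idea only at a high level. Below is a minimal, hypothetical Python sketch of one piece of that idea: given offline profiling data that maps spatio-temporal GPU allocations to measured throughput, pick the cheapest allocation that still satisfies a function's SLO-derived demand. The Allocation type, the profile values, the headroom factor, and pick_allocation are illustrative assumptions, not the paper's actual FaST-Profiler or FaST-Scheduler implementation.

```python
# Hypothetical sketch of SLO-aware allocation selection from profiling data.
# All names, numbers, and the cost model are illustrative, not from the paper.

from dataclasses import dataclass
from typing import Dict, Optional


@dataclass(frozen=True)
class Allocation:
    sm_fraction: float   # spatial share of the GPU's SMs (0..1)
    time_share: float    # temporal share of the GPU (0..1)

    @property
    def cost(self) -> float:
        # Illustrative cost: the fraction of a whole GPU this allocation occupies.
        return self.sm_fraction * self.time_share


def required_throughput(request_rate_rps: float, headroom: float = 1.2) -> float:
    """Demand estimate (hypothetical heuristic): keep up with the arrival rate
    plus some headroom so the latency SLO is not violated under bursts."""
    return request_rate_rps * headroom


def pick_allocation(profile: Dict[Allocation, float],
                    demand_rps: float) -> Optional[Allocation]:
    """Return the cheapest profiled allocation whose measured throughput
    (requests/s) meets the demand, or None if no allocation suffices."""
    feasible = [alloc for alloc, tput in profile.items() if tput >= demand_rps]
    return min(feasible, key=lambda a: a.cost) if feasible else None


if __name__ == "__main__":
    # Toy offline profile: throughput measured for a few allocations (made up).
    profile = {
        Allocation(0.25, 0.5): 40.0,
        Allocation(0.50, 0.5): 75.0,
        Allocation(0.50, 1.0): 140.0,
        Allocation(1.00, 1.0): 260.0,
    }
    demand = required_throughput(request_rate_rps=60.0)
    print(pick_allocation(profile, demand))
    # -> Allocation(sm_fraction=0.5, time_share=0.5)
```

In the full system described by the abstract, the scheduler would additionally handle auto-scaling, GPU node selection, and enforcement of the chosen limits through the FaST-Manager; this sketch covers only the per-function allocation choice from profiled throughput.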

Original language: English
Title of host publication: 52nd International Conference on Parallel Processing, ICPP 2023 - Main Conference Proceedings
Publisher: Association for Computing Machinery
Pages: 635-644
Number of pages: 10
ISBN (Electronic): 9798400708435
DOIs
State: Published - 7 Aug 2023
Event: 52nd International Conference on Parallel Processing, ICPP 2023 - Salt Lake City, United States
Duration: 7 Aug 2023 - 10 Aug 2023

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 52nd International Conference on Parallel Processing, ICPP 2023
Country/Territory: United States
City: Salt Lake City
Period: 7/08/23 - 10/08/23
