ML-Enabled Systems Model Deployment and Monitoring: Status Quo and Problems

  • Eduardo Zimelewicz
  • Marcos Kalinowski
  • Daniel Mendez
  • Görkem Giray
  • Antonio Pedro Santos Alves
  • Niklas Lavesson
  • Kelly Azevedo
  • Hugo Villamizar
  • Tatiana Escovedo
  • Helio Lopes
  • Stefan Biffl
  • Juergen Musil
  • Michael Felderer
  • Stefan Wagner
  • Teresa Baldassarre
  • Tony Gorschek

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Scopus citations

Abstract

[Context] Systems that incorporate Machine Learning (ML) models, often referred to as ML-enabled systems, have become commonplace. However, empirical evidence on how ML-enabled systems are engineered in practice is still limited; this is especially true for activities surrounding ML model dissemination. [Goal] We investigate contemporary industrial practices and problems related to ML model dissemination, focusing on the model deployment and monitoring phases of the ML life cycle. [Method] We conducted an international survey to gather practitioner insights on how ML-enabled systems are engineered, collecting a total of 188 complete responses from 25 countries. We analyzed the status quo and the problems reported for the model deployment and monitoring phases: contemporary practices were analyzed using bootstrapping with confidence intervals, and the reported problems were analyzed qualitatively by applying open and axial coding procedures. [Results] Practitioners perceive the model deployment and monitoring phases as relevant and difficult. With respect to model deployment, models are typically deployed as separate services, with limited adoption of MLOps principles. Reported problems include difficulties in designing the architecture of the infrastructure for production deployment and in integrating with legacy applications. Concerning model monitoring, many models in production are not monitored at all. The main monitored aspects are inputs, outputs, and decisions. Reported problems involve the absence of monitoring practices, the need to create custom monitoring tools, and the selection of suitable metrics. [Conclusion] Our results provide a better understanding of the practices adopted and the problems faced in industry, and support guiding ML deployment and monitoring research in a problem-driven manner.

Original language: English
Title of host publication: Software Quality as a Foundation for Security - 16th International Conference on Software Quality, SWQD 2024, Proceedings
Editors: Peter Bludau, Rudolf Ramler, Dietmar Winkler, Johannes Bergsmann
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 112-131
Number of pages: 20
ISBN (Print): 9783031562808
DOIs
State: Published - 2024
Event: 16th International Conference on Software Quality, SWQD 2024 - Vienna, Austria
Duration: 23 Apr 2024 - 25 Apr 2024

Publication series

Name: Lecture Notes in Business Information Processing
Volume: 505 LNBIP
ISSN (Print): 1865-1348
ISSN (Electronic): 1865-1356

Conference

Conference: 16th International Conference on Software Quality, SWQD 2024
Country/Territory: Austria
City: Vienna
Period: 23/04/24 - 25/04/24

Keywords

  • Deployment
  • Machine Learning
  • Monitoring
