Hardware Accelerated ATLAS Workloads on the WLCG Grid

A. C. Forti, L. Heinrich, M. Guth

Research output: Contribution to journal › Conference article › peer-review


Abstract

In recent years the use of machine learning techniques within data-intensive sciences in general, and high-energy physics in particular, has rapidly increased, in part due to the availability of large datasets on which such algorithms can be trained, as well as suitable hardware, such as graphics or tensor processing units, which greatly accelerate the training and execution of such algorithms. Within the HEP domain, the development of these techniques has so far relied on resources external to the primary computing infrastructure of the WLCG (Worldwide LHC Computing Grid). In this paper we present an integration of hardware-accelerated workloads into the Grid through the declaration of dedicated queues with access to hardware accelerators and the use of Linux container images holding a modern data science software stack. A frequent use case in the development of machine learning algorithms is the optimization of neural networks through the tuning of their hyper-parameters (HPs). This often requires training and comparing a large number of network variants, which for some optimization schemes can be done in parallel, a workload well suited to Grid computing. An example of such a hyper-parameter scan on Grid resources for the case of flavor tagging within ATLAS is presented.
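The scan pattern the abstract describes can be illustrated with a minimal sketch: each hyper-parameter configuration is an independent training job, so the whole grid parallelizes trivially. All names below (`train_and_score`, `scan`, the toy objective) are illustrative assumptions, not the ATLAS tooling; a thread pool stands in for Grid job submission.

```python
# Hedged sketch of a parallel hyper-parameter grid scan.
# Each grid point is independent, which is what makes the
# workload map naturally onto per-job Grid queues.
from itertools import product
from concurrent.futures import ThreadPoolExecutor

def train_and_score(params):
    """Stand-in for training one network variant; returns (score, params).

    Toy objective (an assumption for illustration): the best variant
    has learning rate 1e-3 and 3 hidden layers.
    """
    lr, layers = params
    score = -((lr - 1e-3) ** 2 * 1e6 + (layers - 3) ** 2)
    return score, params

def scan(learning_rates, layer_counts, workers=4):
    """Evaluate every (lr, layers) combination in parallel, keep the best."""
    grid = list(product(learning_rates, layer_counts))
    with ThreadPoolExecutor(workers) as pool:  # on the Grid: one job per point
        results = list(pool.map(train_and_score, grid))
    return max(results)  # tuples compare by score first

if __name__ == "__main__":
    best_score, best_params = scan([1e-4, 1e-3, 1e-2], [1, 3, 5])
    print(best_score, best_params)
```

In the paper's setting, the per-point work would instead be submitted to dedicated accelerator-equipped Grid queues inside a container image carrying the data science stack.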

Original language: English
Article number: 012059
Journal: Journal of Physics: Conference Series
Volume: 1525
Issue number: 1
DOIs
State: Published - 7 Jul 2020
Externally published: Yes
Event: 19th International Workshop on Advanced Computing and Analysis Techniques in Physics Research, ACAT 2019 - Saas-Fee, Switzerland
Duration: 11 Mar 2019 – 15 Mar 2019
