chemtrain: Learning deep potential models via automatic differentiation and statistical physics

Paul Fuchs, Stephan Thaler, Sebastien Röcken, Julija Zavadlav

Publication: Contribution to journal › Article › Peer review

1 citation (Scopus)

Abstract

Neural networks (NNs) are effective models for refining the accuracy of molecular dynamics, opening up new fields of application. Typically trained bottom-up, atomistic NN potential models can reach first-principles accuracy, while coarse-grained implicit-solvent NN potentials surpass classical continuum solvent models. However, overcoming the limitations of the costly generation of accurate reference data and the data inefficiency of common bottom-up training demands the efficient incorporation of data from many sources. This paper introduces chemtrain, a framework for learning sophisticated NN potential models through customizable training routines and advanced training algorithms. These routines can combine multiple top-down and bottom-up algorithms, e.g., to incorporate both experimental and simulation data or to pre-train potentials with less costly algorithms. chemtrain provides an object-oriented high-level interface to simplify the creation of custom routines. On the lower level, chemtrain relies on JAX to compute gradients and scale computations to the available resources. We demonstrate the simplicity and importance of combining multiple algorithms with two examples: parametrizing an all-atomistic model of titanium and a coarse-grained implicit-solvent model of alanine dipeptide.
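To make these building blocks concrete, the following is a minimal, self-contained JAX sketch (not chemtrain's actual API; all names and the toy potential are illustrative) of two ingredients mentioned above: deriving forces from a potential by automatic differentiation, and evaluating a bottom-up force-matching loss against reference forces, one of the algorithms detailed in the program summary below.

import jax
import jax.numpy as jnp

def potential(params, positions):
    # Toy differentiable pair potential U(R); stands in for any
    # JAX-compatible NN potential model.
    n = positions.shape[0]
    diff = positions[:, None, :] - positions[None, :, :]
    r2 = jnp.sum(diff ** 2, axis=-1) + jnp.eye(n)  # eye avoids 0 on the diagonal
    return params["w"] * jnp.sum(jnp.triu(1.0 / r2, k=1))

# Forces follow from automatic differentiation: F = -dU/dR.
forces_fn = jax.grad(lambda p, R: -potential(p, R), argnums=1)

def force_matching_loss(params, batch_positions, batch_ref_forces):
    # Mean squared deviation from reference (e.g., ab initio) forces.
    pred = jax.vmap(forces_fn, in_axes=(None, 0))(params, batch_positions)
    return jnp.mean((pred - batch_ref_forces) ** 2)

# One gradient evaluation on synthetic data to show the training signal flows.
key = jax.random.PRNGKey(0)
R = jax.random.normal(key, (8, 5, 3))   # 8 frames, 5 particles, 3D
F_ref = jnp.zeros((8, 5, 3))            # placeholder reference forces
params = {"w": jnp.array(1.0)}
loss, grads = jax.value_and_grad(force_matching_loss)(params, R, F_ref)

Because the force function is itself obtained by jax.grad, the same pattern extends to other derivative quantities such as virials.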
Program summary

Program Title: chemtrain
CPC Library link to program files: https://doi.org/10.17632/m6fxmcmfzz.1
Developer's repository link: https://github.com/tummfm/chemtrain
Licensing provisions: Apache-2.0
Programming language: Python

Nature of problem: Neural network (NN) potentials provide the means to accurately model high-order many-body interactions between particles on a molecular level. Since their computational cost scales linearly with system size, their high expressivity opens up new possibilities for efficiently modeling systems at higher precision without resorting to expensive, finer-scale computational methods. However, as is common for data-driven approaches, the success of NN potentials depends crucially on the availability of accurate training data. State-of-the-art bottom-up trained models can reproduce ab initio computations to within the accuracy of the reference method itself, yet can still deviate from experimental measurements. Including more accurate reference data can, in principle, resolve this issue, but generating sufficient data for increasingly large systems is infeasible even with less precise methods. Supplementing the training procedure with more data-efficient methods can limit the required training data [1]. In addition, the models can be fully or partially trained on macroscopic reference data [2,3]. Therefore, a framework supporting a combination of multiple training algorithms could further expedite the success of NN potential models in various disciplines.

Solution method: We propose a framework that enables the development of NN potential models through customizable training routines. The framework provides the top-down algorithm Differentiable Trajectory Reweighting [2] (see the sketch after the references below) and the bottom-up learning algorithms Force Matching [1] and Relative Entropy Minimization [1]. A high-level object-oriented API simplifies combining multiple algorithms and setting up sophisticated training routines such as active learning. At a modularly structured lower level, the framework follows a functional programming paradigm, relying on the machine learning framework JAX [4] to simplify the creation of algorithms from standard building blocks, e.g., by deriving microscopic quantities such as forces and virials from any JAX-compatible NN potential model and by scaling computations to the available resources.

References:
[1] S. Thaler, M. Stupp, J. Zavadlav, Deep coarse-grained potentials via relative entropy minimization, J. Chem. Phys. 157 (24) (2022) 244103, https://doi.org/10.1063/5.0124538.
[2] S. Thaler, J. Zavadlav, Learning neural network potentials from experimental data via Differentiable Trajectory Reweighting, Nat. Commun. 12 (1) (2021) 6884, https://doi.org/10.1038/s41467-021-27241-4.
[3] S. Röcken, J. Zavadlav, Accurate machine learning force fields via experimental and simulation data fusion, npj Comput. Mater. 10 (1) (2024) 1–10, https://doi.org/10.1038/s41524-024-01251-4.
[4] R. Frostig, M.J. Johnson, C. Leary, Compiling machine learning programs via high-level tracing, in: SysML Conference, 2018.
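To complement the bottom-up sketch above with the top-down direction, here is a bare-bones sketch of the reweighting idea behind Differentiable Trajectory Reweighting [2]; the names and the toy potential are hypothetical, not chemtrain's implementation. Thermodynamic perturbation weights turn an ensemble average over stored frames into a differentiable function of the potential parameters, so a loss against an experimental target can be minimized by gradient descent.

import jax
import jax.numpy as jnp

BETA = 1.0  # inverse temperature 1/(kB*T), reduced units

def energy_fn(params, frame):
    # Hypothetical potential evaluated on a stored configuration.
    return params["k"] * jnp.sum(frame ** 2)

def reweighted_average(params, frames, energies_ref, observables):
    # Perturbation weights w_i proportional to exp(-beta * (U_theta(x_i) - U_ref(x_i))).
    u_theta = jax.vmap(lambda x: energy_fn(params, x))(frames)
    weights = jax.nn.softmax(-BETA * (u_theta - energies_ref))  # normalized, stable
    return jnp.sum(weights * observables)

def top_down_loss(params, frames, energies_ref, observables, target):
    # Squared deviation of the reweighted ensemble average from an
    # experimental target value.
    avg = reweighted_average(params, frames, energies_ref, observables)
    return (avg - target) ** 2

# Toy data: frames sampled under reference parameters k = 1.0.
frames = jax.random.normal(jax.random.PRNGKey(1), (100, 3))
observables = jnp.sum(frames ** 2, axis=-1)  # per-frame observable O(x_i)
energies_ref = jax.vmap(lambda x: energy_fn({"k": 1.0}, x))(frames)

grads = jax.grad(top_down_loss)(
    {"k": jnp.array(1.2)}, frames, energies_ref, observables, 2.5)

The published method additionally regenerates the reference trajectory whenever the effective sample size of the weights degrades; that bookkeeping is omitted here for brevity.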

Original language: English
Article number: 109512
Journal: Computer Physics Communications
Volume: 310
DOIs
Publication status: Published - May 2025
