HALF-INVERSE GRADIENTS FOR PHYSICAL DEEP LEARNING

Patrick Schnell, Philipp Holl, Nils Thuerey

Research output: Contribution to conference › Paper › peer-review


Abstract

Recent works in deep learning have shown that integrating differentiable physics simulators into the training process can greatly improve the quality of results. Although this combination represents a more complex optimization task than supervised neural network training, the same gradient-based optimizers are typically employed to minimize the loss function. However, the integrated physics solvers have a profound effect on the gradient flow as manipulating scales in magnitude and direction is an inherent property of many physical processes. Consequently, the gradient flow is often highly unbalanced and creates an environment in which existing gradient-based optimizers perform poorly. In this work, we analyze the characteristics of both physical and neural network optimizations to derive a new method that does not suffer from this phenomenon. Our method is based on a half-inversion of the Jacobian and combines principles of both classical network and physics optimizers to solve the combined optimization task. Compared to state-of-the-art neural network optimizers, our method converges more quickly and yields better solutions, which we demonstrate on three complex learning problems involving nonlinear oscillators, the Schrödinger equation and the Poisson problem.
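As a rough illustration of the half-inversion described above (a sketch, not code from the paper): the update can be built by factoring the Jacobian of the combined network-and-solver output with an SVD and raising its singular values to the power -1/2, halfway between plain gradient descent (power +1, the transpose) and a Gauss-Newton-style full inversion (power -1). The function name, the step size eta, and the small-singular-value cutoff tau below are illustrative assumptions.

    import numpy as np

    def half_inverse_update(jacobian, loss_grad_y, eta=1e-2, tau=1e-6):
        """One half-inverse gradient step, sketched from the abstract.

        jacobian:     (m, n) Jacobian dy/dtheta of the combined
                      network-plus-solver output w.r.t. the parameters.
        loss_grad_y:  (m,) gradient dL/dy of the loss w.r.t. the output.
        eta:          step size (assumed hyperparameter).
        tau:          truncation threshold for tiny singular values,
                      an assumed stabilization common for pseudoinverses.
        """
        # Factor the Jacobian: J = U diag(s) V^T.
        U, s, Vt = np.linalg.svd(jacobian, full_matrices=False)
        # Half-inversion: singular values to the power -1/2 instead of
        # -1 (full inverse) or +1 (transpose); drop values below tau.
        s_half_inv = np.zeros_like(s)
        mask = s > tau
        s_half_inv[mask] = s[mask] ** -0.5
        J_half_inv = Vt.T @ np.diag(s_half_inv) @ U.T  # shape (n, m)
        # Parameter update: delta_theta = -eta * J^(-1/2) @ dL/dy.
        return -eta * J_half_inv @ loss_grad_y

    # Usage example with a random 4-output, 3-parameter Jacobian.
    J = np.random.randn(4, 3)
    dL_dy = np.random.randn(4)
    print(half_inverse_update(J, dL_dy))

Raising the singular values to an intermediate power is what balances the scale-manipulating effect of the physics solver against the conditioning that standard network optimizers expect.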

Original language: English
State: Published - 2022
Event: 10th International Conference on Learning Representations, ICLR 2022 - Virtual, Online
Duration: 25 Apr 2022 – 29 Apr 2022

Conference

Conference: 10th International Conference on Learning Representations, ICLR 2022
City: Virtual, Online
Period: 25/04/22 – 29/04/22
