Expressive Sign Equivariant Networks for Spectral Geometric Learning

Derek Lim, Joshua Robinson, Stefanie Jegelka, Haggai Maron

Research output: Contribution to journal › Conference article › peer-review


Abstract

Recent work has shown the utility of developing machine learning models that respect the structure and symmetries of eigenvectors. These works promote sign invariance, since for any eigenvector v the negation −v is also an eigenvector. However, we show that sign invariance is theoretically limited for tasks such as building orthogonally equivariant models and learning node positional encodings for link prediction in graphs. In this work, we demonstrate the benefits of sign equivariance for these tasks. To obtain these benefits, we develop novel sign equivariant neural network architectures. Our models are based on a new analytic characterization of sign equivariant polynomials and thus inherit provable expressiveness properties. Controlled synthetic experiments show that our networks can achieve the theoretically predicted benefits of sign equivariant models.
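To make the abstract's central notion concrete: a function f is sign equivariant when f(−v) = −f(v) for an eigenvector v, whereas a sign invariant function satisfies f(−v) = f(v). One simple way to satisfy the equivariance constraint is to multiply v elementwise by a sign-invariant gate, e.g. one depending on v only through |v|. The sketch below is illustrative only, not the paper's architecture; the names phi, f, W1, and W2 are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a tiny sign-invariant feature map.
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((4, 8))

def phi(v):
    # Depends on v only through |v|, so phi(-v) == phi(v) (sign invariant).
    return np.tanh(np.abs(v) @ W1) @ W2

def f(v):
    # Elementwise product of v with a sign-invariant gate:
    # f(-v) = (-v) * phi(-v) = -(v * phi(v)) = -f(v)  (sign equivariant).
    return v * phi(v)

v = rng.standard_normal(8)
print(np.allclose(f(-v), -f(v)))  # True: sign equivariance holds
```

With multiple eigenvectors stacked as columns of a matrix, the same construction applied columnwise respects independent sign flips of each eigenvector, which is the symmetry the paper's architectures are built around.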

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Externally published: Yes
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: 10 Dec 2023 – 16 Dec 2023
