Transformers learn to implement preconditioned gradient descent for in-context learning

Kwangjun Ahn, Xiang Cheng, Hadi Daneshmand, Suvrit Sra

Research output: Contribution to journal › Conference article › peer-review

6 Scopus citations

Abstract

Several recent works demonstrate that transformers can implement algorithms like gradient descent. By a careful construction of weights, these works show that multiple layers of transformers are expressive enough to simulate iterations of gradient descent. Going beyond the question of expressivity, we ask: Can transformers learn to implement such algorithms by training over random problem instances? To our knowledge, we make the first theoretical progress on this question via an analysis of the loss landscape for linear transformers trained over random instances of linear regression. For a single attention layer, we prove that the global minimum of the training objective implements a single iteration of preconditioned gradient descent. Notably, the preconditioning matrix adapts not only to the input distribution but also to the variance induced by data inadequacy. For a transformer with L attention layers, we prove that certain critical points of the training objective implement L iterations of preconditioned gradient descent. Our results call for future theoretical studies on learning algorithms by training transformers.
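To illustrate the kind of construction the abstract refers to, the snippet below is a minimal NumPy sketch (not taken from the paper; the fixed preconditioner A, the step size folded into it, and the token layout are assumptions for illustration). It checks that a single softmax-free linear attention layer with hand-built weights produces the same prediction as one step of preconditioned gradient descent from w = 0 on an in-context linear regression instance.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 20

# Random in-context linear regression instance: y_i = <w_star, x_i> + noise.
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)
x_query = rng.normal(size=d)

# Hypothetical symmetric preconditioner A (in the paper its analogue is learned;
# here it is a fixed matrix with the step size folded in).
A = 0.1 * np.eye(d)

# One step of preconditioned gradient descent from w0 = 0 on the in-context loss
# L(w) = (1/2n) * sum_i (<w, x_i> - y_i)^2, whose gradient at 0 is -(1/n) X^T y.
w1 = A @ (X.T @ y / n)
pred_gd = w1 @ x_query

# Linear (softmax-free) attention layer with hand-constructed weights.
# Context tokens z_i = [x_i; y_i]; query token z_q = [x_query; 0].
Z = np.hstack([X, y[:, None]])          # (n, d+1) context tokens
z_q = np.concatenate([x_query, [0.0]])  # (d+1,) query token

W_K = np.zeros((d + 1, d + 1)); W_K[:d, :d] = np.eye(d)  # keys read out x_i
W_Q = np.zeros((d + 1, d + 1)); W_Q[:d, :d] = A          # query reads out A x_query
W_V = np.zeros((d + 1, d + 1)); W_V[d, d] = 1.0          # values read out y_i

# Updated query token: z_q + (1/n) * sum_i (W_V z_i) * <W_K z_i, W_Q z_q>
attn = (Z @ W_V.T).T @ ((Z @ W_K.T) @ (W_Q @ z_q)) / n
pred_attn = (z_q + attn)[-1]            # prediction sits in the label coordinate

print(np.allclose(pred_gd, pred_attn))  # True: the layer simulates one preconditioned GD step
```

In the paper, the counterpart of A is not hand-built: for a single attention layer it emerges as the global minimizer of the training objective over random regression instances, adapting to the input distribution and to the variance induced by data inadequacy.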

Original language: English
Pages (from-to): 45614-45650
Number of pages: 37
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: 10 Dec 2023 – 16 Dec 2023
