Abstract
Two lines of work are taking center stage in AI research. On the one hand, the community is making increasing efforts to build models that discard spurious correlations and generalize better in novel test environments. Unfortunately, a hard lesson so far is that no proposal convincingly outperforms a simple empirical risk minimization baseline. On the other hand, large language models (LLMs) have erupted as algorithms able to learn in context, generalizing on the fly to the eclectic contextual circumstances that users enforce by prompting. We argue that context is environment, and posit that in-context learning holds the key to better domain generalization. Via extensive theory and experiments, we show that paying attention to context (unlabeled examples as they arrive) allows our proposed In-Context Risk Minimization (ICRM) algorithm to zoom in on the test-environment risk minimizer, leading to significant out-of-distribution performance improvements. Furthermore, training with context helps the model learn a better featurizer. From all of this, two messages are worth taking home: researchers in domain generalization should consider environment as context, and harness the adaptive power of in-context learning; researchers in LLMs should consider context as environment, to better structure data towards generalization. Code is available at https://github.com/facebookresearch/ICRM.
| Original language | English |
| ---|--- |
| Publication status | Published - 2024 |
| Externally published | Yes |
| Event | 12th International Conference on Learning Representations, ICLR 2024 - Hybrid, Vienna, Austria. Duration: 7 May 2024 → 11 May 2024 |
Conference

| Conference | 12th International Conference on Learning Representations, ICLR 2024 |
| ---|--- |
| Country/Territory | Austria |
| Location | Hybrid, Vienna |
| Period | 7/05/24 → 11/05/24 |