TY - GEN
T1 - A Minimal Model for Compositional Generalization on gSCAN
AU - Hein, Alice
AU - Diepold, Klaus
N1 - Publisher Copyright:
© 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
AB - Whether neural networks are capable of compositional generalization has been a topic of much debate. Most previous studies on this subject investigate the generalization capabilities of state-of-the-art deep learning architectures. Here, we take a more bottom-up approach and design a minimal model that displays generalization on a compositional benchmark, namely, the gSCAN dataset. The model is a hybrid architecture that combines layers trained with gradient descent and a selective attention mechanism optimized with an evolutionary strategy. The architecture has around 60 times fewer trainable parameters than models previously tested on gSCAN, and achieves comparable accuracies on most test splits, even when trained only on a fraction of the dataset. On adverb-to-verb generalization accuracy, it outperforms previous approaches by 65 to 86%. Through ablation studies, neuron pruning, and error analyses, we show that weight decay and attention mechanisms facilitate compositional generalization by encouraging sparse representations divorced from irrelevant context. We find that the model’s sample efficiency can mainly be attributed to its selective attention mechanism.
UR - http://www.scopus.com/inward/record.url?scp=85152935288&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85152935288
T3 - BlackboxNLP 2022 - Analyzing and Interpreting Neural Networks for NLP, Proceedings of the Workshop
SP - 1
EP - 15
BT - BlackboxNLP 2022 - Analyzing and Interpreting Neural Networks for NLP, Proceedings of the Workshop
PB - Association for Computational Linguistics (ACL)
T2 - 5th Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP 2022, hosted by the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022
Y2 - 8 December 2022
ER -