TY - JOUR
T1 - Light attention predicts protein location from the language of life
AU - Stärk, Hannes
AU - Dallago, Christian
AU - Heinzinger, Michael
AU - Rost, Burkhard
N1 - Publisher Copyright:
© 2021 The Author(s). Published by Oxford University Press.
PY - 2021
Y1 - 2021
AB - Although knowing where a protein functions in a cell is important for characterizing biological processes, this information remains unavailable for most known proteins. Machine learning narrows the gap through predictions from expert-designed input features that leverage information from multiple sequence alignments (MSAs), which are resource-expensive to generate. Here, we showed that embeddings from protein language models enable competitive localization prediction without MSAs. Our lightweight deep neural network architecture used a softmax-weighted aggregation mechanism with linear complexity in sequence length, referred to as light attention. The method significantly outperformed the state-of-the-art (SOTA) for 10 localization classes by about 8 percentage points (Q10). To date, this might be the largest improvement achieved by embeddings alone over MSA-based input. Our new test set highlighted the limits of standard static datasets: while inviting new models, they might not suffice to claim improvements over the SOTA.
UR - http://www.scopus.com/inward/record.url?scp=85151648383&partnerID=8YFLogxK
U2 - 10.1093/bioadv/vbab035
DO - 10.1093/bioadv/vbab035
M3 - Article
AN - SCOPUS:85151648383
SN - 2635-0041
VL - 1
JO - Bioinformatics Advances
JF - Bioinformatics Advances
IS - 1
M1 - vbab035
ER -
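
The abstract above describes a softmax-weighted aggregation ("light attention") that pools per-residue language-model embeddings into a fixed-size vector at a cost linear in sequence length. The NumPy sketch below illustrates that pooling idea only; the function name light_attention_pool, the scoring vector w_score, and all shapes are illustrative assumptions and not the authors' implementation.

```python
# Hypothetical sketch of softmax-weighted ("light attention") pooling:
# per-residue embeddings of shape (L, d) are collapsed into one fixed-size
# vector with cost O(L * d), i.e. linear in the sequence length L.
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def light_attention_pool(embeddings, w_score):
    """Aggregate per-residue embeddings (L, d) into one (d,) vector.

    embeddings : (L, d) array of language-model residue embeddings.
    w_score    : (d,) scoring vector (random here; learned in practice).
    """
    scores = embeddings @ w_score      # (L,) one scalar score per residue
    alpha = softmax(scores)            # (L,) attention weights summing to 1
    return alpha @ embeddings          # (d,) weighted average of residues

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L, d = 350, 1024                   # e.g. a 350-residue protein
    X = rng.normal(size=(L, d))        # stand-in for real embeddings
    w = rng.normal(size=d)             # stand-in for a learned parameter
    pooled = light_attention_pool(X, w)
    print(pooled.shape)                # (1024,) regardless of L
```

The fixed-size pooled vector can then be fed to a small classifier over the 10 localization classes; because the weighted sum touches each residue once, doubling the protein length only doubles the pooling cost.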