Light attention predicts protein location from the language of life

Hannes Stärk, Christian Dallago, Michael Heinzinger, Burkhard Rost

Research output: Contribution to journal › Article › peer-review

55 Scopus citations

Abstract

Although knowing where a protein functions in a cell is important to characterize biological processes, this information remains unavailable for most known proteins. Machine learning narrows the gap through predictions from expert-designed input features leveraging information from multiple sequence alignments (MSAs) that are resource-expensive to generate. Here, we showcased using embeddings from protein language models for competitive localization prediction without MSAs. Our lightweight deep neural network architecture used a softmax weighted aggregation mechanism with linear complexity in sequence length referred to as light attention. The method significantly outperformed the state-of-the-art (SOTA) for 10 localization classes by about 8 percentage points (Q10). So far, this might be the highest improvement of just embeddings over MSAs. Our new test set highlighted the limits of standard static datasets: while inviting new models, they might not suffice to claim improvements over the SOTA.
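The core idea named in the abstract, a softmax-weighted aggregation over the sequence dimension with linear complexity in sequence length, can be illustrated with a minimal NumPy sketch. This is not the published architecture: the projection names, shapes, and the use of simple matrix products (rather than the paper's exact layers) are illustrative assumptions; only the pooling mechanism itself follows the abstract's description.

```python
import numpy as np

def light_attention_pool(embeddings, w_att, w_val):
    """Softmax-weighted aggregation over sequence length.

    One pass over the L residues (linear in sequence length), collapsing
    per-residue embeddings into a single fixed-size vector.

    embeddings: (L, d) per-residue embeddings from a protein language model
    w_att: (d,) illustrative learned attention projection
    w_val: (d, d) illustrative learned value projection
    """
    scores = embeddings @ w_att                      # (L,) one score per residue
    scores -= scores.max()                           # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over the length axis
    values = embeddings @ w_val                      # (L, d) projected values
    return weights @ values                          # (d,) pooled representation

# Toy usage with random stand-ins for learned weights.
rng = np.random.default_rng(0)
seq_len, dim = 120, 32
emb = rng.normal(size=(seq_len, dim))
pooled = light_attention_pool(emb, rng.normal(size=dim), rng.normal(size=(dim, dim)))
print(pooled.shape)  # (32,) — fixed size regardless of sequence length
```

Because the softmax runs over the sequence axis and each residue contributes one score and one value, cost grows linearly with protein length, in contrast to the quadratic cost of full self-attention.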

Original language: English
Article number: vbab035
Journal: Bioinformatics Advances
Volume: 1
Issue number: 1
DOIs
State: Published - 2021
