Protein language-model embeddings for fast, accurate, and alignment-free protein structure prediction

Konstantin Weissenow, Michael Heinzinger, Burkhard Rost

Research output: Contribution to journal › Article › peer-review

76 Scopus citations

Abstract

Advanced protein structure prediction requires evolutionary information, typically in the form of multiple sequence alignments (MSAs) and the evolutionary couplings derived from them, which are not always available. Artificial intelligence (AI)-based predictions inputting only single sequences are faster but have been so inaccurate as to render their speed irrelevant. Here, we describe a competitive method for predicting inter-residue distances (2D structure) that exclusively inputs embeddings from a pre-trained protein language model (pLM), namely ProtT5, computed from single sequences into a convolutional neural network (CNN) with relatively few layers. The major advance came from using the ProtT5 attention heads. Our new method, EMBER2, which never requires any MSAs, performed similarly to methods that fully rely on co-evolution. Although clearly not reaching the accuracy of AlphaFold2, our leaner solution came somewhat close at substantially lower computational cost. By generating protein-specific rather than family-averaged predictions, EMBER2 might better capture some features of particular protein structures. Results from protein engineering and deep mutational scanning (DMS) experiments provided at least a proof of principle for this speculation.
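
The pipeline the abstract describes lends itself to a compact sketch: per-residue ProtT5 embeddings and attention maps are extracted for a single sequence, combined into pairwise features, and scored by a shallow 2D CNN over distance bins. The Python snippet below is a minimal illustration of that idea, not the authors' EMBER2 code; the checkpoint name, the feature construction, and all hyperparameters (channel widths, number of distance bins) are assumptions made for this sketch.

    import torch
    import torch.nn as nn
    from transformers import T5Tokenizer, T5EncoderModel

    MODEL_NAME = "Rostlab/prot_t5_xl_half_uniref50-enc"  # encoder-only ProtT5 checkpoint (assumed choice)
    tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME, do_lower_case=False)
    encoder = T5EncoderModel.from_pretrained(MODEL_NAME).eval()

    def embed_single_sequence(seq: str):
        """Per-residue embeddings (L, 1024) and stacked attention maps (H, L, L) for one sequence."""
        spaced = " ".join(seq)  # ProtT5 expects space-separated residues (map rare AAs to "X")
        batch = tokenizer(spaced, return_tensors="pt")
        with torch.no_grad():
            out = encoder(**batch, output_attentions=True)
        L = len(seq)  # drop the trailing </s> token appended by the tokenizer
        emb = out.last_hidden_state[0, :L]                     # (L, 1024)
        attn = torch.cat(out.attentions, dim=1)[0][:, :L, :L]  # (24 layers x 32 heads = 768, L, L)
        return emb, attn

    class ShallowDistanceCNN(nn.Module):
        """Few-layer 2D CNN mapping pairwise features to distance-bin logits (all sizes assumed)."""
        def __init__(self, in_channels: int, n_bins: int = 42):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 64, kernel_size=1),   # 1x1 conv compresses the feature stack
                nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(64, n_bins, kernel_size=3, padding=1),
            )
        def forward(self, x):  # x: (1, in_channels, L, L)
            logits = self.net(x)
            return 0.5 * (logits + logits.transpose(-1, -2))  # enforce symmetry in (i, j)

    emb, attn = embed_single_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
    L = emb.shape[0]
    # Pairwise features: embeddings of residues i and j concatenated, stacked with attention maps.
    pair = torch.cat([emb.unsqueeze(1).expand(L, L, -1),
                      emb.unsqueeze(0).expand(L, L, -1)], dim=-1)  # (L, L, 2048)
    features = torch.cat([pair.permute(2, 0, 1), attn], dim=0)     # (2816, L, L)
    distogram = ShallowDistanceCNN(features.shape[0])(features.unsqueeze(0))  # (1, n_bins, L, L)

An untrained CNN of this shape only fixes the tensor interfaces; EMBER2's actual network depth, training data, and distance binning differ, but the sketch shows why inference is MSA-free and fast: a single forward pass through the pLM and a small CNN replaces the alignment search entirely.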

Original language: English
Pages (from-to): 1169-1177.e4
Journal: Structure
Volume: 30
Issue number: 8
DOIs
State: Published - 4 Aug 2022

Keywords

  • deep learning
  • machine learning
  • multiple sequence alignments
  • protein language model
  • protein structure prediction
