
Benchmarking learned indexes

Research output: Contribution to journal › Article › peer-review

112 Scopus citations

Abstract

Recent advancements in learned index structures propose replacing existing index structures, like B-Trees, with approximate learned models. In this work, we present a unified benchmark that compares well-tuned implementations of three learned index structures against several state-of-the-art "traditional" baselines. Using four real-world datasets, we demonstrate that learned index structures can indeed outperform non-learned indexes in read-only in-memory workloads over a dense array. We investigate the impact of caching, pipelining, dataset size, and key size. We study the performance profile of learned index structures, and build an explanation for why learned models achieve such good performance. Finally, we investigate other important properties of learned index structures, such as their performance in multi-threaded systems and their build times.
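The core idea the abstract refers to can be illustrated with a minimal sketch: a learned index fits a model that maps keys to positions in a sorted, dense array, records the model's maximum prediction error, and then corrects each prediction with a bounded local search. The class below is a hypothetical single-segment linear model for illustration only; the implementations benchmarked in the paper use more sophisticated piecewise models.

```python
from bisect import bisect_left

class LinearLearnedIndex:
    """Minimal single-segment learned index over a sorted, dense array.

    Illustrative sketch only: real learned indexes (e.g. the structures
    benchmarked in the paper) use piecewise or hierarchical models.
    """

    def __init__(self, keys):
        self.keys = keys
        n = len(keys)
        # Least-squares fit of position ≈ slope * key + intercept.
        mean_k = sum(keys) / n
        mean_p = (n - 1) / 2
        var = sum((k - mean_k) ** 2 for k in keys)
        self.slope = (sum((k - mean_k) * (p - mean_p)
                          for p, k in enumerate(keys)) / var) if var else 0.0
        self.intercept = mean_p - self.slope * mean_k
        # The maximum prediction error bounds the correction search,
        # so lookups are exact despite the approximate model.
        self.err = max(abs(self._predict(k) - p)
                       for p, k in enumerate(keys))

    def _predict(self, key):
        return int(round(self.slope * key + self.intercept))

    def lookup(self, key):
        """Return the position of key in the array, or -1 if absent."""
        n = len(self.keys)
        guess = min(max(self._predict(key), 0), n - 1)
        # Binary search only within the model's error window.
        lo = max(guess - self.err, 0)
        hi = min(guess + self.err + 1, n)
        pos = bisect_left(self.keys, key, lo, hi)
        return pos if pos < n and self.keys[pos] == key else -1
```

Because the error bound is computed over all keys at build time, the local search window always contains the true position, which is why an approximate model can still answer lookups exactly.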

Original language: English
Pages (from-to): 1-13
Number of pages: 13
Journal: Proceedings of the VLDB Endowment
Volume: 14
Issue number: 1
DOIs: Yes
State: Published - Sep 2020
Externally published: Yes

