TY - GEN
T1 - Get real: How benchmarks fail to represent the real world
T2 - 2018 Workshop on Testing Database Systems, DBTest 2018
AU - Vogelsgesang, Adrian
AU - Haubenschild, Michael
AU - Finis, Jan
AU - Kemper, Alfons
AU - Leis, Viktor
AU - Muehlbauer, Tobias
AU - Neumann, Thomas
AU - Then, Manuel
N1 - Publisher Copyright:
© 2018 ACM.
PY - 2018/6/15
Y1 - 2018/6/15
N2 - Industrial as well as academic analytics systems are usually evaluated based on well-known standard benchmarks, such as TPC-H or TPC-DS. These benchmarks test various components of the DBMS, including the join optimizer, the implementation of the join and aggregation operators, concurrency control, and the scheduler. However, these benchmarks fall short of evaluating the "real" challenges imposed by modern BI systems, such as Tableau, that emit machine-generated query workloads. This paper reports a comprehensive study based on a set of more than 60k real-world BI data repositories together with their generated query workload. The machine-generated workload posed by BI tools differs from the "hand-crafted" benchmark queries in multiple ways: structurally simple relational operator trees often come with extremely complex scalar expressions, such that expression evaluation becomes the limiting factor. At the same time, we also encountered much more complex relational operator trees than those covered by benchmarks. This long tail in both operator tree and expression complexity is not adequately represented in standard benchmarks. We contribute various statistics gathered from the large dataset, e.g., data type distributions, operator frequency, string length distribution, and expression complexity. We hope our study gives an impetus to database researchers and benchmark designers alike to address the relevant problems in future projects and to enable better database support for data exploration systems, which are becoming increasingly important in the Big Data era.
AB - Industrial as well as academic analytics systems are usually evaluated based on well-known standard benchmarks, such as TPC-H or TPC-DS. These benchmarks test various components of the DBMS, including the join optimizer, the implementation of the join and aggregation operators, concurrency control, and the scheduler. However, these benchmarks fall short of evaluating the "real" challenges imposed by modern BI systems, such as Tableau, that emit machine-generated query workloads. This paper reports a comprehensive study based on a set of more than 60k real-world BI data repositories together with their generated query workload. The machine-generated workload posed by BI tools differs from the "hand-crafted" benchmark queries in multiple ways: structurally simple relational operator trees often come with extremely complex scalar expressions, such that expression evaluation becomes the limiting factor. At the same time, we also encountered much more complex relational operator trees than those covered by benchmarks. This long tail in both operator tree and expression complexity is not adequately represented in standard benchmarks. We contribute various statistics gathered from the large dataset, e.g., data type distributions, operator frequency, string length distribution, and expression complexity. We hope our study gives an impetus to database researchers and benchmark designers alike to address the relevant problems in future projects and to enable better database support for data exploration systems, which are becoming increasingly important in the Big Data era.
UR - http://www.scopus.com/inward/record.url?scp=85063593238&partnerID=8YFLogxK
U2 - 10.1145/3209950.3209952
DO - 10.1145/3209950.3209952
M3 - Conference contribution
AN - SCOPUS:85063593238
T3 - Proceedings of the Workshop on Testing Database Systems, DBTest 2018
BT - Proceedings of the Workshop on Testing Database Systems, DBTest 2018
PB - Association for Computing Machinery, Inc
Y2 - 15 June 2018
ER -