TY - JOUR
T1 - Optimal fast Johnson–Lindenstrauss embeddings for large data sets
AU - Bamberger, Stefan
AU - Krahmer, Felix
N1 - Publisher Copyright:
© 2021, The Author(s).
PY - 2021/6
Y1 - 2021/6
N2 - Johnson–Lindenstrauss embeddings are widely used to reduce the dimension and thus the processing time of data. To keep the total complexity low, fast algorithms for applying these embeddings are also necessary. To date, such fast algorithms are available either only for a non-optimal embedding dimension or only up to a certain threshold on the number of data points. We address a variant of this problem in which one aims to embed larger subsets of the data set simultaneously. Our method follows an approach by Nelson et al. (New constructions of RIP matrices with fast multiplication and fewer rows. In: Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1515–1528, 2014): a subsampled Hadamard transform maps points into a space of lower, but not yet optimal, dimension; subsequently, a random matrix with independent entries projects to an optimal embedding dimension. For subsets whose size scales at least polynomially in the ambient dimension, the complexity of this method comes close to the number of operations required just to read the data, under mild assumptions on the size of the data set that are considerably less restrictive than in previous works. We also prove a lower bound showing that subsampled Hadamard matrices alone cannot reach an optimal embedding dimension; hence, the second embedding cannot be omitted.
AB - Johnson–Lindenstrauss embeddings are widely used to reduce the dimension and thus the processing time of data. To keep the total complexity low, fast algorithms for applying these embeddings are also necessary. To date, such fast algorithms are available either only for a non-optimal embedding dimension or only up to a certain threshold on the number of data points. We address a variant of this problem in which one aims to embed larger subsets of the data set simultaneously. Our method follows an approach by Nelson et al. (New constructions of RIP matrices with fast multiplication and fewer rows. In: Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1515–1528, 2014): a subsampled Hadamard transform maps points into a space of lower, but not yet optimal, dimension; subsequently, a random matrix with independent entries projects to an optimal embedding dimension. For subsets whose size scales at least polynomially in the ambient dimension, the complexity of this method comes close to the number of operations required just to read the data, under mild assumptions on the size of the data set that are considerably less restrictive than in previous works. We also prove a lower bound showing that subsampled Hadamard matrices alone cannot reach an optimal embedding dimension; hence, the second embedding cannot be omitted.
KW - Fast matrix multiplication
KW - Hadamard transforms
KW - Johnson–Lindenstrauss embeddings
KW - Restricted isometry property
UR - http://www.scopus.com/inward/record.url?scp=85106984099&partnerID=8YFLogxK
U2 - 10.1007/s43670-021-00003-5
DO - 10.1007/s43670-021-00003-5
M3 - Article
AN - SCOPUS:85106984099
SN - 2730-5716
VL - 19
JO - Sampling Theory, Signal Processing, and Data Analysis
JF - Sampling Theory, Signal Processing, and Data Analysis
IS - 1
M1 - 3
ER -
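
A minimal numerical sketch of the two-stage construction described in the abstract, appended after the record in case a concrete picture helps: a subsampled randomized Hadamard transform (with the standard random sign diagonal) maps to an intermediate dimension, then a dense matrix with independent entries projects to the final dimension. The function names, the Gaussian choice for the second stage, the normalizations, and the power-of-two restriction are illustrative assumptions, not the paper's precise construction or parameters.

import numpy as np

def fwht(x):
    # Unnormalized fast Walsh-Hadamard transform along the last axis;
    # the last-axis length must be a power of two (O(d log d) operations).
    n = x.shape[-1]
    y = x.astype(float).reshape(-1, n).copy()
    h = 1
    while h < n:
        y = y.reshape(-1, n // (2 * h), 2, h)
        even = y[:, :, 0, :] + y[:, :, 1, :]
        odd = y[:, :, 0, :] - y[:, :, 1, :]
        y = np.stack((even, odd), axis=2)
        h *= 2
    return y.reshape(x.shape)

def two_stage_embed(X, m_mid, m_out, seed=0):
    # X: (num_points, d) rows to embed; d must be a power of two.
    # m_mid: intermediate (non-optimal) dimension, m_out: final dimension.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    signs = rng.choice([-1.0, 1.0], size=d)          # random sign diagonal D
    rows = rng.choice(d, size=m_mid, replace=False)  # uniformly subsampled rows
    # Stage 1: subsampled randomized Hadamard transform to dimension m_mid;
    # the 1/sqrt(m_mid) scaling preserves squared norms in expectation.
    Y = fwht(X * signs)[:, rows] / np.sqrt(m_mid)
    # Stage 2: dense matrix with independent entries (Gaussian here, as an
    # illustrative choice) projecting to the final dimension m_out.
    G = rng.standard_normal((m_mid, m_out)) / np.sqrt(m_out)
    return Y @ G

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 1024))
    Z = two_stage_embed(X, m_mid=256, m_out=64)
    # Squared norms are preserved in expectation; the printed ratio is near 1.
    print(np.mean(np.linalg.norm(Z, axis=1) ** 2 / np.linalg.norm(X, axis=1) ** 2))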