TY - JOUR
T1 - Dimensionality-reduced subspace clustering
AU - Heckel, Reinhard
AU - Tschannen, Michael
AU - Bölcskei, Helmut
N1 - Publisher Copyright:
© The authors 2017.
PY - 2017/9/1
Y1 - 2017/9/1
N2 - Subspace clustering refers to the problem of clustering unlabeled high-dimensional data points into a union of low-dimensional linear subspaces, whose number, orientations and dimensions are all unknown. In practice, one may have access to dimensionality-reduced observations of the data only, resulting, e.g., from undersampling due to complexity and speed constraints on the acquisition device or mechanism. More pertinently, even if the high-dimensional data set is available, it is often desirable to first project the data points into a lower-dimensional space and to perform clustering there; this reduces storage requirements and computational cost. The purpose of this article is to quantify the impact of dimensionality reduction through random projection on the performance of three subspace clustering algorithms, all of which are based on principles from sparse signal recovery. Specifically, we analyze the thresholding-based subspace clustering (TSC) algorithm, the sparse subspace clustering (SSC) algorithm and an orthogonal matching pursuit variant thereof (SSC-OMP). We find, for all three algorithms, that dimensionality reduction down to the order of the subspace dimensions is possible without incurring significant performance degradation. Moreover, these results are order-wise optimal in the sense that reducing the dimensionality further leads to a fundamentally ill-posed clustering problem. Our findings carry over to the noisy case as illustrated through analytical results for TSC and simulations for SSC and SSC-OMP. Extensive experiments on synthetic and real data complement our theoretical findings.
AB - Subspace clustering refers to the problem of clustering unlabeled high-dimensional data points into a union of low-dimensional linear subspaces, whose number, orientations and dimensions are all unknown. In practice, one may have access to dimensionality-reduced observations of the data only, resulting, e.g., from undersampling due to complexity and speed constraints on the acquisition device or mechanism. More pertinently, even if the high-dimensional data set is available, it is often desirable to first project the data points into a lower-dimensional space and to perform clustering there; this reduces storage requirements and computational cost. The purpose of this article is to quantify the impact of dimensionality reduction through random projection on the performance of three subspace clustering algorithms, all of which are based on principles from sparse signal recovery. Specifically, we analyze the thresholding-based subspace clustering (TSC) algorithm, the sparse subspace clustering (SSC) algorithm and an orthogonal matching pursuit variant thereof (SSC-OMP). We find, for all three algorithms, that dimensionality reduction down to the order of the subspace dimensions is possible without incurring significant performance degradation. Moreover, these results are order-wise optimal in the sense that reducing the dimensionality further leads to a fundamentally ill-posed clustering problem. Our findings carry over to the noisy case as illustrated through analytical results for TSC and simulations for SSC and SSC-OMP. Extensive experiments on synthetic and real data complement our theoretical findings.
KW - Dimensionality reduction
KW - Random projection
KW - Sparse signal recovery
KW - Subspace clustering
UR - http://www.scopus.com/inward/record.url?scp=85071658273&partnerID=8YFLogxK
U2 - 10.1093/imaiai/iaw021
DO - 10.1093/imaiai/iaw021
M3 - Article
AN - SCOPUS:85071658273
SN - 2049-8772
VL - 6
SP - 246
EP - 283
JO - Information and Inference
JF - Information and Inference
IS - 3
ER -