TY - GEN
T1 - Fast Newton-type methods for the least squares nonnegative matrix approximation problem
AU - Kim, Dongmin
AU - Sra, Suvrit
AU - Dhillon, Inderjit S.
PY - 2007
Y1 - 2007
N2 - Nonnegative matrix approximation is an effective matrix decomposition technique that has proven useful for a wide variety of applications ranging from document analysis and image processing to bioinformatics. A few algorithms exist for nonnegative matrix approximation (NNMA), for example, Lee & Seung's multiplicative updates, alternating least squares, and certain gradient-descent-based procedures. All of these procedures suffer from slow convergence, numerical instabilities, or, at worst, theoretical unsoundness. In this paper we present new and improved algorithms for the least-squares NNMA problem, which are not only theoretically well-founded but also overcome many of the deficiencies of other methods. In particular, we use non-diagonal gradient scaling to obtain rapid convergence. Our methods provide numerical results superior to both Lee & Seung's method and the alternating least squares (ALS) heuristic, which is known to work well in some situations but has no theoretical guarantees (Berry et al. 2006). Our approach extends naturally to include regularization and box constraints without sacrificing convergence guarantees. We present experimental results on both synthetic and real-world datasets to demonstrate the superiority of our methods in terms of both approximation quality and efficiency.
KW - Active sets
KW - Factorization
KW - Least-squares
KW - Nonnegative matrix approximation
KW - Projected Newton methods
UR - http://www.scopus.com/inward/record.url?scp=56449106635&partnerID=8YFLogxK
U2 - 10.1137/1.9781611972771.31
DO - 10.1137/1.9781611972771.31
M3 - Conference contribution
AN - SCOPUS:56449106635
SN - 9780898716306
T3 - Proceedings of the 7th SIAM International Conference on Data Mining
SP - 343
EP - 354
BT - Proceedings of the 7th SIAM International Conference on Data Mining
PB - Society for Industrial and Applied Mathematics Publications
T2 - 7th SIAM International Conference on Data Mining
Y2 - 26 April 2007 through 28 April 2007
ER -