Although nonnegative matrix factorization (NMF) favors a part-based and sparse representation of its input, this behavior is not guaranteed. Several extensions to NMF have been proposed to enforce sparseness via the ℓ1-norm, while little work has been done with the more natural sparseness measure, the ℓ0-pseudo-norm. In this work we propose two NMF algorithms with ℓ0-sparseness constraints on the basis and coefficient matrices, respectively. We show that classic NMF is a well-suited tool for ℓ0-sparse NMF algorithms, due to a property we call sparseness maintenance. We apply our algorithms to synthetic and real-world data and compare our results to sparse NMF and nonnegative KSVD.
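The abstract's "sparseness maintenance" refers to a property of the classic multiplicative NMF updates: an entry set to zero stays zero under subsequent updates, so a support fixed by a hard ℓ0 projection is preserved. The sketch below is not the paper's algorithm; it is a minimal illustration of this idea, combining Lee-Seung multiplicative updates with a hard ℓ0 projection that keeps the k largest entries per column of the coefficient matrix. All names (`V`, `W`, `H`, `r`, `k`) are illustrative.

```python
import numpy as np

def l0_sparse_nmf(V, r, k, n_iter=200, eps=1e-9, seed=0):
    """Illustrative l0-constrained NMF: V (m x n) ~= W (m x r) @ H (r x n),
    with at most k nonzeros per column of H. Not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # Lee-Seung multiplicative updates for the Frobenius objective.
        # Multiplicative form means zeros in H remain zero ("sparseness
        # maintenance"): the projected support is kept in later iterations.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        # Hard l0 projection: zero all but the k largest entries per column.
        smallest = np.argsort(H, axis=0)[:-k, :]
        np.put_along_axis(H, smallest, 0.0, axis=0)
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = l0_sparse_nmf(V, r=5, k=2)
```

After the run, every column of `H` has at most `k` nonzero coefficients, while `W` and `H` stay elementwise nonnegative; the analogous ℓ0 constraint on the basis matrix would project columns of `W` instead.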
|Title||Proceedings of the 2010 IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2010|
|Status||Published - 24 Nov 2010|
|Event||2010 IEEE 20th International Workshop on Machine Learning for Signal Processing, MLSP 2010 - Kittila, Finland|
Duration: 29 Aug 2010 → 1 Sep 2010
|Conference||2010 IEEE 20th International Workshop on Machine Learning for Signal Processing, MLSP 2010|
|Period||29/08/10 → 1/09/10|