Although nonnegative matrix factorization (NMF) favors a parts-based and sparse representation of its input, this behavior is not guaranteed. Several extensions to NMF have been proposed to introduce sparseness via the ℓ1-norm, while little work has been done using the more natural sparseness measure, the ℓ0-pseudo-norm. In this work we propose two NMF algorithms with ℓ0-sparseness constraints on the basis and the coefficient matrices, respectively. We show that classic NMF is a suitable tool for ℓ0-sparse NMF algorithms, due to a property we call sparseness maintenance. We apply our algorithms to synthetic and real-world data and compare our results to sparse NMF and nonnegative KSVD.
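The idea of combining classic NMF updates with a hard ℓ0 constraint can be sketched as follows. This is a generic illustration, not the paper's actual algorithm: it uses Lee-Seung multiplicative updates and a simple column-wise projection that keeps the k largest entries of the coefficient matrix. Note that multiplicative updates preserve zeros (an entry that becomes zero stays zero), which is one way the "sparseness maintenance" property mentioned above can be understood. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def l0_project_columns(H, k):
    """Keep the k largest entries of each column of H, zero the rest
    (a generic hard l0 projection; not necessarily the paper's method)."""
    out = np.zeros_like(H)
    idx = np.argsort(H, axis=0)[-k:, :]      # row indices of the k largest per column
    cols = np.arange(H.shape[1])
    out[idx, cols] = H[idx, cols]
    return out

def l0_sparse_nmf(V, r, k, n_iter=200, seed=0):
    """Illustrative sketch: Lee-Seung multiplicative NMF updates with an
    l0 projection applied to the coefficient matrix after each H update."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3            # nonnegative basis matrix
    H = rng.random((r, n)) + 1e-3            # nonnegative coefficient matrix
    eps = 1e-9                               # avoids division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # multiplicative update for H
        H = l0_project_columns(H, k)          # at most k nonzeros per column
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # multiplicative update for W
    return W, H
```

Because the multiplicative updates never revive a zeroed entry, the support chosen by the projection is retained by subsequent classic NMF iterations, so the ℓ0 constraint is maintained "for free" once enforced.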
Title of host publication: Proceedings of the 2010 IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2010
Number of pages: 6
Publication status: Published - 24 Nov 2010
Event: 2010 IEEE 20th International Workshop on Machine Learning for Signal Processing, MLSP 2010 - Kittila, Finland
Duration: 29 Aug 2010 → 1 Sep 2010