Abstract
In many practical situations, it is highly desirable to estimate an accurate mathematical model of a real system using as few parameters as possible. At the same time, the need for an accurate description of the system behavior without knowing its complete dynamical structure often leads to model parameterizations describing a rich set of possible hypotheses; an often unavoidable choice that suggests the desired parameter estimate should be sparse. An elegant way to impose this expectation of sparsity is to estimate the parameters by penalizing the criterion with the l0 "norm" of the parameters. Due to the non-convex nature of the l0 norm, this penalization is often implemented by solving an optimization program based on a convex relaxation (e.g., l1/LASSO, nuclear norm, etc.). Two difficulties arise when trying to apply these methods: (1) the need to use cross-validation or some related technique for choosing the values of the regularization parameters associated with the l1 penalty; and (2) the requirement that the (unpenalized) cost function be convex. To address the first issue, we propose a new technique for sparse linear regression called SPARSEVA, with close ties to the LASSO (least absolute shrinkage and selection operator), which provides an automatic tuning of the amount of regularization. The second difficulty, which imposes a severe constraint on the types of model structures or estimation methods to which the l1 relaxation can be applied, is addressed by combining SPARSEVA with the Steiglitz-McBride method. To demonstrate the advantages of the proposed approach, a solid theoretical analysis and an extensive simulation study are provided.
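To make the first point concrete, below is a minimal Python sketch (using the cvxpy convex-optimization library) of a SPARSEVA-style constrained program: instead of cross-validating an l1 penalty weight as in the LASSO, the l1 norm of the parameters is minimized subject to the fit staying within a factor (1 + eps) of the least-squares cost, with eps set from known problem sizes. The simulated data, the AIC-like rule eps = 2n/N, and all variable names are illustrative assumptions, not taken verbatim from the paper.

```python
import numpy as np
import cvxpy as cp  # generic convex solver; assumed available

# Hypothetical data: y = Phi @ theta0 + noise, with a sparse theta0
rng = np.random.default_rng(0)
N, n = 200, 20                      # samples, candidate parameters
Phi = rng.standard_normal((N, n))
theta0 = np.zeros(n)
theta0[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = Phi @ theta0 + 0.1 * rng.standard_normal(N)

# Least-squares fit and its residual cost, used as the validation baseline
theta_ls, *_ = np.linalg.lstsq(Phi, y, rcond=None)
V_ls = np.sum((y - Phi @ theta_ls) ** 2)

# SPARSEVA-style program (sketch): minimize the l1 norm subject to the
# cost being within a factor (1 + eps) of the least-squares cost.
eps = 2.0 * n / N                   # assumed AIC-like tuning rule, no cross-validation
theta = cp.Variable(n)
problem = cp.Problem(
    cp.Minimize(cp.norm1(theta)),
    [cp.sum_squares(y - Phi @ theta) <= (1 + eps) * V_ls],
)
problem.solve()
theta_hat = theta.value             # sparse parameter estimate
```

The constrained form makes the automatic tuning explicit: eps depends only on quantities known a priori (the parameter count n and the sample size N), so no held-out data or cross-validation loop is needed to select the amount of regularization.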
| Original language | English |
|---|---|
| Pages (from-to) | 2962–2977 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Automatic Control |
| Volume | 59 |
| Issue number | 11 |
| DOIs | |
| Publication status | Published - 2014 |