This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Pathwise coordinate optimization
1,936
Citations
4
Authors
2007
Year
Abstract
We consider “one-at-a-time” coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the L1-penalized regression (lasso) in the literature, but it seems to have been largely ignored. Indeed, it seems that coordinate-wise algorithms are not often used in convex optimization. We show that this algorithm is very competitive with the well-known LARS (or homotopy) procedure in large lasso problems, and that it can be applied to related methods such as the garotte and elastic net. It turns out that coordinate-wise descent does not work in the “fused lasso,” however, so we derive a generalized algorithm that yields the solution in much less time than a standard convex optimizer. Finally, we generalize the procedure to the two-dimensional fused lasso, and demonstrate its performance on some image smoothing problems.
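For the lasso, each coordinate-wise update reduces to a univariate soft-thresholding step. A minimal NumPy sketch of this idea follows; the function names, the 1/(2n) loss scaling, and the fixed iteration count are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def soft_threshold(z, gamma):
    # Soft-thresholding operator: S(z, gamma) = sign(z) * max(|z| - gamma, 0)
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=100):
    """Minimize (1/(2n)) * ||y - X b||^2 + lam * ||b||_1
    by cycling through the coordinates one at a time (a sketch,
    with a fixed number of sweeps instead of a convergence check)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            # Exact univariate minimizer: soft-threshold, then rescale
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta
```

Each inner step solves the one-dimensional subproblem exactly, which is why no step size or line search is needed; this simplicity is what makes the cyclic scheme competitive with path algorithms such as LARS on large problems.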
Related works
Compressed sensing
2006 · 23,004 citations
Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images
1984 · 17,948 citations
Compressed sensing
2004 · 17,216 citations
Regularization Paths for Generalized Linear Models via Coordinate Descent
2010 · 16,830 citations
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
2006 · 15,723 citations