This thesis demonstrates techniques that provide faster and more accurate solutions to a variety of problems in machine learning and signal processing. The author proposes a "greedy" algorithm, deriving sparse solutions with guarantees of optimality. The use of this algorithm removes many of the … (A rough sketch of this greedy approach appears after the table of contents below.)
Algorithms for Sparsity-Constrained Optimization
by Sohail Bahmani
- Publisher: Springer International Publishing
- Imprint: Springer
- Year: 2014
- Language: English
- Pages: 124
- Series: Springer Theses: Recognizing Outstanding Ph.D. Research
- Category: Library
Table of Contents
Supervisor's Foreword
Acknowledgements
Parts of This Thesis Have Been Published in the Following Articles
Contents
List of Algorithms
List of Figures
List of Tables
Notations
1 Introduction
1.1 Contributions
1.2 Thesis Outline
References
2 Preliminaries
2.1 Sparse Linear Regression and Compressed Sensing
2.2 Nonlinear Inference Problems
2.2.1 Generalized Linear Models
2.2.2 1-Bit Compressed Sensing
2.2.3 Phase Retrieval
References
3 Sparsity-Constrained Optimization
3.1 Background
3.2 Convex Methods and Their Required Conditions
3.3 Problem Formulation and the GraSP Algorithm
3.3.1 Algorithm Description
3.3.1.1 Variants
3.3.2 Sparse Reconstruction Conditions
3.3.3 Main Theorems
3.4 Example: Sparse Minimization of L2-Regularized Logistic Regression
3.4.1 Verifying SRH for L2-Regularized Logistic Loss
3.4.2 Bounding the Approximation Error
3.5 Simulations
3.5.1 Synthetic Data
3.5.2 Real Data
3.6 Summary and Discussion
References
4 1-Bit Compressed Sensing
4.1 Background
4.2 Problem Formulation
4.3 Algorithm
4.4 Accuracy Guarantees
4.5 Simulations
4.6 Summary
References
5 Estimation Under Model-Based Sparsity
5.1 Background
5.2 Problem Statement and Algorithm
5.3 Theoretical Analysis
5.3.1 Stable Model-Restricted Hessian
5.3.2 Accuracy Guarantee
5.4 Example: Generalized Linear Models
5.4.1 Verifying SMRH for GLMs
5.4.2 Approximation Error for GLMs
5.5 Summary
References
6 Projected Gradient Descent for Lp-Constrained Least Squares
6.1 Background
6.2 Projected Gradient Descent for Lp-Constrained Least Squares
6.3 Discussion
References
7 Conclusion and Future Work
Appendix A Proofs of Chap. 3
A.1 Iteration Analysis for Smooth Cost Functions
A.2 Iteration Analysis for Non-smooth Cost Functions
Appendix B Proofs of Chap. 4
B.1 On Non-convex Formulation of Plan and Vershynin (2013)
Reference
Appendix C Proofs of Chap. 5
Appendix D Proofs of Chap. 6
D.1 Proof of Theorem 6.1
D.2 Lemmas for Characterization of a Projection onto Lp-Balls
References
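For a concrete sense of the greedy approach described above and developed in Chap. 3, below is a minimal sketch of a GraSP-style iteration, specialized to a plain least-squares cost. This is an illustration under stated assumptions, not the thesis's implementation: the function name and fixed iteration count are invented here, the restricted minimization is done with numpy.linalg.lstsq for concreteness, and the general algorithm admits any smooth cost.

```python
import numpy as np

def grasp_least_squares(A, y, s, iters=20):
    """GraSP-style greedy iteration for min ||Ax - y||^2 s.t. ||x||_0 <= s.

    Sketch only: the thesis's GraSP covers general smooth costs; the
    least-squares solve below stands in for the restricted minimization.
    """
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - y)               # gradient of the cost at x
        Z = np.argsort(np.abs(grad))[-2 * s:]  # 2s largest-magnitude gradient coords
        T = np.union1d(Z, np.flatnonzero(x))   # merge with the current support
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]  # minimize over support T
        keep = np.argsort(np.abs(b))[-s:]      # prune to the best s-term approximation
        x = np.zeros(n)
        x[keep] = b[keep]
    return x

# Toy usage: recover a 3-sparse vector from 100 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))
x_true = np.zeros(400)
x_true[[7, 42, 303]] = [1.0, -2.0, 0.5]
x_hat = grasp_least_squares(A, A @ x_true, s=3)
print("max abs error:", np.abs(x_hat - x_true).max())
```

For the squared-error cost used here the update closely mirrors CoSaMP; the point of Chap. 3 is that the same gradient-support pattern extends, with accuracy guarantees, to non-quadratic costs such as the L2-regularized logistic loss.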
SIMILAR VOLUMES
This book, developed through class instruction at MIT over the last 15 years, provides an accessible, concise, and intuitive presentation of algorithms for solving convex optimization problems. It relies on rigorous mathematical analysis, but also aims at an intuitive exposition that makes use of visualization …
Multi-Agent Systems (MAS) have recently attracted a lot of interest because of their ability to model many real-life scenarios where information and control are distributed among a set of different agents. Practical applications include planning, scheduling, distributed control, resource allocation …
Significant research activity has occurred in the area of global optimization in recent years. Many new theoretical, algorithmic, and computational contributions have resulted. Despite the major importance of test problems for researchers, there has been a lack of representative nonconvex test problems …