<p>Since publication of the initial papers in 2006, compressed sensing has captured the imagination of the international signal processing community, and the mathematical foundations are nowadays quite well understood.</p><p>Parallel to the progress in mathematics, the potential applications of compressed sensing …</p>
Compressed sensing and its applications: MATHEON Workshop 2013
✍ Edited by Boche H (ed.)
- Publisher: Birkhäuser
- Year: 2015
- Language: English
- Pages: 475
- Series: Applied and Numerical Harmonic Analysis
- Category: Library
✦ Table of Contents
ANHA Series Preface......Page 6
Preface......Page 10
Contents......Page 12
1 A Survey of Compressed Sensing......Page 14
1.2.1 Norms and quasi-norms......Page 15
1.2.2 Random Variables......Page 17
1.3.1 Basis pursuit......Page 19
1.3.2 Null Space Property......Page 23
1.3.3 Restricted Isometry Property......Page 24
1.3.4.1 Concentration inequalities......Page 26
1.3.4.2 RIP for random Gaussian matrices......Page 30
1.3.4.3 Lemma of Johnson and Lindenstrauss......Page 32
1.3.5 Stability and Robustness......Page 33
1.3.6 Optimality of bounds......Page 36
1.4 Extensions......Page 38
1.4.1 Frames and Dictionaries......Page 39
1.4.2 Coherence......Page 40
1.4.3 Stability and Robustness......Page 41
1.4.4 Recovery algorithms......Page 43
1.4.4.2 Greedy algorithms......Page 44
1.4.5 Structured sparsity......Page 47
1.4.6 Compressed Learning......Page 48
References......Page 50
2.1 Introduction......Page 53
2.2 Temporal resolution background......Page 54
2.2.1 Temporal resolution of image and video cameras......Page 55
2.2.2 Motion blur model and camera exposure modulation......Page 57
2.3.1 CS-video model......Page 60
2.3.2.1 Structured illumination......Page 64
2.3.2.2 Per-pixel spatial light modulator coded projections......Page 66
2.3.3 Coded aperture compressive temporal imaging: a passive CS-video technique......Page 69
2.4 CS-video reconstruction algorithms and adaptive sensing frameworks......Page 73
2.4.1 Generalized alternating projection......Page 76
2.4.2 Gaussian mixture models for CS-video......Page 78
2.4.3 Adaptive CS-video strategies......Page 80
2.5 Video frames recovered from a snapshot......Page 81
References......Page 84
3 Compressed Sensing, Sparse Inversion, and Model Mismatch......Page 87
3.1 Introduction......Page 88
3.2 Model Mismatch in Compressed Sensing......Page 90
3.3 A Canonical Problem......Page 92
3.4.1 Inversion in a DFT Basis......Page 94
3.4.2 Inversion in an Overresolved DFT Dictionary......Page 99
3.5 Performance Bounds for Compressed Sensing with Basis Mismatch......Page 104
3.6 Compressed Sensing Off The Grid......Page 105
References......Page 106
4.1 Introduction......Page 108
4.1.2 Structured Signals and Noisy Compressed Sensing......Page 109
4.1.3 LASSO......Page 110
4.1.4 Generalized LASSO......Page 111
4.2 Least Squares......Page 112
4.2.2 Fixed Noise......Page 113
4.3 The Generalized LASSO......Page 114
4.3.1.2 Proximal denoising......Page 115
4.3.1.3 The "merger" LASSO......Page 116
4.3.2 Motivating Examples......Page 117
4.3.2.1 Sparse Signal Estimation......Page 118
4.3.2.2 Low-rank Matrix Estimation......Page 119
4.4 Background......Page 120
4.4.2.2 Convex Cones......Page 121
4.4.2.3 Gaussian Squared Distance......Page 122
4.4.2.4 Gaussian Squared Distance to the Scaled Subdifferential......Page 123
4.4.3.1 Gordon's Lemma......Page 124
4.5 The NSE of Generalized LASSO in Gaussian Noise......Page 125
4.5.1.1 First-Order Approximation......Page 126
4.5.1.2 Importance of σ→0......Page 127
4.5.1.4 Analyzing the Key Optimization......Page 128
4.5.1.6 Synopsis of the Technical Framework......Page 130
4.5.2.1 NSE......Page 131
4.5.3 2-LASSO......Page 132
4.5.3.2 Regions of Operation......Page 133
4.5.3.3 Characterizing the NSE in each Region......Page 134
4.5.4.1 Connection to 2-LASSO......Page 136
4.6.1 C-LASSO......Page 137
4.6.1.3 Proof Overview......Page 138
4.6.2 2-LASSO......Page 140
4.6.2.1 NSE......Page 141
4.6.2.3 Proof Overview......Page 142
4.7 The Worst-Case NSE of Generalized LASSO......Page 144
4.8 Conclusion......Page 145
Appendix......Page 146
Sparse signals......Page 148
References......Page 149
5.1 Introduction......Page 153
5.2 Recovery of wavelet coefficients......Page 154
5.2.1 Universal sensing matrices......Page 155
5.2.2 Sparsity structure dependence and the flip test......Page 156
5.2.3 Structured sparsity......Page 159
5.3.1 Block-diagonality and structured Fourier/Hadamard sampling......Page 161
5.3.2 Incoherent bases and compressed sensing......Page 162
5.3.3 Local incoherence and near block-diagonality of Fourier measurements with wavelets......Page 163
5.4.1 Concepts......Page 166
5.4.2 Main theorem......Page 168
5.4.3 The case of Fourier sampling with wavelets......Page 170
5.6 The Restricted Isometry Property in levels......Page 171
References......Page 175
6 Compressive Sensing in Acoustic Imaging......Page 178
6.1 Introduction......Page 179
6.2.1 Standard Nearfield Acoustic Holography (NAH)......Page 180
6.2.3 Sparse regularization for inverse problems......Page 182
6.2.4 Sensor design for compressive acquisition......Page 183
6.2.5 Practical benefits and sensitivity to hardware implementation......Page 184
6.3 Acoustic imaging scenarios......Page 186
6.3.1.1 Problem formulation......Page 187
6.3.1.2 From beamforming to sparse approaches......Page 188
6.3.2 Sampling the plenacoustic function (scenario b)......Page 189
6.3.3 Medical ultrasound (scenario c)......Page 192
6.3.3.1 Sparse modeling of the ultrasound imaging process......Page 193
6.3.3.3 Subsampling and compressive sensing strategies......Page 194
6.4.1.1 Localization of directive sources (scenario d)......Page 195
6.4.2 Cosparsity......Page 196
6.5 Conclusion / discussion......Page 198
References......Page 200
7 Quantization and Compressive Sensing......Page 202
7.1 Introduction......Page 203
7.2 Fundamentals of Quantization......Page 204
7.2.1 Quantization Performance Bounds......Page 205
7.2.2.1 Measurement and Scalar Quantization......Page 207
7.2.2.2 Scalar Quantization and Oversampling......Page 208
7.2.2.3 Performance Bounds on Sparse Signals......Page 210
7.2.3 Sigma-Delta Quantization......Page 211
7.3 Scalar Quantization and Compressive Sensing......Page 214
7.3.1 Uniform Scalar Quantization......Page 215
7.3.2 Non-Uniform Scalar Quantization......Page 219
7.3.3 Finite-Range Scalar Quantizer Design......Page 222
7.3.4 1-Bit Compressive Sensing......Page 224
7.3.4.1 Theoretical Performance Bounds......Page 225
7.3.4.2 Reconstruction from 1-Bit Measurements......Page 228
7.3.5 Noise, Quantization, and Trade-offs......Page 230
7.4.1 ΣΔ Quantization for Frames......Page 232
7.4.2 Finding the Signal Support......Page 236
7.4.3 Recovering the Signal Coefficients......Page 237
7.5 Discussion and Conclusion......Page 240
References......Page 242
8.1.1 Conceptual compressive learning outline......Page 247
8.1.2 Density mixture estimation......Page 249
8.2.1 Density mixture estimation as a linear inverse problem......Page 250
8.2.2.2 Compressed histogram estimation......Page 252
8.3.1 The compressive operator......Page 253
8.3.2 Proposed instantiation: isotropic Gaussians......Page 255
8.4.1 Reminder of Iterative Hard Thresholding......Page 256
8.4.2.1 The "gradient": continuous version......Page 257
8.4.2.3 Gradient descent step......Page 258
8.4.3 Memory usage......Page 259
8.5.1 Experimental setup......Page 260
8.5.4 Results......Page 262
8.6 Conclusion and outlooks......Page 264
References......Page 265
9 Two Algorithms for Compressed Sensing of Sparse Tensors......Page 267
9.1 Introduction......Page 268
9.2.1 Vector and Matrix Notation......Page 269
9.2.2.1 Compressed Sensing of Matrices - Serial Recovery (CSM-S)......Page 270
9.2.2.2 Compressed Sensing of Matrices - Parallelizable Recovery (CSM-P)......Page 271
9.2.2.3 Simulation Results......Page 272
9.2.3 Recovery of Data in the Presence of Noise......Page 273
9.2.3.1 Compressed Sensing of Matrices - Serial Recovery (CSM-S) in the Presence of Noise......Page 274
9.2.3.2 Compressed Sensing of Matrices - Parallelizable Recovery (CSM-P) in the Presence of Noise......Page 276
9.3.1 A Brief Introduction to Tensors......Page 277
9.3.2.2 Generalized Tensor Compressed Sensing—Parallelizable Recovery (GTCS-P)......Page 280
9.3.2.3 Simulation Results......Page 282
9.3.3.1 Generalized Tensor Compressed Sensing—Serial Recovery (GTCS-S) in the Presence of Noise......Page 284
9.3.3.3 Simulation Results......Page 285
9.3.4 Tensor Compressibility......Page 286
9.4 Conclusion......Page 287
References......Page 288
10.1 Introduction......Page 290
10.1.1 Problem Statement......Page 292
10.1.2 Bilinear Inverse Problems with Sparsity Priors......Page 294
10.2.1 Structure-aware Approximation......Page 296
10.2.2 Bi-Lipschitz Mappings and the RNMP......Page 299
10.2.3 Covering and Entropy......Page 301
10.2.4 Random Sampling Methods......Page 303
10.3 Sparse Convolutions and Stability......Page 308
10.3.1 The RNMP for Sparse Convolutions......Page 309
10.3.2 Implications for Phase Retrieval......Page 316
References......Page 319
11.1 Introduction......Page 321
11.2 Analysis vs. synthesis sparsity......Page 323
11.3.1 Analysis null space property......Page 326
11.3.2 Restricted isometry property......Page 327
11.3.3 Recovery conditions via tangent cones......Page 328
11.3.4 Dual certificates......Page 329
11.4 Recovery from random measurements......Page 332
11.4.1 Uniform recovery via the restricted isometry property......Page 333
11.4.2 Nonuniform recovery from Gaussian measurements......Page 335
11.4.3 Nonuniform recovery from subgaussian measurements......Page 340
11.4.4 Uniform recovery via the Ω-null space property......Page 341
References......Page 343
12.1 Introduction......Page 346
12.1.1 Why structured sparsity?......Page 347
12.2 Preliminaries......Page 349
12.3 Sparse group models......Page 351
12.3.1 The discrete model......Page 353
12.3.2 Convex approaches......Page 354
12.4 Sparse dispersive models......Page 356
12.4.1 The discrete model......Page 357
12.4.2 Convex approaches......Page 359
12.5 Hierarchical sparse models......Page 360
12.5.1 The discrete model......Page 361
12.5.2 Convex approaches......Page 362
12.6 Submodular models......Page 363
12.6.1 The discrete model......Page 364
12.6.2 Convex approaches......Page 365
12.6.3 Examples......Page 367
12.7 Optimization......Page 369
12.7.1.1 Projected gradient descent and matching pursuit variants......Page 372
12.7.2.1 Proximity methods......Page 373
12.7.2.2 Primal–dual and alternating minimization approaches......Page 374
12.8.1 Compressive Imaging......Page 375
12.8.1.1 Results......Page 377
12.8.2 Neuronal spike detection from compressed data......Page 380
12.8.3 Digital Confocal Imaging......Page 383
12.9 Conclusions......Page 385
References......Page 387
13.1 Introduction......Page 393
13.2.1 The Big-Picture Techniques......Page 395
13.2.2 A Brief Introduction to Additive Combinatorics......Page 397
13.3 The Matrix Construction......Page 400
13.3.1 How to Construct B......Page 402
13.3.2 How to Construct A......Page 404
13.4 The Main Result......Page 406
13.5.1 Proof of Lemma 9......Page 412
13.5.2 Proof of Lemma 10......Page 415
References......Page 420
14 Tensor Completion in Hierarchical Tensor Representations......Page 422
14.1 Introduction......Page 423
14.2.1 Tensor product spaces......Page 426
14.2.2 Subspace approximation......Page 427
14.2.3 Hierarchical tensor representation......Page 428
14.2.4 Tensor trains and matrix product representation......Page 430
14.2.5 Matricization of a tensor and its multi-linear rank......Page 432
14.2.6 Higher order singular value decomposition......Page 433
14.2.7 Hierarchical singular value decomposition and truncation......Page 434
14.2.8 Hierarchical tensors as differentiable manifolds......Page 436
14.3.1 The low rank tensor recovery problem......Page 438
14.3.2 Optimization approaches......Page 439
14.3.3 Iterative hard thresholding schemes......Page 440
14.3.4 Restricted isometry property for hierarchical tensors......Page 441
14.3.5 Convergence results......Page 442
14.3.6 Alternating least squares scheme (ALS)......Page 445
14.4 Numerical results......Page 447
14.5 Concluding remarks......Page 449
References......Page 450
15 Compressive Classification: Where Wireless Communications Meets Machine Learning......Page 454
15.1 Introduction......Page 455
15.2 The Compressive Classification Problem......Page 456
15.3 Dualities: Compressive Classification and Multiple-Antenna Wireless Communications......Page 457
15.4.1 The Diversity-Discrimination Tradeoff (DDT)......Page 459
15.4.2 The Diversity Gain at Zero Discrimination Gain......Page 461
15.5 Results......Page 465
15.6 Conclusions......Page 468
References......Page 469
Applied and Numerical Harmonic Analysis (69 volumes)......Page 472
📜 SIMILAR VOLUMES
<p>This contributed volume contains articles written by the plenary and invited speakers from the second international MATHEON Workshop 2015 that focus on applications of compressed sensing. Article authors address their techniques for solving the problems of compressed sensing, as well as connections …</p>
<p>The chapters in this volume highlight the state of the art of compressed sensing and are based on talks given at the third international MATHEON conference on the same topic, held from December 4-8, 2017 at the Technical University in Berlin. In addition to methods in compressed sensing, chapters …</p>