Large-Scale and Distributed Optimization
Edited by Giselsson, Pontus; Rantzer, Anders
- Publisher
- Springer International Publishing : Imprint: Springer
- Year
- 2018
- Language
- English
- Pages
- 416
- Series
- Springer Lecture Notes in Mathematics 2227
- Edition
- 1st ed. 2018
- Category
- Library
Synopsis
This book presents tools and methods for large-scale and distributed optimization. Because many methods in "Big Data" fields rely on solving large-scale optimization problems, often in a distributed fashion, this topic has become increasingly important over the last decade. Beyond its coverage of this active research field, the book serves as a valuable source of information for practitioners as well as theoreticians. Large-Scale and Distributed Optimization is a unique combination of contributions from leading experts in the field, who were speakers at the LCCC Focus Period on Large-Scale and Distributed Optimization, held in Lund, 14th-16th June 2017. A source of information and innovative ideas for current and future research, this book will appeal to researchers, academics, and students interested in large-scale optimization.
Table of Contents
Preface......Page 6
Contents......Page 7
Contributors......Page 11
1.1.1 The Traditional Standard......Page 14
1.1.2 The Composite Form......Page 15
1.1.3 Randomized Methods......Page 18
1.1.4 Consensus Optimization......Page 19
1.1.4.1 Distributed Algorithms......Page 20
1.1.6 Omissions......Page 21
References......Page 22
2.1 Introduction......Page 24
Notation......Page 25
2.2 Convex Optimization......Page 26
2.2.1 Interior-Point Methods......Page 27
2.2.2 Parametric QPs......Page 28
2.3.1 Quadratic Program......Page 30
2.3.2 Backward Dynamic Programming......Page 31
2.3.3 Forward Dynamic Programming......Page 34
2.3.4 Parallel Computation......Page 35
2.3.5 Merging of Cliques......Page 37
2.4 Regularized MPC......Page 38
2.4.1 Equivalent QP......Page 39
2.5 Stochastic MPC......Page 40
2.7 Conclusions......Page 43
References......Page 44
3 Decomposition Methods for Large-Scale Semidefinite Programs with Chordal Aggregate Sparsity and Partial Orthogonality......Page 46
3.1 Introduction......Page 47
3.2.1 Essential Notions from Graph Theory......Page 49
3.2.2 Graphs, Sparse Matrices, and Chordal Decomposition......Page 50
3.3.1 Homogeneous Self-dual Embedding......Page 52
3.3.2 A Tailored ADMM Algorithm......Page 54
3.4 Cone Decomposition in Sparse SDPs......Page 55
3.5 Affine Decomposition in SDPs with Partial Orthogonality......Page 59
3.6.1 CDCS......Page 61
3.6.2 Cone Decomposition: The hsde Option......Page 62
3.6.3 Affine Decomposition: The sos Option......Page 64
3.7 Conclusion......Page 65
References......Page 66
4 Smoothing Alternating Direction Methods for Fully Nonsmooth Constrained Convex Optimization......Page 69
4.1 Introduction......Page 70
4.2 Preliminaries: Lagrangian Primal-Dual Formulation......Page 72
4.2.2 Basic Assumptions......Page 73
4.2.4 Technical Assumption......Page 74
4.3 Smoothing the Primal-Dual Gap Function......Page 75
4.4.1 Main Steps......Page 77
4.4.2 Initialization......Page 78
4.4.3 Updating the Parameters......Page 79
4.4.4 The New Smoothing AMA Algorithm......Page 80
4.4.5 Convergence Analysis......Page 81
4.4.6 Special Case: g is Strongly Convex......Page 82
4.4.7 Composite Convex Minimization with Linear Operators......Page 83
4.5.1 The Main Steps of the Smoothing ADMM Method......Page 84
4.5.2 Updating Parameters......Page 85
4.5.4 Convergence Analysis......Page 86
4.5.5 SAMA vs. SADMM......Page 87
4.6 Numerical Evidence......Page 88
4.7 Discussion......Page 90
Proof of Lemma 2: The Primal-Dual Bounds......Page 93
Convergence Analysis of Algorithm 1......Page 94
Proof of Lemma 4: Bound on Gγβ for the First Iteration......Page 96
Proof of Lemma 3: Gap Reduction Condition......Page 97
Proof of Lemma 5: Parameter Updates......Page 99
Proof of Theorem 1: Convergence of Algorithm 1......Page 100
Convergence Analysis of Algorithm 2......Page 101
Proof of Lemma 6: Gap Reduction Condition......Page 102
Proof of Lemma 7: Parameter Updates......Page 104
Proof of Theorem 2: Convergence of Algorithm 2......Page 105
References......Page 106
5.1 Introduction......Page 108
Notation and Background......Page 110
5.2 A Simple Framework for Primal-Dual Algorithms......Page 112
5.3 Simplified Asymmetric Forward-Backward-Adjoint Splitting......Page 118
5.4 A Unified Convergence Analysis for Primal-Dual Algorithms......Page 123
5.4.1 Linear Convergence......Page 124
References......Page 130
6.1 Introduction......Page 132
6.2 Applications......Page 135
6.3 Analysis......Page 138
6.3.1 Possible Generalizations......Page 148
6.4.1 Basis Pursuit......Page 149
6.4.2 Basis Pursuit with Noise......Page 151
6.4.3 Robust Principal Component Analysis......Page 154
References......Page 156
7.1 Introduction......Page 159
7.2 Notation, Background and Preliminary Results......Page 161
7.3.1 Stochastic Forward-Douglas-Rachford Splitting Method......Page 163
7.3.2 Composite Monotone Inclusions Involving Cocoercive Operators......Page 174
7.4 Convergence Rate......Page 179
References......Page 187
8 Mirror Descent and Convex Optimization Problems with Non-smooth Inequality Constraints......Page 190
8.1 Introduction......Page 191
8.2 Mirror Descent Basics......Page 192
8.3.1 Convex Non-smooth Objective Function......Page 194
8.3.2 Strongly Convex Non-smooth Objective Function......Page 199
8.3.3 General Convex Objective Function......Page 203
8.4 Randomization for Constrained Problems......Page 206
8.4.1 Convex Objective Function, Control of Expectation......Page 207
8.4.2 Convex Objective Function, Control of Large Deviation......Page 210
8.4.3 Strongly Convex Objective Function, Control of Expectation......Page 213
8.4.4 Strongly Convex Objective Function, Control of Large Deviation......Page 216
8.5 Discussion......Page 219
References......Page 220
9.1 Introduction......Page 223
9.2.1 Algorithm and Convergence......Page 229
9.2.2 Numerics......Page 232
9.3 Stochastic Variance Reduced Frank-Wolfe (SVRF) Algorithm with Approximate Oracle......Page 237
9.4 Theoretical Guarantees for SVRF......Page 239
9.5 SSVRF......Page 246
9.6 Theoretical Guarantees for SSVRF......Page 249
Appendix......Page 251
References......Page 252
10 Decentralized Consensus Optimization and Resource Allocation......Page 254
10.1 Introduction......Page 255
10.1.1 Literature Review......Page 256
10.1.2 Notation and Basic Settings for Networks......Page 261
10.2.1 Over Time-Invariant Undirected Graphs......Page 263
10.2.2 Over Time-Varying Undirected Graphs......Page 265
10.2.2.1 Geometric Convergence......Page 267
10.2.3 Over Time-Varying Directed Graphs......Page 271
10.2.3.1 Geometric Convergence......Page 272
10.3 Resource Allocation and Its Connection to Consensus Optimization......Page 274
10.3.1 The Resource Allocation and Consensus Optimization Problems......Page 275
10.3.2 The Mirror Relationship......Page 276
10.4 Decentralized Resource Allocation Algorithms......Page 277
10.4.1 Over Time-Invariant Undirected Graphs......Page 278
10.4.1.1 Unconstrained Case: Mirror-EXTRA......Page 279
10.4.2 Over Time-Varying Directed Graphs......Page 281
10.4.3 Decentralized Resource Allocation with Local Couplings......Page 282
10.5.1 Consensus Optimization......Page 284
10.5.2 Resource Allocation......Page 286
References......Page 290
11 Communication-Efficient Distributed Optimization of Self-concordant Empirical Loss......Page 295
11.1 Introduction......Page 296
11.1.1 Communication Efficiency of Distributed Algorithms......Page 297
11.1.2 Outline of Our Approach......Page 299
11.2 Self-concordant Empirical Loss......Page 302
11.3 Inexact Damped Newton Method......Page 305
11.3.2 Scaling for Non-standard Self-concordant Functions......Page 309
11.4 The DiSCO Algorithm......Page 310
11.4.1 Distributed Computing of the Inexact Newton Step......Page 311
11.4.2 DiSCO and Its Communication Efficiency......Page 315
11.4.3 A Simple Variant Without PCG Iterations......Page 317
11.5 Stochastic Analysis......Page 318
11.5.1 Application to Linear Regression......Page 322
11.5.2.1 Logistic Regression......Page 324
11.5.2.2 Smoothed Hinge Loss......Page 325
11.6.1 Experiment Setup......Page 326
11.6.2 Performance Evaluation......Page 327
11.7 Extension to Distributed Composite Minimization......Page 329
11.8 Conclusions......Page 330
Appendix 1: Proof of Theorem 1......Page 331
Appendix 2: Proof of Theorem 2 and Corollary 2......Page 334
Appendix 3: Proof of Lemma 4......Page 336
Appendix 4: Proof of Lemma 5......Page 337
Appendix 5: Proof of Lemma 6......Page 341
Appendix 6: Proof of Theorem 5......Page 343
References......Page 345
12 Numerical Construction of Nonsmooth Control Lyapunov Functions......Page 348
12.1 Introduction......Page 349
12.2 Mathematical Setting......Page 351
12.3.1 Discretization of the State Space......Page 354
12.3.2 Continuous Piecewise Affine Functions......Page 355
12.4.1 The Decrease Condition for Piecewise Affine Functions......Page 357
12.4.2 Semiconcavity Conditions......Page 358
12.4.3 Local Minimum Condition......Page 364
12.4.4 A Finite Dimensional Optimization Problem......Page 365
12.5.1 Approximation of System Parameters and Reformulation of Nonlinear Constraints......Page 367
12.5.2 The Mixed Integer Linear Programming Formulation......Page 369
12.6.1 Artstein's Circles......Page 370
12.6.2 A Two-Dimensional Example with Two Inputs......Page 373
12.7 Conclusions......Page 376
References......Page 377
13 Convergence of an Inexact Majorization-Minimization Method for Solving a Class of Composite Optimization Problems......Page 379
13.1 Introduction......Page 380
13.2.2 Definition......Page 382
13.2.3 Examples......Page 383
13.2.4 Consistent Majorizers of Composite Functions......Page 387
13.3 Stationarity Measures and Conditions......Page 392
13.4.1 The General Scheme......Page 397
13.4.2 Convergence Analysis of the IMM Method......Page 400
13.5 Applying the IMM Method on Compositions of Strongly Convex Majorizers......Page 403
References......Page 413
Subjects
Communications Engineering, Networks; Control and Systems Theory; Mathematical Optimization; Optimization; Telecommunication
Similar Volumes
Decomposition methods aim to reduce large-scale problems to simpler problems. This monograph presents selected aspects of the dimension-reduction problem. Exact and approximate aggregations of multidimensional systems are developed and, from a known model of input-output balance, aggregation metho…
This book reviews and discusses recent advances in the development of methods and algorithms for nonlinear optimization and its applications, focusing on the large-dimensional case, the current forefront of much research. Individual chapters, contributed by eminent authorities, provide an up-to-date…
Large-Scale Nonlinear Optimization reviews and discusses recent advances in the development of methods and algorithms for nonlinear optimization and its applications, focusing on the large-dimensional case, the current forefront of much research. The chapters of t…