Bayesian Approach to Inverse Problems
Edited by Jérôme Idier
- Year: 2008
- Language: English
- Pages: 384
- Edition: 1
Synopsis
Many scientific, medical or engineering problems raise the issue of recovering physical quantities from indirect measurements; for instance, detecting or quantifying flaws or cracks within a material from acoustic or electromagnetic measurements at its surface is an essential problem of non-destructive evaluation. The concept of inverse problems originates precisely from the idea of inverting the laws of physics to recover a quantity of interest from measurable data. Unfortunately, most inverse problems are ill-posed, which means that precise and stable solutions are not easy to devise. Regularization is the key concept for solving inverse problems. The goal of this book is to deal with inverse problems and regularized solutions using Bayesian statistical tools, with a particular view to signal and image estimation. The first three chapters introduce the theoretical notions that make it possible to cast inverse problems within a mathematical framework. The next three chapters address the fundamental inverse problem of deconvolution in a comprehensive manner. Chapters 7 and 8 deal with advanced statistical questions linked to image estimation. In the last five chapters, the main tools introduced in the previous chapters are put into practice in important application areas, such as astronomy and medical imaging.
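The synopsis turns on two ideas: ill-posedness (a naive inversion of the physics amplifies measurement noise uncontrollably) and regularization (a penalized criterion restores a stable solution). A minimal numerical sketch of this, using a hypothetical 1D Gaussian-blur deconvolution problem of my own construction (not an example from the book), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Hypothetical forward model: 1D Gaussian blur, a classic deconvolution setting.
t = np.arange(n)
H = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
H /= H.sum(axis=1, keepdims=True)          # each row integrates to 1

x_true = np.zeros(n)
x_true[30:70] = 1.0                        # simple boxcar signal to recover
y = H @ x_true + 1e-3 * rng.standard_normal(n)   # blurred data, slight noise

# Naive (generalized) inversion: tiny singular values of H are inverted
# against the noise, so the estimate is wildly unstable.
x_naive = np.linalg.lstsq(H, y, rcond=None)[0]

# Tikhonov regularization: minimize ||y - Hx||^2 + lam * ||x||^2,
# i.e. solve the normal equations (H^T H + lam I) x = H^T y.
lam = 1e-3
x_reg = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(err_reg < err_naive)                 # True: regularization stabilizes
```

The choice of the regularization weight `lam` is itself a nontrivial question, addressed in Chapter 2 of the book ("L-curve" method, cross-validation) and, from the Bayesian viewpoint, in Chapter 3 as the choice of hyperparameters.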
Table of Contents
Table of Contents......Page 8
Introduction......Page 18
Part I. Fundamental Problems and Tools......Page 26
1.1. Introduction......Page 28
1.2. Basic example......Page 29
1.3. Ill-posed problem......Page 33
1.3.1. Case of discrete data......Page 34
1.3.2. Continuous case......Page 35
1.4. Generalized inversion......Page 37
1.4.3. Example......Page 38
1.5. Discretization and conditioning......Page 39
1.6. Conclusion......Page 41
1.7. Bibliography......Page 42
2.1. Regularization......Page 44
2.1.1.1. Truncated singular value decomposition......Page 45
2.1.1.3. Iterative methods......Page 46
2.1.2. Minimization of a composite criterion......Page 47
2.1.2.1. Euclidean distances......Page 48
2.1.2.2. Roughness measures......Page 49
2.1.2.4. Kullback pseudo-distance......Page 50
2.2.1. Criterion minimization for inversion......Page 51
2.2.2.1. Non-iterative techniques......Page 52
2.2.2.2. Iterative techniques......Page 53
2.2.3. The convex case......Page 54
2.2.4. General case......Page 55
2.3.2. "L-curve" method......Page 56
2.3.3. Cross-validation......Page 57
2.4. Bibliography......Page 59
3.1. Inversion and inference......Page 62
3.2. Statistical inference......Page 63
3.2.1. Noise law and direct distribution for data......Page 64
3.2.2. Maximum likelihood estimation......Page 66
3.3. Bayesian approach to inversion......Page 67
3.4. Links with deterministic methods......Page 69
3.5. Choice of hyperparameters......Page 70
3.6. A priori model......Page 71
3.7. Choice of criteria......Page 73
3.8.1. Statistical properties of the solution......Page 74
3.8.2. Calculation of marginal likelihood......Page 76
3.8.3. Wiener filtering......Page 77
3.9. Bibliography......Page 79
Part II. Deconvolution......Page 82
4.1. Introduction......Page 84
4.2.1. Inverse filtering......Page 85
4.2.2. Wiener filtering......Page 87
4.3.1. Choice of a quadrature method......Page 88
4.3.2. Structure of observation matrix H......Page 90
4.3.4. Problem conditioning......Page 92
4.3.4.2. Case of the Toeplitz matrix......Page 93
4.3.5. Generalized inversion......Page 94
4.4.1. Preliminary choices......Page 95
4.4.2. Matrix form of the estimate......Page 96
4.4.3. Hunt's method (periodic boundary hypothesis)......Page 97
4.4.4. Exact inversion methods in the stationary case......Page 99
4.4.6.1. Compromise between bias and variance in 1D deconvolution......Page 101
4.4.6.2. Results for 2D processing......Page 103
4.5.1. Kalman filtering......Page 105
4.5.2. Degenerate state model and recursive least squares......Page 107
4.5.3. Autoregressive state model......Page 108
4.5.3.1. Initialization......Page 109
4.5.3.2. Criterion minimized by Kalman smoother......Page 110
4.5.4. Fast Kalman filtering......Page 111
4.5.5.1. Asymptotic Kalman filtering......Page 113
4.5.7. Case of non-stationary signals......Page 114
4.6. Conclusion......Page 115
4.7. Bibliography......Page 116
5.1. Introduction......Page 120
5.2. Penalization of reflectivities, L2LP/L2Hy deconvolutions......Page 122
5.2.1. Quadratic regularization......Page 124
5.2.2. Non-quadratic regularization......Page 125
5.2.3. L2LP or L2Hy deconvolution......Page 126
5.3.2. Various strategies for estimation......Page 127
5.3.3. General expression for marginal likelihood......Page 128
5.3.4. An iterative method for BG deconvolution......Page 129
5.3.5. Other methods......Page 131
5.4.1. Nature of the solutions......Page 133
5.4.2. Setting the parameters......Page 135
5.5. Extensions......Page 136
5.5.2. Estimation of the impulse response......Page 137
5.6. Conclusion......Page 139
5.7. Bibliography......Page 140
6.1. Introduction......Page 144
6.2.1.1. Case of a monovariate signal......Page 145
6.2.1.2. Multivariate extensions......Page 146
6.2.2. Connection with image processing by linear PDE......Page 147
6.2.3. Limits of Tikhonov's approach......Page 148
6.3.1. Principle......Page 151
6.3.2. Disadvantages......Page 152
6.4. Non-quadratic approach......Page 153
6.4.1. Detection-estimation and non-convex penalization......Page 157
6.4.2. Anisotropic diffusion by PDE......Page 158
6.5. Half-quadratic augmented criteria......Page 159
6.5.1. Duality between non-quadratic criteria and HQ criteria......Page 160
6.5.2.1. Principle of relaxation......Page 161
6.6.1. Calculation of the solution......Page 162
6.6.2. Example......Page 164
6.7. Conclusion......Page 167
6.8. Bibliography......Page 168
7.1. Introduction......Page 174
7.2. Bayesian statistical framework......Page 175
7.3. Gibbs-Markov fields......Page 176
7.3.1.1. Definition......Page 177
7.3.1.2. Trivial examples......Page 178
7.3.1.4. Markov chains......Page 179
7.3.2.1. Neighborhood relationship......Page 180
7.3.2.2. Definition of a Markov field......Page 181
7.3.2.4. Hammersley-Clifford theorem......Page 182
7.3.3. Posterior law of a GMRF......Page 183
7.3.4.1. Pixels with discrete values and label fields for classification......Page 184
7.3.4.2. Gaussian GMRF......Page 185
7.3.4.3. Edge variables, composite GMRF......Page 186
7.3.4.4. Interactive edge variables......Page 187
7.4.1. Statistical tools......Page 188
7.4.2. Stochastic sampling......Page 191
7.4.2.1. Iterative sampling methods......Page 192
7.4.2.2. Monte Carlo method of the MCMC kind......Page 195
7.4.2.3. Simulated annealing......Page 196
7.5. Conclusion......Page 197
7.6. Bibliography......Page 198
8.1. Introduction and statement of problem......Page 200
8.2.1. Likelihood properties......Page 202
8.2.2.2. Importance sampling......Page 203
8.2.3.1. Encoding methods......Page 205
8.2.3.2. Pseudo-likelihood......Page 206
8.2.3.3. Mean field......Page 207
8.3.1. Statement of problem......Page 208
8.3.2. EM algorithm......Page 209
8.3.3. Application to estimation of the parameters of a GMRF......Page 210
8.3.4. EM algorithm and gradient......Page 211
8.3.5. Linear GMRF relative to hyperparameters......Page 213
8.3.6.1. Generalized maximum likelihood......Page 215
8.3.6.2. Full Bayesian approach......Page 216
8.4. Conclusion......Page 218
8.5. Bibliography......Page 219
Part IV. Some Applications......Page 222
9.1. Introduction......Page 224
9.2.2. Evaluation principle......Page 225
9.2.3. Evaluation results and interpretation......Page 226
9.2.4. Help with interpretation by restoration of discontinuities......Page 227
9.3. Definition of direct convolution model......Page 228
9.4.1.1. Predictive deconvolution......Page 229
9.4.1.4. Sequential estimation: estimation of the kernel, then the input......Page 231
9.4.1.5. Joint estimation of kernel and input......Page 232
9.4.2.3. Double Bernoulli-Gaussian (DBG) deconvolution......Page 233
9.4.2.5. Behavior of DL2Hy/DBG deconvolution methods......Page 234
9.5. Processing real data......Page 235
9.5.1. Processing by blind deconvolution......Page 236
9.5.2. Deconvolution with a measured wave......Page 237
9.5.3. Comparison between DL2Hy and DBG......Page 240
9.6. Conclusion......Page 243
9.7. Bibliography......Page 244
10.1.1. Introduction......Page 246
10.1.2.1. Diffraction......Page 247
10.1.2.2. Principle of optical interferometry......Page 248
10.1.3.1. Turbulence and phase......Page 249
10.1.3.3. Short-exposure imaging......Page 250
10.1.3.4. Case of a long-baseline interferometer......Page 251
10.1.4.1. Speckle techniques......Page 252
10.1.4.2. Deconvolution from wavefront sensing (DWFS)......Page 253
10.1.4.4. Optical interferometry......Page 254
10.2. Inversion approach and regularization criteria used......Page 256
10.3.1. Introduction......Page 257
10.3.2. Hartmann-Shack sensor......Page 258
10.3.3. Phase retrieval and phase diversity......Page 260
10.4.1. Motivation and noise statistic......Page 261
10.4.2.1. Conventional processing of short-exposure images......Page 262
10.4.2.2. Myopic deconvolution of short-exposure images......Page 263
10.4.2.3. Simulations......Page 264
10.4.2.4. Experimental results......Page 265
10.4.3.1. Myopic deconvolution of images corrected by adaptive optics......Page 266
10.4.3.2. Experimental results......Page 268
10.4.4. Conclusion......Page 270
10.5.1. Observation model......Page 271
10.5.2. Traditional Bayesian approach......Page 274
10.5.3. Myopic modeling......Page 275
10.5.4.1. Processing of synthetic data......Page 277
10.5.4.2. Processing of experimental data......Page 279
10.6. Bibliography......Page 280
11.1. Velocity measurement in medical imaging......Page 288
11.1.2. Information carried by Doppler signals......Page 289
11.1.4. Data and problems treated......Page 291
11.2.1. Least squares and traditional extensions......Page 293
11.2.2.1. Spatial regularity......Page 294
11.2.2.3. Regularized least squares......Page 295
11.2.3.1. State and observation equations......Page 296
11.2.4. Estimation of hyperparameters......Page 297
11.2.5.2. Qualitative comparison......Page 299
11.3. Tracking spectral moments......Page 300
11.3.1.2. Amplitudes: prior distribution and marginalization......Page 301
11.3.1.3. Frequencies: prior law and posterior law......Page 303
11.3.2.1. Forward-Backward algorithm......Page 305
11.3.2.2. Likelihood gradient......Page 306
11.3.3.1. Tuning the hyperparameters......Page 307
11.3.3.2. Qualitative comparison......Page 308
11.4. Conclusion......Page 309
11.5. Bibliography......Page 310
12.1. Introduction......Page 314
12.2. Projection generation model......Page 315
12.3. 2D analytical methods......Page 316
12.5. Limitations of analytical methods......Page 320
12.6. Discrete approach to reconstruction......Page 322
12.7. Choice of criterion and reconstruction methods......Page 324
12.8.1. Optimization algorithms for convex criteria......Page 326
12.8.1.1. Gradient algorithms......Page 327
12.8.1.3. ART (Algebraic Reconstruction Technique)......Page 328
12.8.1.6. Richardson-Lucy algorithm......Page 329
12.8.2. Optimization or integration algorithms......Page 330
12.10.1. 2D reconstruction......Page 331
12.10.2. 3D reconstruction......Page 332
12.11. Conclusions......Page 334
12.12. Bibliography......Page 335
13.1. Introduction......Page 338
13.2.1. Examples of diffraction tomography applications......Page 339
13.2.1.2. Non-destructive evaluation of conducting materials using eddy currents......Page 340
13.2.2.1. Equations of propagation in an inhomogeneous medium......Page 341
13.2.2.2. Integral modeling of the direct problem......Page 342
13.3.1. Choice of algebraic framework......Page 343
13.3.2. Method of moments......Page 344
13.3.3. Discretization by the method of moments......Page 345
13.4. Construction of criteria for solving the inverse problem......Page 346
13.4.1. First formulation: estimation of Φ......Page 347
13.4.2. Second formulation: simultaneous estimation of x and Φ......Page 348
13.5. Solving the inverse problem......Page 350
13.5.1.1. Approximations......Page 351
13.5.1.3. Interpretation......Page 352
13.5.2. Joint minimization......Page 353
13.5.3. Minimizing MAP criterion......Page 354
13.6. Conclusion......Page 356
13.7. Bibliography......Page 357
14.1. Introduction......Page 360
14.2.1. Likelihood functions and limiting behavior......Page 362
14.2.2. Purely Poisson measurements......Page 363
14.2.4. Compound noise models with Poisson information......Page 365
14.3.1. Maximum likelihood properties......Page 366
14.3.2. Bayesian estimation......Page 369
14.4.1. Implementation for pure Poisson model......Page 371
14.4.2. Bayesian implementation for a compound data model......Page 373
14.6. Bibliography......Page 375
List of Authors......Page 378
Index......Page 380
Similar Volumes
This book is devoted to a special class of engineering problems called Bayesian inverse problems. These problems comprise not only the probabilistic Bayesian formulation of engineering problems, but also the associated stochastic simulation methods needed to solve them.
This volume contains the text of the twenty-five papers presented at two workshops entitled Maximum-Entropy and Bayesian Methods in Applied Statistics, which were held at the University of Wyoming from June 8 to 10, 1981, and from August 9 to 11, 1982.