Essential Statistical Inference: Theory and Methods

✍ Scribed by Dennis D. Boos, L. A. Stefanski


Publisher
Springer
Year
2013
Tongue
English
Leaves
566
Series
Springer Texts in Statistics
Edition
2013
Category
Library


✦ Synopsis


This book is for students and researchers who have had a first-year graduate-level mathematical statistics course. It covers classical likelihood, Bayesian, and permutation inference; an introduction to basic asymptotic distribution theory; and modern topics like M-estimation, the jackknife, and the bootstrap. R code is woven throughout the text, and there are a large number of examples and problems. An important goal has been to make the topics accessible to a wide audience, with little overt reliance on measure theory. A typical semester course consists of Chapters 1–6 (likelihood-based estimation and testing, Bayesian inference, basic asymptotic results) plus selections from M-estimation and related testing and resampling methodology.

Dennis Boos and Len Stefanski are professors in the Department of Statistics at North Carolina State University. Their research has been eclectic, often with a robustness angle, although Stefanski is also known for research concentrated on measurement error, including a co-authored book on nonlinear measurement error models. In recent years the authors have jointly worked on variable selection methods.

✦ Table of Contents


Title......Page 2
Copyright info......Page 3
Preface......Page 5
Contents......Page 6
Part I Introductory Material......Page 15
1.1 Introduction......Page 16
1.2 The Parts of a Model......Page 17
1.3 The Roles of a Model in Data Analysis......Page 20
1.4 A Consulting Example......Page 21
1.5.1 Families of Distributions......Page 23
1.5.2 Moments and Cumulants......Page 24
1.5.3 Quantiles and Percentiles......Page 25
1.5.4 Asymptotic Normality and Some Basic Asymptotic Results......Page 26
1.5.5 Simulation Methods......Page 28
1.6.1 The Assumed Model and Associated Estimators......Page 30
1.6.3 A Non-Model-Based Comparison of MOM to MLE......Page 31
1.6.4 Checking Example Details......Page 32
1.7 Problems......Page 33
Part II Likelihood-Based Methods......Page 37
2.1 Introduction......Page 38
2.1.1 Notes and Notation......Page 40
2.2.1 Discrete IID Random Variables......Page 41
2.2.2 Multinomial Likelihoods......Page 43
2.2.3 Continuous IID Random Variables......Page 47
2.2.3a The Connection Between Discrete and Continuous Likelihoods......Page 49
2.2.4 Mixtures of Discrete and Continuous Components......Page 52
2.2.5 Proportional Likelihoods......Page 53
2.2.6 The Empirical Distribution Function as an MLE......Page 56
2.2.7a Type I Censoring......Page 57
2.2.7b Random Censoring......Page 60
2.3.1 Linear Model......Page 61
2.3.3 Generalized Linear Model......Page 64
2.3.4 Generalized Linear Mixed Model (GLMM)......Page 66
2.3.5 Accelerated Failure Model......Page 67
2.4.1 Neyman-Scott Problem......Page 68
2.4.2 Marginal Likelihoods......Page 69
2.4.4 Logistic Regression Measurement Error Model......Page 70
2.4.6 Conditional Logistic Regression......Page 72
2.5 The Maximum Likelihood Estimator and the Information Matrix......Page 73
2.5.1 Examples of Information Matrix Calculations......Page 80
2.5.2 Variance Cost for Adding Parameters to a Model......Page 86
2.5.2a Location-Scale Models......Page 87
2.5.2b Three-Parameter Models......Page 88
2.5.3 The Information Matrix for Transformed and Modeled Parameters......Page 90
2.6 Methods for Maximizing the Likelihood or Solving the Likelihood Equations......Page 91
2.6.1 Analytical Methods via Profile Likelihoods......Page 92
2.6.2 Newton Methods......Page 93
2.6.3 EM Algorithm......Page 94
2.6.3a EM Algorithm for Two-Component Mixtures......Page 95
2.6.3b EM Algorithm for Right Censored Data......Page 98
2.6.3c Convergence Results for the EM Algorithm......Page 100
2.7.1 Definitions......Page 101
2.7.2 Main Results......Page 102
2.7.3 Application of Theorem 2.2 to the Multinomial......Page 103
2.7.4 Uniqueness of the MLE in the Normal Location-Scale Model......Page 104
2.7.5 Application of Theorems 2.1 and 2.3 to the Exponential Threshold Model......Page 106
2.7.6 Uniqueness of the MLE for Exponential Families......Page 107
2.8 Appendix B – Exponential Family Distributions......Page 108
2.8.1 Canonical Representation......Page 110
2.8.2 Minimal Exponential Family......Page 112
2.8.3 Completeness......Page 114
2.8.4 Distributions of the Sufficient Statistics......Page 115
2.8.5 Families with Truncation or Threshold Parameters......Page 116
2.8.6 Exponential Family Glossary......Page 117
2.9 Problems......Page 118
3 Likelihood-Based Tests and Confidence Regions......Page 136
3.1 Simple Null Hypotheses......Page 139
3.2.1 Wald Statistic – Partitioned θ......Page 141
3.2.2 Score Statistic – Partitioned θ......Page 142
3.2.4 Normal Location-Scale Model......Page 143
3.2.5 Wald, Score, and Likelihood Ratio Tests – H_0: h(θ) = 0......Page 146
3.2.6 Summary of Wald, Score, and LR Test Statistics......Page 147
3.2.7 Score Statistic for Multinomial Data......Page 148
3.2.8 Wald Test Lack of Invariance......Page 150
3.2.9 Testing Equality of Binomial Probabilities: Independent Samples......Page 153
3.2.10 Test Statistics for the Behrens-Fisher Problem......Page 154
3.3.1 Confidence Interval for a Binomial Probability......Page 155
3.3.2 Confidence Interval for the Difference of Binomial Probabilities: Independent Samples......Page 156
3.4.3 Generalized Linear Model......Page 157
3.4.3b Adequacy of Logistic Dose-Response Model......Page 158
3.4.3c Mantel-Haenszel Statistic......Page 159
3.5 Model Adequacy......Page 160
3.6 Nonstandard Hypothesis Testing Problems......Page 161
3.6.1a Isotonic Regression......Page 162
3.6.2a Normal Mean with Restricted Parameter Space......Page 164
3.6.2b One-Way Random Effects Model......Page 165
3.6.2c General Solution for Parameters on the Boundary......Page 168
3.7 Problems......Page 169
4.1 Introduction......Page 173
4.2 Bayes Estimators......Page 180
4.3 Credible Intervals......Page 181
4.4 Conjugate Priors......Page 182
4.5 Noninformative Priors......Page 184
4.6.1 One Normal Sample with Unknown Mean and Variance......Page 186
4.6.2 Two Normal Samples......Page 188
4.6.3 Normal Linear Model......Page 189
4.7 Hierarchical Bayes and Empirical Bayes......Page 192
4.7.1 One-Way Normal Random Effects Model......Page 193
4.7.2 James-Stein Estimation......Page 196
4.7.3 Meta-Analysis Applications of Hierarchical and Empirical Bayes......Page 197
4.7.3a Meta-Analysis using Normal Models with Known Variance......Page 199
4.7.3b Meta-Analysis Using the Binomial Model......Page 200
4.8 Monte Carlo Estimation of a Posterior......Page 203
4.8.1 Direct Monte Carlo Sampling from a Posterior......Page 204
4.8.2 Markov chain Monte Carlo Sampling from a Posterior......Page 205
4.8.3 Why Does Gibbs Sampling Work?......Page 209
4.9 Problems......Page 211
Part III Large Sample Approximations in Statistics......Page 214
5.1 Overview......Page 215
5.1.1 Statistics Approximated by Averages......Page 216
5.1.1b Functions of Averages......Page 217
5.1.1c Statistics Implicitly Defined by Averages......Page 218
5.2 Types of Stochastic Convergence......Page 220
5.2.1 Convergence with Probability 1 (Almost Sure Convergence)......Page 221
5.2.2 Convergence in Probability......Page 222
5.2.2a Weak Law of Large Numbers......Page 224
5.2.3 Convergence in Distribution......Page 225
5.2.3a Central Limit Theorem......Page 226
5.2.3b Sample Size and the Central Limit Theorem......Page 227
5.2.3c Convergence in Distribution for the Sample Extremes......Page 228
5.2.3d Uniform Convergence in Distribution......Page 230
5.4 Extension of Convergence Definitions to Vectors......Page 231
5.4.2 Convergence in Distribution for Vectors......Page 232
5.5.1 Markov Inequality......Page 234
5.5.2 Continuous Functions of Convergent Sequences......Page 235
5.5.2a Examples for wp1 and in Probability Convergence......Page 236
5.5.2b Examples for Convergence in Distribution......Page 237
5.5.3a Nonrandom Order Relations......Page 239
Rules and Relationships of Deterministic Order Relations......Page 240
5.5.3c “In Probability” Order Relations......Page 241
Rules and Relationships of In Probability Order Relations......Page 242
5.5.4 Asymptotic Normality and Related Results......Page 243
5.5.5 The Delta Theorem......Page 245
5.5.6 Slutsky’s Theorem......Page 249
5.5.7a Sample Central Moments......Page 250
5.5.7b Sample Percentiles/Quantiles......Page 251
5.5.8 Finding h in the Approximation by Averages......Page 252
5.5.8b Functions of Averages......Page 253
5.5.8c Functions of Statistics with Approximation-by-Averages Representation......Page 255
5.5.8d Averages of Functions with Estimated Parameters......Page 256
5.5.8f M-Estimators......Page 258
5.5.8g U-Statistics......Page 259
5.5.8h The Influence Curve......Page 260
5.5.8i Influence Curve Derivation for Sample Mean and Variance......Page 261
5.5.8j Influence Curve Derivation for the Median......Page 262
5.5.9 Proving Convergence in Distribution of Random Vectors......Page 263
5.5.10 Multivariate Approximation by Averages......Page 265
5.6 Summary of Methods for Proving Convergence in Distribution......Page 266
5.7 Appendix – Central Limit Theorem for Independent Non-Identically Distributed Summands......Page 267
5.7.1 Double Arrays......Page 269
5.7.2 Lindeberg-Feller Central Limit Theorem......Page 270
5.8 Problems......Page 271
6.2 Approaches to Proving Consistency of θ_MLE......Page 283
6.3 Existence of a Consistent Root of the Likelihood Equations......Page 285
6.3.1 Real-Valued θ......Page 286
6.3.2 Vector θ......Page 288
6.4 Compact Parameter Spaces......Page 290
6.5 Asymptotic Normality of Maximum Likelihood Estimators......Page 291
6.5.1 Real-Valued θ......Page 292
6.5.2 Vector θ......Page 294
6.6.1 Wald Tests......Page 295
6.6.3 Likelihood Ratio Tests......Page 296
6.6.4 Local Asymptotic Power......Page 298
6.6.5 Nonstandard Situations......Page 299
6.7 Problems......Page 300
Part IV Methods for Misspecified Likelihoods and Partially Specified Models......Page 302
7.1 Introduction......Page 303
7.2 The Basic Approach......Page 306
7.2.1 Estimators for A, B, and V......Page 307
7.2.2 Sample Mean and Variance......Page 308
7.2.3 Ratio Estimator......Page 310
7.2.4 Delta Method Via M-Estimation......Page 311
7.2.6 Instrumental Variable Estimation......Page 312
7.3 Connections to the Influence Curve (Approximation by Averages)......Page 315
7.4.1 Robust Location Estimation......Page 317
7.4.2 Quantile Estimation......Page 318
7.4.3 Positive Mean Deviation......Page 319
7.5.1 Linear Model with Random X......Page 320
7.5.2 Linear Model for Nonrandom X......Page 321
7.5.3 Nonlinear Model for Nonrandom X — Extended Definitions of A and B......Page 324
7.5.4 Robust Regression......Page 326
7.5.5 Generalized Linear Models......Page 327
7.5.6 Generalized Estimating Equations (GEE)......Page 328
7.6 Application to a Testing Problem......Page 329
7.7 Summary......Page 332
7.8.1 Consistency of M-Estimators......Page 333
7.8.2 Asymptotic Normality of M-Estimators......Page 334
7.8.3 Weak Law of Large Numbers for Averages with Estimated Parameters......Page 335
7.9 Problems......Page 337
8.2 Likelihood-Based Tests under Misspecification......Page 344
8.3.1 Generalized Wald Tests......Page 349
8.3.1a Examples......Page 350
8.3.2 Generalized Score Tests......Page 351
8.3.2a Examples......Page 353
8.3.2b Derivation of T_GS by M-Estimation......Page 355
8.3.2c T_GS under Different Formulations of H_0......Page 356
8.3.3 Adjusted Likelihood Ratio Statistics......Page 357
8.3.3a A Note about Confidence Intervals......Page 360
8.3.4 Quadratic Inference Functions and Score Tests......Page 361
8.4 Problems......Page 363
Part V Computation-Based Methods......Page 365
9.1.1 Monte Carlo Estimation......Page 366
9.1.2 Monte Carlo Studies......Page 367
9.2 Basic Principles for Monte Carlo Studies......Page 370
9.3.1 Bias Estimation......Page 372
9.3.3 Power Estimation......Page 373
9.3.4 Confidence Intervals......Page 374
9.4.1 Comparing Two Variance Estimators......Page 375
9.4.2 Comparing Estimated True Variance and the Mean of Estimated Variance Estimators......Page 376
9.4.3 When Should Mean Squared Error Be Reported?......Page 378
9.5.2 Presenting Test Power Results......Page 380
9.5.3 Reporting Results for Confidence Intervals......Page 383
9.6 Problems......Page 384
10.1.1 Definitions......Page 387
10.1.2 Bias Estimation......Page 388
10.1.3 Variance Estimation......Page 390
10.2 Connections to the Influence Curve......Page 391
10.3.1 Sample Moments......Page 392
10.3.3 Contingency Tables......Page 396
10.4 Delete-d Jackknife......Page 398
10.5 The Information Reduction Approach for Variance Estimation......Page 399
10.6 Grouped Jackknife......Page 400
10.7 Non-iid Situations......Page 401
10.7.1 k Independent Samples......Page 402
10.7.2 Linear Model......Page 403
10.7.3 Dependent Data......Page 404
10.8.1 Asymptotic Consistency of V_J......Page 405
10.8.2 Positive Bias of V_J......Page 407
10.10 Problems......Page 410
11.1 Introduction......Page 414
11.1.1 Sample Bootstrap Applications......Page 415
11.2 Bootstrap Standard Errors......Page 418
11.2.1 Plug-In Interpretation of Bootstrap Variance Estimates......Page 420
11.3 Comparison of Variance Estimation Methods......Page 422
11.4 Bootstrap Asymptotics......Page 425
11.5 Bootstrap Confidence Intervals......Page 427
11.5.2 Heuristic Justification of the Percentile Interval......Page 429
11.5.3 Asymptotic Accuracy of Confidence Intervals......Page 430
11.5.4 The BC Interval......Page 431
11.5.5 The BCa Interval......Page 433
11.5.6 The Double Bootstrap (Calibrated Percentile) Interval......Page 434
11.5.7 Reflected Percentile and Bootstrap-t Intervals......Page 435
11.5.8 Summary of Bootstrap Confidence Intervals......Page 436
11.6.1 The Basics......Page 437
11.6.2 The Definition of Bootstrap P-Values and the “99 Rule”......Page 441
11.7 Regression Settings......Page 442
11.8 Problems......Page 445
12.1 Introduction......Page 450
12.2 A Simple Example: The Two-Sample Location Problem......Page 453
12.3 The General Two-Sample Setting......Page 455
12.4.1 Size α Property of Permutation Tests......Page 457
12.4.2 Permutation Moments of Linear Statistics......Page 459
12.4.3 Linear Rank Tests......Page 461
12.4.4 Wilcoxon-Mann-Whitney Two-Sample Statistic......Page 462
12.4.5 Asymptotic Normal Approximation......Page 466
12.4.6 Edgeworth Approximation......Page 467
12.4.7 Box-Andersen Approximation......Page 469
12.4.8 Monte Carlo Approximation......Page 471
12.4.9 Comparing the Approximations in a Study of Two Drugs......Page 472
12.5 Optimality Properties of Rank and Permutation Tests......Page 474
12.5.1 Locally Most Powerful Rank Tests......Page 475
12.5.2 Pitman Asymptotic Relative Efficiency......Page 477
12.6 The k-sample Problem, One-way ANOVA......Page 481
12.6.1 Rank Methods for the k-Sample Location Problem......Page 482
12.6.2 Large-k Asymptotics for the ANOVA F Statistic......Page 484
12.6.3 Comparison of Approximate P-Values – Data on Cadmium in Rat Diet......Page 485
12.6.5 Ordered Means or Location Parameters......Page 486
12.6.6 Scale or Variance Comparisons......Page 487
12.7 Testing Independence and Regression Relationships......Page 488
12.8 One-Sample Test for Symmetry about θ₀ or Matched Pairs Problem......Page 492
12.8.1 Moments and Normal Approximation......Page 493
12.8.2 Box-Andersen Approximation......Page 494
12.8.3 Signed Rank Methods......Page 495
12.8.5 Pitman ARE for the One-Sample Symmetry Problem......Page 497
12.8.6 Treatment of Ties......Page 498
12.9 Randomized Complete Block Data—the Two-Way Design......Page 500
12.9.1 Friedman’s Rank Test......Page 502
12.9.2 F Approximations......Page 503
12.9.4 Aligned Ranks and the Rank Transform......Page 504
12.9.5 Replications within Blocks......Page 506
12.10.1 2 × 2 Table – Fisher’s Exact Test......Page 507
12.10.2 Paired Binary Data – McNemar’s Test......Page 510
12.10.3 I by J Tables......Page 512
12.11 Confidence Intervals and R-Estimators......Page 513
12.12.1 Locally Most Powerful Rank Tests......Page 515
12.12.2 Distribution of the Rank Vector under Alternatives......Page 516
12.12.3 Pitman Efficiency......Page 517
12.12.4a Efficacy for the One-Sample t......Page 520
12.12.4c Efficacy for the Wilcoxon Signed Rank Test......Page 521
12.12.4d Power Approximations for the One-Sample Location Problem......Page 522
12.13 Problems......Page 524
A.1 Notation......Page 532
A.2 Definition and Taylor Approximations......Page 533
A.3 Working with Derivatives......Page 534
A.4 Problems......Page 535
References......Page 536
Author Index......Page 549
Example Index......Page 554
R-code Index......Page 558
Subject Index......Page 559


📜 SIMILAR VOLUMES


Essential Statistical Inference: Theory
✍ Dennis D. Boos, L. A. Stefanski 📂 Library 📅 2013 🏛 Springer 🌐 English

This book is for students and researchers who have had a first year graduate level mathematical statistics course. It covers classical likelihood, Bayesian, and permutation inference; an introduction to basic asymptotic distribution theory; and modern topics like M-estimation, the jackknife, and t…

Essential statistical inference: theory
✍ Dennis D. Boos, L. A. Stefanski (auth.) 📂 Library 📅 2013 🏛 Springer-Verlag New York 🌐 English

This book is for students and researchers who have had a first year graduate level mathematical statistics course. It covers classical likelihood, Bayesian, and permutation inference; an introduction to basic asymptotic distribution theory; and modern topics like M-estimation, the jackknife,…

Statistical Theory and Inference
✍ David J. Olive (auth.) 📂 Library 📅 2014 🏛 Springer International Publishing 🌐 English

This text is for a one semester graduate course in statistical theory and covers minimal and complete sufficient statistics, maximum likelihood estimators, method of moments, bias and mean square error, uniform minimum variance estimators and the Cramer-Rao lower bound, an introduction to larg…