Handbook of Parametric and Nonparametric Statistical Procedures: Third Edition
✍ By David J. Sheskin
- Publisher
- Chapman & Hall
- Year
- 2003
- Language
- English
- Pages
- 1184
- Edition
- 3
- Category
- Library
Free to read. For personal study only.
✦ Synopsis
Called the "bible of applied statistics," the first two editions of the Handbook of Parametric and Nonparametric Statistical Procedures were unsurpassed in accessibility, practicality, and scope. Now author David Sheskin has gone several steps further and added even more tests, more examples, and more background information: more than 200 pages of new material. The Third Edition provides unparalleled, up-to-date coverage of over 130 parametric and nonparametric statistical procedures as well as many practical and theoretical issues relevant to statistical analysis. If you need to…
- Decide what method of analysis to use
- Use a particular test for the first time
- Distinguish acceptable from unacceptable research
- Interpret and better understand the results of published studies
…the Handbook of Parametric and Nonparametric Statistical Procedures will help you get the job done.
✦ Table of Contents
Handbook of Parametric and Nonparametric Statistical Procedures, Third Edition......Page 1
Copyright......Page 2
Preface......Page 3
Table of Contents with Summary of Topics......Page 9
Descriptive versus Inferential Statistics......Page 30
Levels of Measurement......Page 31
Measures of Central Tendency......Page 33
Measures of Variability......Page 38
Measures of Skewness and Kurtosis......Page 44
Visual Methods for Displaying Data......Page 56
The Normal Distribution......Page 71
Hypothesis Testing......Page 82
A History and Critique of the Classical Hypothesis Testing Model......Page 89
Estimation in Inferential Statistics......Page 94
Relevant Concepts, Issues, and Terminology in Conducting Research......Page 95
Experimental Design......Page 102
Sampling Methodologies......Page 116
Basic Principles of Probability......Page 117
Parametric versus Nonparametric Inferential Statistical Tests......Page 126
References......Page 127
Endnotes......Page 129
Outline of Inferential Statistical Tests and Measures of Correlation/Association......Page 135
Guidelines and Decision Tables for Selecting the Appropriate Statistical Procedure......Page 140
Inferential Statistical Tests Employed with a Single Sample......Page 145
III. Null versus Alternative Hypotheses......Page 146
IV. Test Computations......Page 147
V. Interpretation of the Test Results......Page 148
1. The interpretation of a negative z value......Page 149
2. The standard error of the population mean and graphical representation of the results of the single-sample z test......Page 150
3. Additional examples illustrating the interpretation of a computed z value......Page 154
4. The z test for a population proportion......Page 155
VIII. Additional Examples Illustrating the Use of the Single-Sample z Test......Page 156
Endnotes......Page 157
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 159
IV. Test Computations......Page 160
V. Interpretation of the Test Results......Page 162
1. Determination of the power of the single-sample t test and the single-sample z test, and the application of Test 2a: Cohen's d index......Page 164
2. Computation of a confidence interval for the mean of the population represented by a sample......Page 173
Degrees of freedom......Page 180
VIII. Additional Examples Illustrating the Use of the Single-Sample t Test......Page 181
References......Page 182
Endnotes......Page 183
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 185
III. Null versus Alternative Hypotheses......Page 186
V. Interpretation of the Test Results......Page 187
1. Large sample normal approximation of the chi-square distribution......Page 190
2. Computation of a confidence interval for the variance of a population represented by a sample......Page 191
VIII. Additional Examples Illustrating the Use of the Single-Sample Chi-square Test for a Population Variance......Page 194
Endnotes......Page 196
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 197
III. Null versus Alternative Hypotheses......Page 198
IV. Test Computations......Page 199
V. Interpretation of the Test Results......Page 201
1. Exact tables for the single-sample test for evaluating population skewness......Page 202
References......Page 203
Endnotes......Page 204
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 205
III. Null versus Alternative Hypotheses......Page 206
IV. Test Computations......Page 207
1. Test 5a: The D'Agostino-Pearson test of normality......Page 209
References......Page 211
Endnotes......Page 212
II. Example......Page 213
IV. Test Computations......Page 214
V. Interpretation of the Test Results......Page 216
1. The normal approximation of the Wilcoxon T statistic for large sample sizes......Page 218
2. The correction for continuity for the normal approximation of the Wilcoxon signed-ranks test......Page 220
3. Tie correction for the normal approximation of the Wilcoxon test statistic......Page 221
1. Power-efficiency of the Wilcoxon signed-ranks test and the concept of asymptotic relative efficiency......Page 222
VIII. Additional Examples Illustrating the Use of the Wilcoxon Signed-Ranks Test......Page 223
References......Page 224
Endnotes......Page 225
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 227
II. Example......Page 228
III. Null versus Alternative Hypotheses......Page 229
IV. Test Computations......Page 230
V. Interpretation of the Test Results......Page 234
1. Computing a confidence interval for the Kolmogorov-Smirnov goodness-of-fit test for a single sample......Page 235
2. The power of the Kolmogorov-Smirnov goodness-of-fit test for a single sample......Page 236
3. Test 7a: The Lilliefors test for normality......Page 237
1. Effect of sample size on the result of a goodness-of-fit test......Page 238
References......Page 239
Endnotes......Page 241
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 242
III. Null versus Alternative Hypotheses......Page 243
IV. Test Computations......Page 244
1. Comparisons involving individual cells when k > 2......Page 246
2. The analysis of standardized residuals......Page 249
3. Computation of a confidence interval for the chi-square goodness-of-fit test (confidence interval for a population proportion)......Page 250
6. Application of the chi-square goodness-of-fit test for assessing goodness-of-fit for a theoretical population distribution......Page 252
8. Heterogeneity chi-square analysis......Page 256
1. Directionality of the chi-square goodness-of-fit test......Page 260
VIII. Additional Examples Illustrating the Use of the Chi-square Goodness-of-Fit Test......Page 262
References......Page 264
Endnotes......Page 265
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 268
III. Null versus Alternative Hypotheses......Page 269
IV. Test Computations......Page 270
V. Interpretation of the Test Results......Page 272
1. Test 9a: The z test for a population proportion......Page 273
2. Test 9b: The single-sample test for the median......Page 282
3. Computing the power of the binomial sign test for a single sample......Page 284
1. Evaluating goodness-of-fit for a binomial distribution......Page 285
VIII. Additional Example Illustrating the Use of the Binomial Sign Test for a Single Sample......Page 287
1. Discussion of additional discrete probability distributions......Page 288
2. Conditional probability, Bayes' theorem, and Bayesian statistics and hypothesis testing......Page 303
References......Page 350
Endnotes......Page 352
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 360
II. Example......Page 361
V. Interpretation of the Test Results......Page 362
1. The normal approximation of the single-sample runs test for large sample sizes......Page 363
2. The correction for continuity for the normal approximation of the single-sample runs test......Page 364
3. Extension of the runs test to data with more than two categories......Page 365
4. Test 10a: The runs test for serial randomness......Page 366
1. Additional discussion of the concept of randomness......Page 369
VIII. Additional Examples Illustrating the Use of the Single-Sample Runs Test......Page 370
1. The generation of pseudorandom numbers......Page 373
2. Alternative tests of randomness......Page 377
References......Page 390
Endnotes......Page 392
Inferential Statistical Tests Employed with Two Independent Samples (and Related Measures of Association/Correlation)......Page 395
II. Example......Page 396
IV. Test Computations......Page 397
V. Interpretation of the Test Results......Page 400
1. The equation for the t test for two independent samples when a value for a difference other than zero is stated in the null hypothesis......Page 401
2. Test 11a: Hartley's Fmax test for homogeneity of variance/F test for two population variances: Evaluation of the homogeneity of variance assumption of the t test for two independent samples......Page 403
3. Computation of the power of the t test for two independent samples and the application of Test 11b: Cohen's d index......Page 408
4. Measures of magnitude of treatment effect for the t test for two independent samples: Omega squared (Test 11c) and Eta squared (Test 11d)......Page 412
5. Computation of a confidence interval for the t test for two independent samples......Page 414
6. Test 11e: The z test for two independent samples......Page 416
1. Unequal sample sizes......Page 418
2. Robustness of the t test for two independent samples......Page 419
3. Outliers (Test 11f: Procedures for identifying outliers) and data transformation......Page 420
4. Missing data......Page 431
VIII. Additional Examples Illustrating the Use of the t Test for Two Independent Samples......Page 434
References......Page 435
Endnotes......Page 438
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 444
III. Null versus Alternative Hypotheses......Page 445
IV. Test Computations......Page 446
1. The normal approximation of the Mann-Whitney U statistic for large sample sizes......Page 449
3. Tie correction for the normal approximation of the Mann-Whitney U statistic......Page 451
1. Power-efficiency of the Mann-Whitney U test......Page 452
VIII. Additional Examples Illustrating the Use of the Mann-Whitney U Test......Page 453
IX. Addendum......Page 455
References......Page 468
Endnotes......Page 470
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 474
III. Null versus Alternative Hypotheses......Page 475
IV. Test Computations......Page 477
V. Interpretation of the Test Results......Page 479
1. Graphical method for computing the Kolmogorov-Smirnov test statistic......Page 480
3. Large sample chi-square approximation for a one-tailed analysis of the Kolmogorov-Smirnov test for two independent samples......Page 481
VIII. Additional Examples Illustrating the Use of the Kolmogorov-Smirnov Test for Two Independent Samples......Page 482
References......Page 483
Endnotes......Page 484
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 485
III. Null versus Alternative Hypotheses......Page 486
IV. Test Computations......Page 487
1. The normal approximation of the Siegel-Tukey test statistic for large sample sizes......Page 490
2. The correction for continuity for the normal approximation of the Siegel-Tukey test for equal variability......Page 491
4. Adjustment of scores for the Siegel-Tukey test for equal variability when θ1 ≠ θ2......Page 492
1. Analysis of the homogeneity of variance hypothesis for the same set of data with both a parametric and nonparametric test, and the power-efficiency of the Siegel-Tukey test for equal variability......Page 494
VIII. Additional Examples Illustrating the Use of the Siegel-Tukey Test for Equal Variability......Page 496
References......Page 497
Endnotes......Page 498
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 499
III. Null versus Alternative Hypotheses......Page 500
IV. Test Computations......Page 502
V. Interpretation of the Test Results......Page 504
1. The normal approximation of the Moses test statistic for large sample sizes......Page 505
1. Power-efficiency of the Moses test for equal variability......Page 506
VIII. Additional Examples Illustrating the Use of the Moses Test for Equal Variability......Page 507
Endnotes......Page 511
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 513
II. Examples......Page 515
III. Null versus Alternative Hypotheses......Page 516
IV. Test Computations......Page 519
V. Interpretation of the Test Results......Page 520
1. Yates' correction for continuity......Page 522
2. Quick computational equation for a 2 x 2 table......Page 523
3. Evaluation of a directional alternative hypothesis in the case of a 2 x 2 contingency table......Page 524
4. Test 16c: The Fisher exact test......Page 525
5. Test 16d: The z test for two independent proportions......Page 531
6. Computation of confidence interval for a difference between two proportions......Page 536
7. Test 16e: The median test for independent samples......Page 537
8. Extension of the chi-square test for r x c tables to contingency tables involving more than two rows and/or columns, and associated comparison procedures......Page 539
9. The analysis of standardized residuals......Page 545
10. Sources for computing the power of the chi-square test for r x c tables......Page 547
11. Heterogeneity chi-square analysis for a 2 x 2 contingency table......Page 548
12. Measures of association for r x c contingency tables......Page 551
Test 16f: The contingency coefficient (C)......Page 553
Test 16g: The phi coefficient......Page 554
Test 16h: Cramér's phi coefficient......Page 556
Test 16i: Yule's Q......Page 557
Test 16j: The odds ratio (and the concept of relative risk)......Page 558
Test 16j-a: Test of significance for an odds ratio and computation of a confidence interval for an odds ratio......Page 562
Test 16k: Cohen's kappa......Page 563
Test 16k-a: Test of significance for Cohen's kappa......Page 566
1. Equivalency of the chi-square test for r x c tables when c = 2 with the t test for two independent samples (when r = 2) and the single-factor between-subjects analysis of variance (when r ≥ 2)......Page 567
2. Simpson's paradox......Page 568
3. Analysis of multidimensional contingency tables......Page 570
VIII. Additional Examples Illustrating the Use of the Chi-square Test for r x c Tables......Page 581
References......Page 585
Endnotes......Page 587
Inferential Statistical Tests Employed with Two Dependent Samples (and Related Measures of Association/Correlation)......Page 592
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 593
III. Null versus Alternative Hypotheses......Page 594
IV. Test Computations......Page 595
V. Interpretation of the Test Results......Page 597
1. Alternative equation for the t test for two dependent samples......Page 598
3. Test 17a: The t test for homogeneity of variance for two dependent samples: Evaluation of the homogeneity of variance assumption of the t test for two dependent samples......Page 602
4. Computation of the power of the t test for two dependent samples and the application of Test 17b: Cohen's d index......Page 605
5. Measure of magnitude of treatment effect for the t test for two dependent samples: Omega squared (Test 17c)......Page 609
6. Computation of a confidence interval for the t test for two dependent samples......Page 610
7. Test 17d: Sandler's A test......Page 611
8. Test 17e: The z test for two dependent samples......Page 613
1. The use of matched subjects in a dependent samples design......Page 616
2. Relative power of the t test for two dependent samples and the t test for two independent samples......Page 618
3. Counterbalancing and order effects......Page 619
4. Analysis of a one-group pretest-posttest design with the t test for two dependent samples......Page 620
VIII. Additional Example Illustrating the Use of the t Test for Two Dependent Samples......Page 622
Endnotes......Page 623
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 627
III. Null versus Alternative Hypotheses......Page 628
IV. Test Computations......Page 629
V. Interpretation of the Test Results......Page 631
1. The normal approximation of the Wilcoxon T statistic for large sample sizes......Page 632
3. Tie correction for the normal approximation of the Wilcoxon test statistic......Page 634
4. Sources for computing a confidence interval for the Wilcoxon matched-pairs signed ranks test......Page 635
References......Page 636
Endnotes......Page 637
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 639
III. Null versus Alternative Hypotheses......Page 640
IV. Test Computations......Page 641
V. Interpretation of the Test Results......Page 643
1. The normal approximation of the binomial sign test for two dependent samples with and without a correction for continuity......Page 644
2. Computation of a confidence interval for the binomial sign test for two dependent samples......Page 647
3. Sources for computing the power of the binomial sign test for two dependent samples, and comments on asymptotic relative efficiency of the test......Page 648
References......Page 649
Endnotes......Page 650
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 651
II. Examples......Page 652
III. Null versus Alternative Hypotheses......Page 654
V. Interpretation of the Test Results......Page 656
1. Alternative equation for the McNemar test statistic based on the normal distribution......Page 657
2. The correction for continuity for the McNemar test......Page 658
3. Computation of the exact binomial probability for the McNemar test model with a small sample size......Page 659
4. Computation of the power of the McNemar test......Page 661
5. Additional analytical procedures for the McNemar test......Page 662
6. Test 20a: The Gart test for order effects......Page 663
1. Alternative format for the McNemar test summary table and modified test equation......Page 671
VIII. Additional Examples Illustrating the Use of the McNemar Test......Page 672
IX. Addendum......Page 673
1. Test 20b: The Bowker test of internal symmetry......Page 674
2. Test 20c: The Stuart-Maxwell test of marginal homogeneity......Page 678
References......Page 680
Endnotes......Page 681
Inferential Statistical Tests Employed with Two or More Independent Samples (and Related Measures of Association/Correlation)......Page 683
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 684
III. Null versus Alternative Hypotheses......Page 685
IV. Test Computations......Page 686
V. Interpretation of the Test Results......Page 690
1. Comparisons following computation of the omnibus F value for the single-factor between-subjects analysis of variance......Page 691
Planned comparisons (also known as a priori comparisons)......Page 692
Unplanned comparisons (also known as post hoc, multiple, or a posteriori comparisons)......Page 693
Linear contrasts......Page 694
Linear contrast of a planned simple comparison......Page 695
Linear contrast of a planned complex comparison......Page 697
Orthogonal comparisons......Page 699
Test 21a: Multiple t tests/Fisher's LSD test......Page 701
Test 21b: The Bonferroni-Dunn test......Page 704
Test 21c: Tukey's HSD test......Page 708
Test 21d: The Newman-Keuls test......Page 709
Test 21e: The Scheffé test......Page 711
Test 21f: The Dunnett test......Page 714
Additional discussion of comparison procedures and final recommendations......Page 716
The computation of a confidence interval for a comparison......Page 719
2. Comparing the means of three or more groups when k ≥ 4......Page 720
3. Evaluation of the homogeneity of variance assumption of the single-factor between-subjects analysis of variance......Page 722
4. Computation of the power of the single-factor between-subjects analysis of variance......Page 725
5. Measures of magnitude of treatment effect for the single-factor between-subjects analysis of variance: Omega squared (Test 21g), eta squared (Test 21h), and Cohen's ƒ index (Test 21i)......Page 727
Omega squared (Test 21g)......Page 728
Eta squared (Test 21h)......Page 729
Cohen's ƒ index (Test 21i)......Page 730
Final comments on measures of effect size......Page 731
6. Computation of a confidence interval for the mean of a treatment population......Page 732
1. Theoretical rationale underlying the single-factor between-subjects analysis of variance......Page 733
2. Definitional equations for the single-factor between-subjects analysis of variance......Page 735
3. Equivalency of the single-factor between-subjects analysis of variance and the t test for two independent samples when k = 2......Page 737
5. Equivalency of the single-factor between-subjects analysis of variance and the t test for two independent samples with the chi-square test for r x c tables when c = 2......Page 738
7. Multivariate analysis of variance (MANOVA)......Page 741
VIII. Additional Examples Illustrating the Use of the Single-Factor Between-Subjects Analysis of Variance......Page 742
Test 21j: The single-factor between-subjects analysis of covariance......Page 743
References......Page 760
Endnotes......Page 762
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 774
III. Null versus Alternative Hypotheses......Page 775
IV. Test Computations......Page 776
1. Tie correction for the Kruskal-Wallis one-way analysis of variance by ranks......Page 778
2. Pairwise comparisons following computation of the test statistic for the Kruskal-Wallis one-way analysis of variance by ranks......Page 779
2. Equivalency of the Kruskal-Wallis one-way analysis of variance by ranks and the Mann-Whitney U test when k = 2......Page 783
VIII. Additional Examples Illustrating the Use of the Kruskal-Wallis One-way Analysis of Variance by Ranks......Page 784
IX. Addendum......Page 786
References......Page 793
Endnotes......Page 794
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 798
III. Null versus Alternative Hypotheses......Page 799
IV. Test Computations......Page 800
V. Interpretation of the Test Results......Page 802
1. Pairwise comparisons following computation of the test statistic for the van der Waerden normal-scores test for k independent samples......Page 803
1. Alternative normal-scores tests......Page 805
VIII. Additional Examples Illustrating the Use of the van der Waerden Normal-Scores Test for k Independent Samples......Page 806
References......Page 807
Endnotes......Page 808
Inferential Statistical Tests Employed with Two or More Dependent Samples (and Related Measures of Association/Correlation)......Page 811
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 812
IV. Test Computations......Page 814
V. Interpretation of the Test Results......Page 819
1. Comparisons following computation of the omnibus F value for the single-factor within-subjects analysis of variance......Page 820
2. Comparing the means of three or more conditions when k ≥ 4......Page 828
3. Evaluation of the sphericity assumption underlying the single-factor within-subjects analysis of variance......Page 830
4. Computation of the power of the single-factor within-subjects analysis of variance......Page 835
5. Measures of magnitude of treatment effect for the single-factor within-subjects analysis of variance: Omega squared (Test 24g) and Cohen's ƒ index (Test 24h)......Page 837
6. Computation of a confidence interval for the mean of a treatment population......Page 840
7. Test 24i: The intraclass correlation coefficient......Page 841
1. Theoretical rationale underlying the single-factor within-subjects analysis of variance......Page 843
2. Definitional equations for the single-factor within-subjects analysis of variance......Page 846
3. Relative power of the single-factor within-subjects analysis of variance and the single-factor between-subjects analysis of variance......Page 849
4. Equivalency of the single-factor within-subjects analysis of variance and the t test for two dependent samples when k = 2......Page 850
5. The Latin square design......Page 851
VIII. Additional Examples Illustrating the Use of the Single-Factor Within-Subjects Analysis of Variance......Page 852
References......Page 855
Endnotes......Page 856
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 860
III. Null versus Alternative Hypotheses......Page 861
IV. Test Computations......Page 862
V. Interpretation of the Test Results......Page 863
1. Tie correction for the Friedman two-way analysis of variance by ranks......Page 864
2. Pairwise comparisons following computation of the test statistic for the Friedman two-way analysis of variance by ranks......Page 865
1. Exact tables of the Friedman distribution......Page 869
2. Equivalency of the Friedman two-way analysis of variance by ranks and the binomial sign test for two dependent samples when k = 2......Page 870
5. Relationship between the Friedman two-way analysis of variance by ranks and Kendall's coefficient of concordance......Page 871
VIII. Additional Examples Illustrating the Use of the Friedman Two-way Analysis of Variance by Ranks......Page 872
IX. Addendum......Page 873
Endnotes......Page 878
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 881
IV. Test Computations......Page 882
1. Pairwise comparisons following computation of the test statistic for the Cochran Q test......Page 884
1. Issues relating to subjects who obtain the same score under all of the experimental conditions......Page 888
2. Equivalency of the Cochran Q test and the McNemar test when k = 2......Page 889
VIII. Additional Examples Illustrating the Use of the Cochran Q Test......Page 891
References......Page 895
Endnotes......Page 896
Inferential Statistical Test Employed with Factorial Design (and Related Measures of Association/Correlation)......Page 899
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 900
III. Null versus Alternative Hypotheses......Page 901
IV. Test Computations......Page 903
V. Interpretation of the Test Results......Page 909
1. Comparisons following computation of the F values for the between-subjects factorial analysis of variance......Page 913
3. Computation of the power of the between-subjects factorial analysis of variance......Page 924
4. Measures of magnitude of treatment effect for the between-subjects factorial analysis of variance: Omega squared (Test 27g) and Cohen's ƒ index (Test 27h)......Page 926
6. Additional analysis of variance procedures for factorial designs......Page 930
2. Definitional equations for the between-subjects factorial analysis of variance......Page 931
3. Unequal sample sizes......Page 933
4. The randomized-blocks design......Page 934
5. Final comments on the between-subjects factorial analysis of variance......Page 938
VIII. Additional Examples Illustrating the Use of the Between-Subjects Factorial Analysis of Variance......Page 939
1. Test 27i: The factorial analysis of variance for a mixed design......Page 940
2. Test 27j: The within-subjects factorial analysis of variance......Page 945
References......Page 950
Endnotes......Page 951
Measures of Association/Correlation......Page 956
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 957
II. Example......Page 960
III. Null versus Alternative Hypotheses......Page 961
IV. Test Computations......Page 962
V. Interpretation of the Test Results......Page 963
1. Derivation of a regression line......Page 967
2. The standard error of estimate......Page 975
3. Computation of a confidence interval for the value of the criterion variable......Page 976
4. Computation of a confidence interval for a Pearson product-moment correlation coefficient......Page 977
6. Computation of power for the Pearson product-moment correlation coefficient......Page 980
7. Test 28c: Test for evaluating a hypothesis on whether there is a significant difference between two independent correlations......Page 981
8. Test 28d: Test for evaluating a hypothesis on whether k independent correlations are homogeneous......Page 983
9. Test 28e: Test for evaluating the null hypothesis H0: ρxz = ρyz......Page 985
10. Tests for evaluating a hypothesis regarding one or more regression coefficients......Page 986
1. The definitional equation for the Pearson product-moment correlation coefficient......Page 989
2. Residuals and analysis of variance for regression analysis......Page 990
4. The homoscedasticity assumption of the Pearson product-moment correlation coefficient......Page 995
5. The phi coefficient as a special case of the Pearson product-moment correlation coefficient......Page 996
6. Autocorrelation/serial correlation......Page 997
IX. Addendum......Page 1001
1. Bivariate measures of correlation that are related to the Pearson product-moment correlation coefficient......Page 1002
2. Multiple regression analysis......Page 1012
3. Additional multivariate procedures involving correlational analysis......Page 1028
4. Meta-analysis and related topics......Page 1037
References......Page 1062
Endnotes......Page 1065
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 1073
III. Null versus Alternative Hypotheses......Page 1075
IV. Test Computations......Page 1076
V. Interpretation of the Test Results......Page 1077
1. Tie correction for Spearman's rank-order correlation coefficient......Page 1079
2. Spearman's rank-order correlation coefficient as a special case of the Pearson product-moment correlation coefficient......Page 1081
3. Regression analysis and Spearman's rank-order correlation coefficient......Page 1082
5. Use of Fisher's zr transformation with Spearman's rank-order correlation coefficient......Page 1083
1. The relationship between Spearman's rank-order correlation coefficient, Kendall's coefficient of concordance, and the Friedman two-way analysis of variance by ranks......Page 1084
References......Page 1087
Endnotes......Page 1088
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 1091
11. Example......Page 1092
III. Null versus Alternative Hypotheses......Page 1093
IV. Test Computations......Page 1094
V. Interpretation of the Test Results......Page 1096
1. Tie correction for Kendall's tau......Page 1099
3. Partial rank correlation......Page 1101
VIII. Additional Examples Illustrating the Use of Kendall's Tau......Page 1102
Endnotes......Page 1103
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 1105
III. Null versus Alternative Hypotheses......Page 1106
IV. Test Computations......Page 1107
V. Interpretation of the Test Results......Page 1108
VI. Additional Analytical Procedures for Kendall's Coefficient of Concordance and/or Related Tests......Page 1109
1. Relationship between Kendall's coefficient of concordance and Spearman's rank-order correlation coefficient......Page 1111
2. Relationship between Kendall's coefficient of concordance and the Friedman two-way analysis of variance by ranks......Page 1112
4. Kendall's coefficient of concordance versus the intraclass correlation coefficient......Page 1114
VIII. Additional Examples Illustrating the Use of Kendall's Coefficient of Concordance......Page 1116
References......Page 1117
Endnotes......Page 1118
I. Hypothesis Evaluated with Test and Relevant Background Information......Page 1120
II. Example......Page 1121
III. Null versus Alternative Hypotheses......Page 1122
IV. Test Computations......Page 1123
VI. Additional Analytical Procedures for Goodman and Kruskal's Gamma and/or Related Tests......Page 1126
1. The computation of a confidence interval for the value of Goodman and Kruskal's gamma......Page 1127
2. Test 32b: Test for evaluating the null hypothesis H₀: γ₁ = γ₂......Page 1128
VIII. Additional Examples Illustrating the Use of Goodman and Kruskal's Gamma......Page 1129
Endnotes......Page 1131
Acknowledgments and Sources for Tables in Appendix......Page 1133
Table A1 Table of the Normal Distribution......Page 1137
Table A2 Table of Student's t Distribution......Page 1142
Table A3-A (Two-Tailed .01 and One-Tailed .005 Values)......Page 1143
Table A3-B (Two-Tailed .02 and One-Tailed .01 Values)......Page 1144
Table A3-C (Two-Tailed .05 and One-Tailed .025 Values)......Page 1145
Table A3-D (Two-Tailed .10 and One-Tailed .05 Values)......Page 1146
Table A4 Table of the Chi-square Distribution......Page 1147
Table A5 Table of Critical T Values for Wilcoxon's Signed-Ranks and Matched-Pairs Signed-Ranks Tests......Page 1148
Table A6 Table of the Binomial Distribution, Individual Probabilities......Page 1149
Table A7 Table of the Binomial Distribution, Cumulative Probabilities......Page 1152
Table A8 Table of Critical Values for the Single-Sample Runs Test......Page 1155
Table A9 Table of the Fmax Distribution......Page 1156
Table A10 Table of the F Distribution......Page 1157
F .975......Page 1158
F .99......Page 1159
F .995......Page 1160
(One-Tailed .05 Values)......Page 1161
(One-Tailed .01 Values)......Page 1162
Two-tailed level of significance......Page 1163
q.95 (α = .05)......Page 1164
q.99 (α = .01)......Page 1165
Two-Tailed Values......Page 1166
One-Tailed Values......Page 1167
(Fixed-Effects Model)......Page 1168
Table A16 Table of Critical Values for Pearson r......Page 1172
Table A17 Table of Fisher's zr Transformation......Page 1173
Table A18 Table of Critical Values for Spearman's Rho......Page 1174
Table A19 Table of Critical Values for Kendall's Tau......Page 1175
Table A20 Table of Critical Values for Kendall's Coefficient of Concordance......Page 1176
Table A21 Table of Critical Values for the Kolmogorov-Smirnov Goodness-of-Fit Test for a Single Sample......Page 1177
Table A22 Table of Critical Values for the Lilliefors Test for Normality......Page 1178
Table A23 Table of Critical Values for the Kolmogorov-Smirnov Test for Two Independent Samples......Page 1179
Table A24 Table of Critical Values for the Jonckheere-Terpstra Test Statistic......Page 1181
Table A25 Table of Critical Values for the Page Test Statistic......Page 1183