Handbook of Parametric and Nonparametric Statistical Procedures: Third Edition
By David J. Sheskin
- Publisher
- Chapman and Hall/CRC
- Year
- 2003
- Language
- English
- Pages
- 1225
- Edition
- 3
- Category
- Library
Synopsis
Called the "bible of applied statistics," the first two editions of the Handbook of Parametric and Nonparametric Statistical Procedures were unsurpassed in accessibility, practicality, and scope. Now author David Sheskin has gone several steps further and added even more tests, more examples, and more background information-more than 200 pages of new material.
The Third Edition provides unparalleled, up-to-date coverage of over 130 parametric and nonparametric statistical procedures as well as many practical and theoretical issues relevant to statistical analysis. If you need to…
- Decide what method of analysis to use
- Use a particular test for the first time
- Distinguish acceptable from unacceptable research
- Interpret and better understand the results of published studies
…the Handbook of Parametric and Nonparametric Statistical Procedures will help you get the job done.
Table of Contents
Preface
Table of Contents with Summary of Topics
Introduction
Descriptive versus Inferential Statistics
Statistic versus Parameter
Levels of Measurement
Continuous versus Discrete Variables
Measures of Central Tendency
Measures of Variability
Measures of Skewness and Kurtosis
Visual Methods for Displaying Data
The Normal Distribution
Hypothesis Testing
A History and Critique of the Classical Hypothesis Testing Model
Estimation in Inferential Statistics
Relevant Concepts, Issues, and Terminology in Conducting Research
Experimental Design
Sampling Methodologies
Basic Principles of Probability
Parametric versus Nonparametric Inferential Statistical Tests
Selection of the Appropriate Statistical Procedure
References
Endnotes
Outline of Inferential Statistical Tests and Measures of Correlation/Association
Guidelines and Decision Tables for Selecting the Appropriate Statistical Procedure
Single Sample
Test 1: The Single-Sample z Test
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Single-Sample z Test and/or Related Tests
VII. Additional Discussion of the Single-Sample z Test
1. The interpretation of a negative z value
2. The standard error of the population mean and graphical representation of the results of the single-sample z test
3. Additional examples illustrating the interpretation of a computed z value
4. The z test for a population proportion
VIII. Additional Examples Illustrating the Use of the Single-Sample z Test
Reference
Endnotes
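For orientation, the single-sample z test outlined in the section above reduces to one formula. A minimal sketch in Python (illustrative only; the numbers are made up and SciPy is assumed, not something the book itself uses):

```python
import math
from scipy.stats import norm

# Hypothetical data: a sample of n = 30 with known population sigma.
sample_mean, mu0, sigma, n = 105.0, 100.0, 15.0, 30

# Single-sample z test: z = (xbar - mu0) / (sigma / sqrt(n))
z = (sample_mean - mu0) / (sigma / math.sqrt(n))
p_two_tailed = 2 * norm.sf(abs(z))  # two-tailed p from the standard normal
print(f"z = {z:.3f}, two-tailed p = {p_two_tailed:.4f}")
```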
Test 2: The Single-Sample t Test
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Single-Sample t Test and/or Related Tests
1. Determination of the power of the single-sample t test and the single-sample z test, and the application of Test 2a: Cohen's d index
2. Computation of a confidence interval for the mean of the population represented by a sample
VII. Additional Discussion of the Single-Sample t Test
Degrees of freedom
VIII. Additional Examples Illustrating the Use of the Single-Sample t Test
References
Endnotes
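The single-sample t test replaces the known population sigma with the sample estimate; in modern software it is one call. A hedged sketch (hypothetical scores, SciPy assumed):

```python
from scipy import stats

# Hypothetical sample; H0: the population mean equals 10.
scores = [9, 10, 8, 4, 8, 3, 0, 10, 15, 9]
t_stat, p_value = stats.ttest_1samp(scores, popmean=10)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```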
Test 3: The Single-Sample Chi-square Test for a Population Variance
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Single-Sample Chi-square Test for a Population Variance and/or Related Tests
1. Large sample normal approximation of the chi-square distribution
2. Computation of a confidence interval for the variance of a population represented by a sample
3. Sources for computing the power of the single-sample chi-square test for a population variance
VII. Additional Discussion of the Single-Sample Chi-square Test for a Population Variance
VIII. Additional Examples Illustrating the Use of the Single-Sample Chi-square Test for a Population Variance
References
Endnotes
Test 4: The Single-Sample Test for Evaluating Population Skewness
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Single-Sample Test for Evaluating Population Skewness and/or Related Tests
1. Note on the D'Agostino-Pearson test of normality (Test 5a)
VII. Additional Discussion of the Single-Sample Test for Evaluating Population Skewness
1. Exact tables for the single-sample test for evaluating population skewness
2. Note on a nonparametric test for evaluating skewness
VIII. Additional Examples Illustrating the Use of the Single-Sample Test for Evaluating Population Skewness
References
Endnotes
Test 5: The Single-Sample Test for Evaluating Population Kurtosis
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Single-Sample Test for Evaluating Population Kurtosis and/or Related Tests
1. Test 5a: The D'Agostino-Pearson test of normality
VII. Additional Discussion of the Single-Sample Test for Evaluating Population Kurtosis
1. Exact tables for the single-sample test for evaluating population kurtosis
VIII. Additional Examples Illustrating the Use of the Single-Sample Test for Evaluating Population Kurtosis
References
Endnotes
Test 6: The Wilcoxon Signed-Ranks Test
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Wilcoxon Signed-Ranks Test and/or Related Tests
1. The normal approximation of the Wilcoxon T statistic for large sample sizes
2. The correction for continuity for the normal approximation of the Wilcoxon signed-ranks test
3. Tie correction for the normal approximation of the Wilcoxon test statistic
VII. Additional Discussion of the Wilcoxon Signed-Ranks Test
1. Power-efficiency of the Wilcoxon signed-ranks test and the concept of asymptotic relative efficiency
2. Note on symmetric population concerning hypotheses regarding median and mean
3. Confidence interval for the median difference
VIII. Additional Examples Illustrating the Use of the Wilcoxon Signed-Ranks Test
References
Endnotes
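The Wilcoxon signed-ranks test covered above operates on signed difference scores. A minimal SciPy sketch (hypothetical data, not an example from the book):

```python
from scipy import stats

# Hypothetical difference scores (e.g. each observation minus a
# hypothesized median); H0: the differences are symmetric about zero.
diffs = [4, -2, 3, 6, -1, 5, 7, 2, -3, 4]
res = stats.wilcoxon(diffs)
print(f"T = {res.statistic}, p = {res.pvalue:.4f}")
```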
Test 7: The Kolmogorov-Smirnov Goodness-of-Fit Test for a Single Sample
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Kolmogorov-Smirnov Goodness-of-Fit Test for a Single Sample and/or Related Tests
1. Computing a confidence interval for the Kolmogorov-Smirnov goodness-of-fit test for a single sample
2. The power of the Kolmogorov-Smirnov goodness-of-fit test for a single sample
3. Test 7a: The Lilliefors test for normality
VII. Additional Discussion of the Kolmogorov-Smirnov Goodness-of-Fit Test for a Single Sample
1. Effect of sample size on the result of a goodness-of-fit test
2. The Kolmogorov-Smirnov goodness-of-fit test for a single sample versus the chi-square goodness-of-fit test and alternative goodness-of-fit tests
VIII. Additional Example Illustrating the Use of the Kolmogorov-Smirnov Goodness-of-Fit Test for a Single Sample
References
Endnotes
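The Kolmogorov-Smirnov goodness-of-fit test compares an empirical cumulative distribution against a hypothesized one. A sketch using SciPy (data simulated for illustration; not from the book):

```python
import numpy as np
from scipy import stats

# Hypothetical data drawn from a standard normal; H0: the sample comes
# from N(0, 1). kstest compares the empirical CDF to the model CDF.
rng = np.random.default_rng(0)
data = rng.normal(size=50)
d_stat, p_value = stats.kstest(data, 'norm')
print(f"D = {d_stat:.3f}, p = {p_value:.4f}")
```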
Test 8: The Chi-square Goodness-of-Fit Test
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Examples
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Chi-square Goodness-of-Fit Test and/or Related Tests
1. Comparisons involving individual cells when k > 2
2. The analysis of standardized residuals
3. Computation of a confidence interval for the chi-square goodness-of-fit test (confidence interval for a population proportion)
4. Brief discussion of the z test for a population proportion (Test 9a) and the single-sample test for the median (Test 9b)
5. The correction for continuity for the chi-square goodness-of-fit test
6. Application of the chi-square goodness-of-fit test for assessing goodness-of-fit for a theoretical population distribution
7. Sources for computing the power of the chi-square goodness-of-fit test
8. Heterogeneity chi-square analysis
VII. Additional Discussion of the Chi-square Goodness-of-Fit Test
1. Directionality of the chi-square goodness-of-fit test
2. Additional goodness-of-fit tests
VIII. Additional Examples Illustrating the Use of the Chi-square Goodness-of-Fit Test
References
Endnotes
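The chi-square goodness-of-fit test compares observed to expected cell frequencies. A minimal sketch (hypothetical counts; SciPy assumed, not part of the book):

```python
from scipy import stats

# Hypothetical observed frequencies for 60 rolls of a die;
# H0: all six faces are equally likely (expected 10 each).
observed = [8, 12, 9, 11, 6, 14]
chi2, p_value = stats.chisquare(observed)  # uniform expected by default
print(f"chi2 = {chi2:.3f}, p = {p_value:.4f}")
```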
Test 9: The Binomial Sign Test for a Single Sample
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Examples
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Binomial Sign Test for a Single Sample and/or Related Tests
1. Test 9a: The z test for a population proportion
2. Test 9b: The single-sample test for the median
3. Computing the power of the binomial sign test for a single sample
VII. Additional Discussion of the Binomial Sign Test for a Single Sample
1. Evaluating goodness-of-fit for a binomial distribution
VIII. Additional Example Illustrating the Use of the Binomial Sign Test for a Single Sample
IX. Addendum
1. Discussion of additional discrete probability distributions
2. Conditional probability, Bayes' theorem, and Bayesian statistics and hypothesis testing
References
Endnotes
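The binomial sign test reduces to an exact binomial probability. A hedged sketch (hypothetical counts; SciPy's binomtest assumed):

```python
from scipy import stats

# Hypothetical outcome: 8 of 10 subjects show a positive sign;
# H0: positive and negative signs are equally likely (p = 0.5).
result = stats.binomtest(k=8, n=10, p=0.5)
print(f"two-sided p = {result.pvalue:.6f}")
```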
Test 10: The Single-Sample Runs Test (and Other Tests of Randomness)
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Single-Sample Runs Test and/or Related Tests
1. The normal approximation of the single-sample runs test for large sample sizes
2. The correction for continuity for the normal approximation of the single-sample runs test
3. Extension of the runs test to data with more than two categories
4. Test 10a: The runs test for serial randomness
VII. Additional Discussion of the Single-Sample Runs Test
1. Additional discussion of the concept of randomness
VIII. Additional Examples Illustrating the Use of the Single-Sample Runs Test
IX. Addendum
1. The generation of pseudorandom numbers
2. Alternative tests of randomness
References
Endnotes
Two Independent Samples
Test 11: The t Test for Two Independent Samples
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the t Test for Two Independent Samples and/or Related Tests
1. The equation for the t test for two independent samples when a value for a difference other than zero is stated in the null hypothesis
2. Test 11a: Hartley's Fmax test for homogeneity of variance/F test for two population variances: Evaluation of the homogeneity of variance assumption of the t test for two independent samples
3. Computation of the power of the t test for two independent samples and the application of Test 11b: Cohen's d index
4. Measures of magnitude of treatment effect for the t test for two independent samples: Omega squared (Test 11c) and Eta squared (Test 11d)
5. Computation of a confidence interval for the t test for two independent samples
6. Test 11e: The z test for two independent samples
VII. Additional Discussion of the t Test for Two Independent Samples
1. Unequal sample sizes
2. Robustness of the t test for two independent samples
3. Outliers (Test 11f: Procedures for identifying outliers) and data transformation
4. Missing data
5. Hotelling's T2
VIII. Additional Examples Illustrating the Use of the t Test for Two Independent Samples
References
Endnotes
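The t test for two independent samples is likewise a single call in modern software. A sketch under stated assumptions (hypothetical scores; SciPy assumed, equal variances taken for granted):

```python
from scipy import stats

# Hypothetical scores for two independent groups;
# H0: the two population means are equal.
group1 = [11, 1, 0, 2, 0]
group2 = [11, 11, 5, 8, 4]
t_stat, p_value = stats.ttest_ind(group1, group2)  # equal variances assumed
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```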
Test 12: Mann-Whitney U Test
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Mann-Whitney U Test and/or Related Tests
1. The normal approximation of the Mann-Whitney U statistic for large sample sizes
2. The correction for continuity for the normal approximation of the Mann-Whitney U test
3. Tie correction for the normal approximation of the Mann-Whitney U statistic
4. Sources for computing a confidence interval for the Mann-Whitney U test
VII. Additional Discussion of the Mann-Whitney U Test
1. Power-efficiency of the Mann-Whitney U test
2. Equivalency of the normal approximation of the Mann-Whitney U test and the t test for two independent samples with rank-orders
3. Alternative nonparametric rank-order procedures for evaluating a design involving two independent samples
VIII. Additional Examples Illustrating the Use of the Mann-Whitney U Test
IX. Addendum
References
Endnotes
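The Mann-Whitney U test evaluates the same two-group design using only rank-order information. A minimal SciPy sketch (hypothetical data, not from the book):

```python
from scipy import stats

# Two independent groups evaluated with rank-order information only
# (hypothetical data); H0: equal population distributions.
group1 = [11, 1, 0, 2, 0]
group2 = [11, 11, 5, 8, 4]
u_stat, p_value = stats.mannwhitneyu(group1, group2, alternative='two-sided')
print(f"U = {u_stat}, p = {p_value:.4f}")
```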
Test 13: The Kolmogorov-Smirnov Test for Two Independent Samples
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Kolmogorov-Smirnov Test for Two Independent Samples and/or Related Tests
1. Graphical method for computing the Kolmogorov-Smirnov test statistic
2. Computing sample confidence intervals for the Kolmogorov-Smirnov test for two independent samples
3. Large sample chi-square approximation for a one-tailed analysis of the Kolmogorov-Smirnov test for two independent samples
VII. Additional Discussion of the Kolmogorov-Smirnov Test for Two Independent Samples
1. Additional comments on the Kolmogorov-Smirnov test for two independent samples
VIII. Additional Examples Illustrating the Use of the Kolmogorov-Smirnov Test for Two Independent Samples
References
Endnotes
Test 14: The Siegel-Tukey Test for Equal Variability
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Siegel-Tukey Test for Equal Variability and/or Related Tests
1. The normal approximation of the Siegel-Tukey test statistic for large sample sizes
2. The correction for continuity for the normal approximation of the Siegel-Tukey test for equal variability
3. Tie correction for the normal approximation of the Siegel-Tukey test statistic
4. Adjustment of scores for the Siegel-Tukey test for equal variability when θ1 ≠ θ2
VII. Additional Discussion of the Siegel-Tukey Test for Equal Variability
1. Analysis of the homogeneity of variance hypothesis for the same set of data with both a parametric and nonparametric test, and the power-efficiency of the Siegel-Tukey test for equal variability
2. Alternative nonparametric tests of dispersion
VIII. Additional Examples Illustrating the Use of the Siegel-Tukey Test for Equal Variability
References
Endnotes
Test 15: The Moses Test for Equal Variability
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Moses Test for Equal Variability and/or Related Tests
1. The normal approximation of the Moses test statistic for large sample sizes
VII. Additional Discussion of the Moses Test for Equal Variability
1. Power-efficiency of the Moses test for equal variability
2. Issue of repetitive resampling
3. Alternative nonparametric tests of dispersion
VIII. Additional Examples Illustrating the Use of the Moses Test for Equal Variability
References
Endnotes
Test 16: The Chi-square Test for r x c Tables
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Examples
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Chi-square Test for r x c Tables and/or Related Tests
1. Yates' correction for continuity
2. Quick computational equation for a 2 x 2 table
3. Evaluation of a directional alternative hypothesis in the case of a 2 x 2 contingency table
4. Test 16c: The Fisher exact test
5. Test 16d: The z test for two independent proportions
6. Computation of confidence interval for a difference between two proportions
7. Test 16e: The median test for independent samples
8. Extension of the chi-square test for r x c tables to contingency tables involving more than two rows and/or columns, and associated comparison procedures
9. The analysis of standardized residuals
10. Sources for computing the power of the chi-square test for r x c tables
11. Heterogeneity chi-square analysis for a 2 x 2 contingency table
12. Measures of association for r x c contingency tables
Test 16f: The contingency coefficient (C)
Test 16g: The phi coefficient
Test 16h: Cramér's phi coefficient
Test 16i: Yule's Q
Test 16j: The odds ratio (and the concept of relative risk)
Test 16j-a: Test of significance for an odds ratio and computation of a confidence interval for an odds ratio
Test 16k: Cohen's kappa
Test 16k-a: Test of significance for Cohen's kappa
Test 16k-b: Test of significance for two independent values of Cohen's kappa
VII. Additional Discussion of the Chi-square Test for r x c Tables
1. Equivalency of the chi-square test for r x c tables when c = 2 with the t test for two independent samples (when r = 2) and the single-factor between-subjects analysis of variance (when r ≥ 2)
2. Simpson's paradox
3. Analysis of multidimensional contingency tables
VIII. Additional Examples Illustrating the Use of the Chi-square Test for r x c Tables
References
Endnotes
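The chi-square test for r x c tables evaluates independence of the row and column classifications. A sketch with a hypothetical 2 x 2 table (SciPy assumed; Yates' correction is applied by default for 2 x 2 tables):

```python
import numpy as np
from scipy import stats

# Hypothetical 2 x 2 contingency table (rows = groups, cols = outcome);
# H0: row and column classifications are independent.
table = np.array([[30, 70],
                  [45, 55]])
chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p_value:.4f}")
```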
Two Dependent Samples
Test 17: The t Test for Two Dependent Samples
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the t Test for Two Dependent Samples and/or Related Tests
1. Alternative equation for the t test for two dependent samples
2. The equation for the t test for two dependent samples when a value for a difference other than zero is stated in the null hypothesis
3. Test 17a: The t test for homogeneity of variance for two dependent samples: Evaluation of the homogeneity of variance assumption of the t test for two dependent samples
4. Computation of the power of the t test for two dependent samples and the application of Test 17b: Cohen's d index
5. Measure of magnitude of treatment effect for the t test for two dependent samples: Omega squared (Test 17c)
6. Computation of a confidence interval for the t test for two dependent samples
7. Test 17d: Sandler's A test
8. Test 17e: The z test for two dependent samples
VII. Additional Discussion of the t Test for Two Dependent Samples
1. The use of matched subjects in a dependent samples design
2. Relative power of the t test for two dependent samples and the t test for two independent samples
3. Counterbalancing and order effects
4. Analysis of a one-group pretest-posttest design with the t test for two dependent samples
VIII. Additional Example Illustrating the Use of the t Test for Two Dependent Samples
References
Endnotes
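The t test for two dependent samples works on paired difference scores. A hedged sketch (hypothetical paired data; SciPy assumed):

```python
from scipy import stats

# Hypothetical paired scores: the same subjects under two conditions;
# H0: the mean difference is zero.
condition1 = [9, 2, 1, 4, 6, 4, 7, 8, 5, 1]
condition2 = [8, 2, 3, 2, 3, 0, 4, 5, 4, 0]
t_stat, p_value = stats.ttest_rel(condition1, condition2)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```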
Test 18: The Wilcoxon Matched-Pairs Signed-Ranks Test
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Wilcoxon Matched-Pairs Signed-Ranks Test and/or Related Tests
1. The normal approximation of the Wilcoxon T statistic for large sample sizes
2. The correction for continuity for the normal approximation of the Wilcoxon matched-pairs signed-ranks test
3. Tie correction for the normal approximation of the Wilcoxon test statistic
4. Sources for computing a confidence interval for the Wilcoxon matched-pairs signed ranks test
VII. Additional Discussion of the Wilcoxon Matched-Pairs Signed-Ranks Test
1. Power-efficiency of the Wilcoxon matched-pairs signed-ranks test
2. Alternative nonparametric procedures for evaluating a design involving two dependent samples
VIII. Additional Examples Illustrating the Use of the Wilcoxon Matched-Pairs Signed-Ranks Test
References
Endnotes
Test 19: The Binomial Sign Test for Two Dependent Samples
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Binomial Sign Test for Two Dependent Samples and/or Related Tests
1. The normal approximation of the binomial sign test for two dependent samples with and without a correction for continuity
2. Computation of a confidence interval for the binomial sign test for two dependent samples
3. Sources for computing the power of the binomial sign test for two dependent samples, and comments on asymptotic relative efficiency of the test
VII. Additional Discussion of the Binomial Sign Test for Two Dependent Samples
1. The problem of an excessive number of zero difference scores
2. Equivalency of the binomial sign test for two dependent samples and the Friedman two-way analysis of variance by ranks when k = 2
VIII. Additional Examples Illustrating the Use of the Binomial Sign Test for Two Dependent Samples
References
Endnotes
Test 20: The McNemar Test
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Examples
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the McNemar Test and/or Related Tests
1. Alternative equation for the McNemar test statistic based on the normal distribution
2. The correction for continuity for the McNemar test
3. Computation of the exact binomial probability for the McNemar test model with a small sample size
4. Computation of the power of the McNemar test
5. Additional analytical procedures for the McNemar test
6. Test 20a: The Gart test for order effects
VII. Additional Discussion of the McNemar Test
1. Alternative format for the McNemar test summary table and modified test equation
2. Alternative nonparametric procedures for evaluating a design with two dependent samples involving categorical data
VIII. Additional Examples Illustrating the Use of the McNemar Test
IX. Addendum
1. Test 20b: The Bowker test of internal symmetry
2. Test 20c: The Stuart-Maxwell test of marginal homogeneity
References
Endnotes
Two or More Independent Samples
Test 21: The Single-Factor Between-Subjects Analysis of Variance
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Single-Factor Between-Subjects Analysis of Variance and/or Related Tests
1. Comparisons following computation of the omnibus F value for the single-factor between-subjects analysis of variance
Planned comparisons (also known as a priori comparisons)
Unplanned comparisons (also known as post hoc, multiple, or a posteriori comparisons)
Linear contrasts
Linear contrast of a planned simple comparison
Linear contrast of a planned complex comparison
Orthogonal comparisons
Test 21a: Multiple t tests/Fisher's LSD test
Test 21b: The Bonferroni-Dunn test
Test 21c: Tukey's HSD test
Test 21d: The Newman-Keuls test
Test 21e: The Scheffé test
Test 21f: The Dunnett test
Additional discussion of comparison procedures and final recommendations
The computation of a confidence interval for a comparison
2. Comparing the means of three or more groups when k = 4
3. Evaluation of the homogeneity of variance assumption of the single-factor between-subjects analysis of variance
4. Computation of the power of the single-factor between-subjects analysis of variance
5. Measures of magnitude of treatment effect for the single-factor between-subjects analysis of variance: Omega squared (Test 21g), eta squared (Test 21h), and Cohen's f index (Test 21i)
Omega squared (Test 21g)
Eta squared (Test 21h)
Cohen's f index (Test 21i)
Final comments on measures of effect size
6. Computation of a confidence interval for the mean of a treatment population
VII. Additional Discussion of the Single-Factor Between-Subjects Analysis of Variance
1. Theoretical rationale underlying the single-factor between-subjects analysis of variance
2. Definitional equations for the single-factor between-subjects analysis of variance
3. Equivalency of the single-factor between-subjects analysis of variance and the t test for two independent samples when k = 2
4. Robustness of the single-factor between-subjects analysis of variance
5. Equivalency of the single-factor between-subjects analysis of variance and the t test for two independent samples with the chi-square test for r x c tables when c = 2
6. Fixed-effects versus random-effects models for the single-factor between-subjects analysis of variance
7. Multivariate analysis of variance (MANOVA)
VIII. Additional Examples Illustrating the Use of the Single-Factor Between-Subjects Analysis of Variance
IX. Addendum
Test 21j: The single-factor between-subjects analysis of covariance
References
Endnotes
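The single-factor between-subjects ANOVA generalizes the two-group comparison to k groups via an omnibus F test. A minimal sketch (hypothetical k = 3 data; SciPy assumed):

```python
from scipy import stats

# Hypothetical scores for k = 3 independent groups;
# H0: all three population means are equal (omnibus F test).
group1 = [8, 10, 9, 10, 9]
group2 = [7, 8, 5, 8, 5]
group3 = [4, 8, 7, 5, 7]
f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```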
Test 22: The Kruskal-Wallis One-way Analysis of Variance by Ranks
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Kruskal-Wallis One-way Analysis of Variance by Ranks and/or Related Tests
1. Tie correction for the Kruskal-Wallis one-way analysis of variance by ranks
2. Pairwise comparisons following computation of the test statistic for the Kruskal-Wallis one-way analysis of variance by ranks
VII. Additional Discussion of the Kruskal-Wallis One-way Analysis of Variance by Ranks
1. Exact tables of the Kruskal-Wallis distribution
2. Equivalency of the Kruskal-Wallis one-way analysis of variance by ranks and the Mann-Whitney U test when k = 2
3. Power-efficiency of the Kruskal-Wallis one-way analysis of variance by ranks
4. Alternative nonparametric rank-order procedures for evaluating a design involving k independent samples
VIII. Additional Examples Illustrating the Use of the Kruskal-Wallis One-way Analysis of Variance by Ranks
IX. Addendum
References
Endnotes
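The Kruskal-Wallis test is the rank-order counterpart for a k-group between-subjects design. A hedged sketch (hypothetical data; SciPy assumed):

```python
from scipy import stats

# Rank-order analysis of a k = 3 between-subjects design
# (hypothetical data); H0: equal population distributions.
group1 = [8, 10, 9, 10, 9]
group2 = [7, 8, 5, 8, 5]
group3 = [4, 8, 7, 5, 7]
h_stat, p_value = stats.kruskal(group1, group2, group3)
print(f"H = {h_stat:.3f}, p = {p_value:.4f}")
```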
Test 23: The van der Waerden Normal-Scores Test for k Independent Samples
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the van der Waerden Normal-Scores Test for k Independent Samples and/or Related Tests
1. Pairwise comparisons following computation of the test statistic for the van der Waerden normal-scores test for k independent samples
VII. Additional Discussion of the van der Waerden Normal-Scores Test for k Independent Samples
1. Alternative normal-scores tests
VIII. Additional Examples Illustrating the Use of the van der Waerden Normal-Scores Test for k Independent Samples
References
Endnotes
Two or More Dependent Samples
Test 24: The Single-Factor Within-Subjects Analysis of Variance
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Single-Factor Within-Subjects Analysis of Variance and/or Related Tests
1. Comparisons following computation of the omnibus F value for the single-factor within-subjects analysis of variance
2. Comparing the means of three or more conditions when k = 4
3. Evaluation of the sphericity assumption underlying the single-factor within-subjects analysis of variance
4. Computation of the power of the single-factor within-subjects analysis of variance
5. Measures of magnitude of treatment effect for the single-factor within-subjects analysis of variance: Omega squared (Test 24g) and Cohen's f index (Test 24h)
6. Computation of a confidence interval for the mean of a treatment population
7. Test 24i: The intraclass correlation coefficient
VII. Additional Discussion of the Single-Factor Within-Subjects Analysis of Variance
1. Theoretical rationale underlying the single-factor within-subjects analysis of variance
2. Definitional equations for the single-factor within-subjects analysis of variance
3. Relative power of the single-factor within-subjects analysis of variance and the single-factor between-subjects analysis of variance
4. Equivalency of the single-factor within-subjects analysis of variance and the t test for two dependent samples when k = 2
5. The Latin square design
VIII. Additional Examples Illustrating the Use of the Single-Factor Within-Subjects Analysis of Variance
References
Endnotes
Test 25: The Friedman Two-way Analysis of Variance by Ranks
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Friedman Two-way Analysis of Variance by Ranks and/or Related Tests
1. Tie correction for the Friedman two-way analysis of variance by ranks
2. Pairwise comparisons following computation of the test statistic for the Friedman two-way analysis of variance by ranks
VII. Additional Discussion of the Friedman Two-way Analysis of Variance by Ranks
1. Exact tables of the Friedman distribution
2. Equivalency of the Friedman two-way analysis of variance by ranks and the binomial sign test for two dependent samples when k = 2
3. Power-efficiency of the Friedman two-way analysis of variance by ranks
4. Alternative nonparametric rank-order procedures for evaluating a design involving k dependent samples
5. Relationship between the Friedman two-way analysis of variance by ranks and Kendall's coefficient of concordance
VIII. Additional Examples Illustrating the Use of the Friedman Two-way Analysis of Variance by Ranks
IX. Addendum
References
Endnotes
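The Friedman test ranks each subject's scores across the k conditions and evaluates whether the rank sums differ. A minimal sketch (hypothetical repeated-measures data; SciPy assumed):

```python
from scipy import stats

# Hypothetical repeated measures: six subjects, three conditions;
# H0: the conditions do not differ (analysis on within-subject ranks).
cond1 = [9, 10, 7, 10, 7, 8]
cond2 = [7, 8, 5, 8, 5, 6]
cond3 = [4, 7, 3, 7, 2, 6]
chi2_r, p_value = stats.friedmanchisquare(cond1, cond2, cond3)
print(f"chi2_r = {chi2_r:.3f}, p = {p_value:.4f}")
```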
Test 26: The Cochran Q Test
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Cochran Q Test and/or Related Tests
1. Pairwise comparisons following computation of the test statistic for the Cochran Q test
VII. Additional Discussion of the Cochran Q Test
1. Issues relating to subjects who obtain the same score under all of the experimental conditions
2. Equivalency of the Cochran Q test and the McNemar test when k = 2
3. Alternative nonparametric procedures with categorical data for evaluating a design involving k dependent samples
VIII. Additional Examples Illustrating the Use of the Cochran Q Test
References
Endnotes
Factorial Design
Test 27: The Between-Subjects Factorial Analysis of Variance
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Between-Subjects Factorial Analysis of Variance and/or Related Tests
1. Comparisons following computation of the F values for the between-subjects factorial analysis of variance
2. Evaluation of the homogeneity of variance assumption of the between-subjects factorial analysis of variance
3. Computation of the power of the between-subjects factorial analysis of variance
4. Measures of magnitude of treatment effect for the between-subjects factorial analysis of variance: Omega squared (Test 27g) and Cohen's f index (Test 27h)
5. Computation of a confidence interval for the mean of a population represented by a group
6. Additional analysis of variance procedures for factorial designs
VII. Additional Discussion of the Between-Subjects Factorial Analysis of Variance
1. Theoretical rationale underlying the between-subjects factorial analysis of variance
2. Definitional equations for the between-subjects factorial analysis of variance
3. Unequal sample sizes
4. The randomized-blocks design
5. Final comments on the between-subjects factorial analysis of variance
VIII. Additional Examples Illustrating the Use of the Between-Subjects Factorial Analysis of Variance
IX. Addendum
Discussion of and computational procedures for additional analysis of variance procedures for factorial designs
1. Test 27i: The factorial analysis of variance for a mixed design (a mixed factorial design)
2. Test 27j: The within-subjects factorial analysis of variance (a within-subjects factorial design)
References
Endnotes
Measures of Association/Correlation
Test 28: The Pearson Product-Moment Correlation Coefficient
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for the Pearson Product-Moment Correlation Coefficient and/or Related Tests
1. Derivation of a regression line
2. The standard error of estimate
3. Computation of a confidence interval for the value of the criterion variable
4. Computation of a confidence interval for a Pearson product-moment correlation coefficient
6. Computation of power for the Pearson product-moment correlation coefficient
7. Test 28c: Test for evaluating a hypothesis on whether there is a significant difference between two independent correlations
8. Test 28d: Test for evaluating a hypothesis on whether k independent correlations are homogeneous
9. Test 28e: Test for evaluating the null hypothesis H₀: ρxz = ρyz
10. Tests for evaluating a hypothesis regarding one or more regression coefficients
11. Additional correlational procedures
VII. Additional Discussion of the Pearson Product-Moment Correlation Coefficient
1. The definitional equation for the Pearson product-moment correlation coefficient
2. Residuals and analysis of variance for regression analysis
3. Covariance
4. The homoscedasticity assumption of the Pearson product-moment correlation coefficient
5. The phi coefficient as a special case of the Pearson product-moment correlation coefficient
6. Autocorrelation/serial correlation
VIII. Additional Examples Illustrating the Use of the Pearson Product-Moment Correlation Coefficient
IX. Addendum
1. Bivariate measures of correlation that are related to the Pearson product-moment correlation coefficient
2. Multiple regression analysis
3. Additional multivariate procedures involving correlational analysis
4. Meta-analysis and related topics
References
Endnotes
Test 29: Spearman's Rank-Order Correlation Coefficient
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for Spearman's Rank-Order Correlation Coefficient and/or Related Tests
1. Tie correction for Spearman's rank-order correlation coefficient
2. Spearman's rank-order correlation coefficient as a special case of the Pearson product-moment correlation coefficient
3. Regression analysis and Spearman's rank-order correlation coefficient
4. Partial rank correlation
5. Use of Fisher's zr transformation with Spearman's rank-order correlation coefficient
VII. Additional Discussion of Spearman's Rank-Order Correlation Coefficient
1. The relationship between Spearman's rank-order correlation coefficient, Kendall's coefficient of concordance, and the Friedman two-way analysis of variance by ranks
2. Power efficiency of Spearman's rank-order correlation coefficient
3. Brief discussion of Kendall's Tau: An alternative measure of association for two sets of ranks
4. Weighted rank/top-down correlation
VIII. Additional Examples Illustrating the Use of Spearman's Rank-Order Correlation Coefficient
References
Endnotes
Test 30: Kendall's Tau
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for Kendall's Tau and/or Related Tests
1. Tie correction for Kendall's tau
2. Regression analysis and Kendall's tau
3. Partial rank correlation
4. Sources for computing a confidence interval for Kendall's tau
VII. Additional Discussion of Kendall's Tau
1. Power efficiency of Kendall's tau
2. Kendall's coefficient of agreement
VIII. Additional Examples Illustrating the Use of Kendall's Tau
References
Endnotes
Test 31: Kendall's Coefficient of Concordance
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for Kendall's Coefficient of Concordance and/or Related Tests
VII. Additional Discussion of Kendall's Coefficient of Concordance
1. Relationship between Kendall's coefficient of concordance and Spearman's rank-order correlation coefficient
2. Relationship between Kendall's coefficient of concordance and the Friedman two-way analysis of variance by ranks
3. Weighted rank/top-down concordance
4. Kendall's coefficient of concordance versus the intraclass correlation coefficient
VIII. Additional Examples Illustrating the Use of Kendall's Coefficient of Concordance
References
Endnotes
Test 32: Goodman and Kruskal's Gamma
I. Hypothesis Evaluated with Test and Relevant Background Information
II. Example
III. Null versus Alternative Hypotheses
IV. Test Computations
V. Interpretation of the Test Results
VI. Additional Analytical Procedures for Goodman and Kruskal's Gamma and/or Related Tests
1. The computation of a confidence interval for the value of Goodman and Kruskal's gamma
2. Test 32b: Test for evaluating the null hypothesis H₀: γ1 = γ2
3. Sources for computing a partial correlation coefficient for Goodman and Kruskal's gamma
VII. Additional Discussion of Goodman and Kruskal's Gamma
1. Relationship between Goodman and Kruskal's gamma and Yule's Q
2. Somers' delta as an alternative measure of association for an ordered contingency table
VIII. Additional Examples Illustrating the Use of Goodman and Kruskal's Gamma
References
Endnotes
Appendix: Tables
Acknowledgments and Sources for Tables in Appendix
Table A1 Table of the Normal Distribution
Table A2 Table of Student's t Distribution
Table A3 Power Curves for Student's t Distribution
Table A3-A (Two-Tailed .01 and One-Tailed .005 Values)
Table A3-B (Two-Tailed .02 and One-Tailed .01 Values)
Table A3-C (Two-Tailed .05 and One-Tailed .025 Values)
Table A3-D (Two-Tailed .10 and One-Tailed .05 Values)
Table A4 Table of the Chi-square Distribution
Table A5 Table of Critical T Values for Wilcoxon's Signed-Ranks and Matched-Pairs Signed-Ranks Test
Table A6 Table of the Binomial Distribution, Individual Probabilities
Table A7 Table of the Binomial Distribution, Cumulative Probabilities
Table A8 Table of Critical Values for the Single-Sample Runs Test
Table A9 Table of the Fmax Distribution
Table A10 Table of the F Distribution
F.95
F.975
F.99
F.995
Table A11 Table of Critical Values for Mann-Whitney U Statistic
Two-Tailed .05 Values
One-Tailed .05 Values
Two-Tailed .01 Values
One-Tailed .01 Values
Table A12 Table of Sandler's A Statistic
One-tailed level of significance
Two-tailed level of significance
Table A13 Table of the Studentized Range Statistic
q.95 (α = .05)
q.99 (α = .01)
Table A14 Table of Dunnett's Modified t Statistic for a Control Group Comparison
Two-Tailed Values
One-Tailed Values
Table A15 Graphs of the Power Function for the Analysis of Variance (Fixed-Effects Model)
Table A16 Table of Critical Values for Pearson r
Table A17 Table of Fisher's zr Transformation
Table A18 Table of Critical Values for Spearman's Rho
Table A19 Table of Critical Values for Kendall's Tau
Table A20 Table of Critical Values for Kendall's Coefficient of Concordance
Table A21 Table of Critical Values for the Kolmogorov-Smirnov Goodness-of-Fit Test for a Single Sample
Table A22 Table of Critical Values for the Lilliefors Test for Normality
Table A23 Table of Critical Values for the Kolmogorov-Smirnov Test for Two Independent Samples
Table A24 Table of Critical Values for the Jonckheere-Terpstra Test Statistic
Table A25 Table of Critical Values for the Page Test Statistic
Index
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
Y
Z