
Criterion-related validity of the assessment center for selection of school administrators

✍ Scribed by Neal Schmitt; Scott A. Cohen


Publisher: Springer
Year: 1990
Tongue: English
Weight: 540 KB
Volume: 4
Category: Article
ISSN: 1874-8597


✦ Synopsis


Assessment centers are extraordinarily popular among professionals and practitioners involved in the selection of managerial personnel. This popularity is reflected in a large number of published articles (e.g., Finkle, 1976; Huck, 1976; Klimoski & Brickner, 1987), the publication of standards and ethical considerations involved in assessment center operations (Task Force on Development of Assessment Center Standards, 1977), and the unusual support of the federal courts (Byham, 1979). This enthusiasm is supported by a great deal of validation research. Over 50 studies were reviewed by Huck (1976), all of which reported positive findings concerning the relationship between assessment center ratings and subsequent job performance. Klimoski and Strickland (1977) found a median validity of .40 over 90 studies.

More recently, Gaugler, Rosenthal, Thornton, and Bentson (1987) reported a meta-analysis of assessment center validity. These authors calculated average validities for several different types of criteria. Results indicated the strongest evidence of validity occurred when assessment center performance was correlated with ratings of managerial potential (.45). Somewhat lower validities were found when the criteria used were ratings of job performance (.31), ratings of the manager's job performance on the dimensions used in the assessment center (.25), the performance of a manager in a training program (.31), or career advancement (.32). Overall, Gaugler and associates (1987) reported the average validity of assessment centers to be .37.
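As a rough illustration of how summary figures like these are formed, the sketch below computes a sample-size-weighted mean correlation, the basic averaging step in a bare-bones meta-analysis. The validity coefficients are the ones reported above; the per-criterion sample sizes are hypothetical placeholders, since the synopsis does not report them.

```python
# Sample-size-weighted mean validity (minimal sketch).
# The r values are those reported by Gaugler et al. (1987);
# the n values are HYPOTHETICAL weights for illustration only.
validities = {
    "ratings of managerial potential": (0.45, 300),   # (mean r, hypothetical n)
    "ratings of job performance": (0.31, 500),
    "ratings on assessment-center dimensions": (0.25, 200),
    "training performance": (0.31, 250),
    "career advancement": (0.32, 400),
}

def weighted_mean_r(results):
    """Average the correlations, weighting each by its sample size."""
    total_n = sum(n for _, n in results.values())
    return sum(r * n for r, n in results.values()) / total_n

print(round(weighted_mean_r(validities), 2))  # -> 0.33 with these made-up weights
```

With these made-up weights the average comes out near .33 rather than the reported .37; the actual overall figure depends on the real study weights, which the synopsis does not give.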

Research evidence regarding the application of assessment centers in the educational context is much less common. In fact, a recent review of the literature on principal selection indicated only a handful of studies that provided any empirical data evaluating the utility of procedures used to select administrators (Schechtman & Schmitt, 1987). Ehinger and Guier (1985) used a concurrent validation strategy to evaluate the Management Development Program at the University of Tulsa. These authors developed a composite assessment center score representing performance on the in-basket and the leaderless group discussion and found a moderately strong relationship with superintendents' ratings of job performance (r = .41, n = 103).
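For context, the strength of a correlation like r = .41 in a sample of 103 can be checked with the standard t-test for a Pearson correlation. This check is ours, using a textbook formula, and is not part of Ehinger and Guier's reported analysis.

```python
import math

def t_for_r(r, n):
    """t statistic (df = n - 2) for testing H0: the population correlation is zero."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Reported values from the concurrent validation study: r = .41, n = 103.
t = t_for_r(0.41, 103)
df = 103 - 2
print(round(t, 2), df)  # t is about 4.52 with 101 df
```

A t of about 4.52 comfortably exceeds the two-tailed .05 critical value (roughly 1.98 at 101 df), so a correlation of this size in a sample of 103 is well beyond chance level.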


📜 SIMILAR VOLUMES


Statistical assessment of a new criterio
✍ A. Gustavo Gonzalez; D. González-Arjona 📂 Article 📅 1995 🏛 Elsevier Science 🌐 English ⚖ 168 KB

Sir: It is well known that the most applied criteria for determining the number of underlying factors from a data matrix being factor-analyzed are Malinowski's indicator (IND) function [1] and Wold's double cross-validation (DCV) procedure [2]. Recently, we have proposed another proof that is

The criterion validity of the Center for
✍ R. Haringsma; G. I. Engels; A. T. F. Beekman; Ph. Spinhoven 📂 Article 📅 2004 🏛 John Wiley and Sons 🌐 English ⚖ 69 KB 👁 1 view

Abstract. Background: The criterion validity of the Center for Epidemiological Studies Depression scale (CES-D) was assessed in a group of elderly Dutch community residents who were self-referred to a prevention program for depression. Methods: Paper-and-pencil administration of the CES-D