Combine the past 20 years of CHC-driven intelligence test development and research activities with the ongoing refinement and extension of CHC theory (McGrew, 2005, 2009), and one concludes that these are exciting times in the field of intelligence testing. But
is this excitement warranted in school psychology? Has the drawing of a reasonably circumscribed “holy grail” taxonomy of cognitive abilities led us to the promised land of intelligence testing in the schools, that is, using the results of cognitive assessments to better the education of children with special needs? Or have we simply become more sophisticated in the range of measures and tools used to “sink shafts at more critical points” in the mind (see Lubinski, 2000), measures which, although important for understanding and studying human individual differences, fail to improve diagnosis, classification, and instruction in education?
It is an interesting coincidence that McDermott, Fantuzzo, and Glutting’s (1990) now infamous and catchy admonition to psychologists who administer intelligence tests, “just say no to subtest analysis,” occurred almost 20 years ago, at the very time contemporary CHC intelligence theory and assessment were emerging. By 1990, McDermott and colleagues had convincingly demonstrated, largely via core profile analysis of the then-current Wechsler trilogy of batteries (WPPSI, WISC-R, WAIS-R), that ipsative strength-and-weakness interpretation of subtest profiles was not psychometrically sound. In essence, “beyond g (full-scale IQ), don’t bother.”
We believe that optimism is appropriate regarding the educational relevance of CHC-driven test development and research. Surprisingly, cautious optimism has been voiced by prominent school psychology critics of intelligence testing. In a review of the WJ-R, Ysseldyke (1990) described the battery as representing “a significant milestone in the applied measurement of intellectual abilities” (p. 274). More importantly, Ysseldyke indicated he was “excited about a number of possibilities for use of the WJ-R in empirical investigations of important issues in psychology, education, and, specifically, in special education…we may now be able to investigate the extent to which knowledge of pupil performance on the various factors is prescriptively predictive of relative success in school. That is, we may now begin to address treatment relevance” (p. 273).
Reschly (1997), responding to the first CHC-based cognitive-achievement causal modeling research report (McGrew, Flanagan, Keith, & Vanderwood, 1997), which demonstrated that some specific CHC abilities are important in understanding reading and math achievement above and beyond the effect of general intelligence (g), concluded that “the arguments were fairly convincing regarding the need to reconsider the specific versus general abilities conclusions. Clearly, some specific abilities appear to have potential for improving individual diagnoses. Note, however, that it is potential that has been demonstrated” (p. 238).
Clearly, the potential and promise of improved intelligence testing, vis-à-vis CHC-organized test batteries, has been recognized since 1989. But has this promise been realized during the past 20 years? Has our measurement of CHC abilities improved? Has CHC-based cognitive assessment provided a better understanding of the relations between specific cognitive abilities and school achievement? Has it improved identification and classification? More importantly, in the current educational climate, where does CHC-grounded intelligence testing fit within the context of the emerging Response-to-Intervention (RTI) paradigm?