Why NBPTS and Other Innovations Receive So Little Critical Attention


By J. E. Stone

Education Consumers ClearingHouse

June 2002


The Education Commission of the States’ (ECS) decision to review the Tennessee study of teachers certified by the National Board for Professional Teaching Standards (NBPTS) led me to take a closer look at why reform initiatives like NBPTS receive so little critical scrutiny.  Our May 2002 briefing is one of only a few critiques.


What I found is that despite a good bit of policy analysis in the area of teaching quality, neither ECS nor any of the other major organizations that advise policymakers has publicized the fact that NBPTS’s claims rest on a shaky foundation.  I refer here to organizations such as the regional education labs, the state departments of education, and the university research centers.  Rather, all of the publicly voiced skepticism seems to have come from organizations outside the governmentally funded, grant-seeking research and policy establishment.  More remarkable still--in light of their interest in teaching quality--these organizations have offered no direct, public criticism of the various pedagogical fads and failures that have come to light over the decades.


Feted and Fizzled Innovations


Since the early sixties--when I entered teaching--the number of educational innovations and initiatives that have been feted, then fizzled, is astounding.  I won’t attempt a comprehensive list, but the following ten readily come to mind:

1)  whole-language reading instruction,

2)  bilingual education,

3)  open education,

4)  self-esteem enhancement,

5)  discovery learning,

6)  new math,

7)  learning style matching,

8)  developmentally appropriate practice,

9)  outcomes-based education, and

10) heterogeneous grouping.  


Where Are the Reports?


Given the notoriety of these innovations and the number of organizations concerned with teaching quality, I searched Education Week for stories referencing cautionary reports, pessimistic recommendations, or postmortem analyses.  Education Week’s archives go back to 1981, and my assumption was that a cautionary statement or critical investigation by a major research and policy organization would be newsworthy.


What I found was a remarkable absence of reports on faulty teaching innovations.  If the major education research and policy organizations issued reports on this subject, they must not have publicized their findings.  At least, “American Education’s Newspaper of Record” references no such reports. 


Education Week did contain several cautionary assessments of bilingual education drawn from research sponsored by the U.S. Department of Education.  At the time, William Bennett was head of USDOE and Chester E. Finn, Jr. was in charge of the Office of Educational Research and Improvement (OERI).  In particular, a number of Education Week articles cited an American Institutes for Research study of bilingual education.  Also, a 1985 article cited USDOE-sponsored studies questioning the efficacy of early intervention programs.


Another notable exception was a 1987 article on whole-language reading instruction written by Professor Patrick Groff--a current member of our Education Consumers Consultants Network.  Many years in advance of the National Reading Panel’s report, Groff found whole-language instruction to be gravely flawed and unsupported by sound research.  On the whole, however, articles containing references to failed or failing teaching innovations were rare, and none cited studies from the organizations and agencies noted above.  It is as though they collectively followed a policy of “see, hear, and speak no evil.”


Questions: Mistakes Made, Money Wasted, Potential Lost?


Clearly, studies of failed innovations would be useful to those who seek to improve teaching.  For example, teachers and policymakers would have a better chance of avoiding the same mistakes if they knew how such innovations originate and how they are propagated.  Questions of this kind could be answered by nothing more complicated than a survey of teachers. 


Studies of cost would be worthwhile too.  For example, the cost of implementing a failed innovation, the cost of retraining teachers, the cost of student remediation, and the human cost of unremediated failure are all factors that teachers and policymakers would want to consider in decisions about future innovations.   


So why have the major education research and policy organizations failed to report on ineffective innovations? 


This question is the subject of our June 2002 Consultants Network Briefing.  It examines an excellent report from the Fordham Foundation on the devolution of a major reform initiative--the New American Schools (NAS).  In essence, NAS was unsuccessful because it mistook decades-old fads for revolutionary reforms.


Our Briefing suggests that fads escape critical examination because the research and policy groups that should be studying them cannot afford to offend educators.  For grant-seeking organizations, collaborative relationships with the education community are a necessity.


Policymakers listening to the recommendations of education’s research and policy establishment should bear this limitation in mind.  Just as most brokerage houses give only “buy” recommendations, most research and policy organizations explore only an innovation’s promise, not its risks.