Teacher evaluation program shows promising results

It’s a common refrain among some education policymakers: The way to get high-quality teachers is by offering pay for performance or instituting turnarounds to sweep in new faculties.

Yet the first study of a promising pilot program to overhaul teacher evaluation suggests that policymakers should turn to teachers themselves to have the best shot at weeding out poor performers and helping lackluster teachers improve.

The pilot, called the Excellence in Teaching Project, brought the well-regarded Charlotte Danielson Framework for Teaching to a diverse group of 44 Chicago elementary schools in 2008-09. At these schools, the Danielson framework—a rubric of effective practices that measures the quality of teachers’ lessons, classroom management and instruction—replaced the standard checklist that principals and teachers both say is virtually useless.

Of the 95 new, non-tenured teachers who were evaluated as part of the pilot, 8 percent received at least one “unsatisfactory” rating on practices that are part of the framework, according to the Consortium on Chicago School Research, which conducted the study. “Unsatisfactory” was defined as “doing harm to students” or “instruction that requires immediate intervention.”

Although not quite an apples-to-apples comparison, that 8 percent contrasts with just 0.3 percent of teachers throughout CPS who were rated “unsatisfactory” using the existing checklist system.

In the pilot, 37 percent of teachers received one of the two highest ratings, compared to 91 percent under the district’s existing system.

“One thing the pilot system does a good job of is differentiating between high and low (performers),” says Lauren Sartain, a Consortium researcher who worked on the study. “You don’t see ratings all clumped at the top. The checklist didn’t provide principals with a definition of what is excellent or superior. This system gives clear criteria for what that means.”

Having specific criteria opens the path to improving instruction, Sartain adds. “Principals and teachers can have a dialogue about teaching performance that maybe they weren’t having before.”

In fact, 57 percent of principals had positive attitudes about the framework and said they saw changes in instruction as a result of using it.

Overall, principals “generally were consistent in the way they rated within schools,” Sartain says. “We didn’t see principals cherry-picking their favorite teachers.”

The researchers found no significant difference between the ratings given by principals and by veteran teachers who were also trained to use the framework, suggesting that the rubric—already in use in other districts—can be used in Chicago with reliable results.

Interestingly, though, the veteran teachers were the most demanding about instruction. They were less likely than principals to give a “distinguished” rating—the highest—on instruction, a finding that bolsters the view that teachers themselves are the toughest judges when it comes to identifying high-quality instruction. The finding could add fuel to any push for peer evaluation of teachers, something that Cincinnati Public Schools has adopted along with the Danielson framework.

Under state law, half of school districts must have new evaluations in place by 2012.

In the next phase of the study, Consortium researchers will examine whether high ratings on the framework correlate with higher student test scores. That study will include 200 elementary schools and 30 high schools.