New Charter School Study: More Bad News for Corporate Education Reform

Copied from Common Dreams

The first national charter school study was conducted in 2009 by CREDO at Stanford, and the co-funders of the study (the Walton Foundation and Pearson) were not enamored of the results. So bad were the findings for charter school fans that the study, though given skimpy coverage by the LA Times, was never reported by WaPo or the NYTimes, and received minimal coverage from one news magazine, U.S. News & World Report, which obviously did not get the memo:

On average, charter schools are not performing as well as their traditional public-school peers, according to a new study that is being called the first national assessment of these school-choice options. The study, conducted by the Center for Research on Education Outcomes at Stanford University, compared the reading and math state achievement test scores of students in charter schools in 15 states and the District of Columbia—amounting to 70 percent of U.S. charter school students—to those of their virtual “twins” in regular schools who shared with them certain characteristics. The research found that 37 percent of charter schools posted math gains that were significantly below what students would have seen if they had enrolled in local traditional public schools. And 46 percent of charter schools posted math gains that were statistically indistinguishable from the average growth among their traditional public-school companions. That means that only 17 percent of charter schools have growth in math scores that exceeds that of their traditional public-school equivalents by a significant amount.

In reading, charter students on average realized a growth that was less than their public-school counterparts but was not as statistically significant as differences in math achievement, researchers said.

“We are worried by these results,” Margaret Raymond, director of CREDO and lead author of the report, Multiple Choice: Charter School Performance in 16 States, said at a news conference. “This study shows that we’ve got a 2-to-1 margin of bad charters to good charters.”  . . . .
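The arithmetic behind those CREDO percentages is worth spelling out, since it is exactly the "2-to-1 margin" Raymond describes. A quick sketch, using only the figures quoted above:

```python
# Arithmetic behind the CREDO figures quoted above.
worse = 37   # % of charters with math gains significantly below local public schools
same = 46    # % with math gains statistically indistinguishable from public schools
better = 100 - worse - same
print(better)                    # 17 -> only 17% significantly outperform
print(round(worse / better, 1))  # 2.2 -> roughly Raymond's "2-to-1 margin" of bad charters to good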

This new study, released in Friday's news dump and entitled "Charter-School Management Organizations: Diverse Strategies and Diverse Student Impacts," has more bad news for school privatizers who prefer the charter route. Even though a swarm of urban school colonizers from Gates, Walton, and the NewSchools Venture Fund helped set up the parameters for this study in order to get the most favorable outcome, and even though the Gates "research" hothouse, the Center on Reinventing Public Education, co-authored the study, there is enough bad news for charter proponents, echoing years of previous research on charters, that this study, too, has been ignored by the corporate media. Ed Week had a piece on the new study entitled "Academic Gains Vary Widely for Charter Networks," and Time had a pre-release gloss by corporate spinner extraordinaire Andy Rotherham. That was it for coverage, except for a misleading and dissembling press release by Jim Peyser at the NewSchools Venture Fund. Only one of the national charter school associations offered a press release on this big event. And most telling, the Center on Reinventing Public Education, the study's own co-author, does not mention it anywhere on its website. Shhhh.

Mathematica led the study, and as its press release indicates, the study "was commissioned by NewSchools Venture Fund, with the generous support of the Bill & Melinda Gates Foundation and the Walton Family Foundation." An undisclosed number of the sludge-tank "thought leaders," including Andy Rotherham, carefully set up the parameters for the sample to pump up the corporate-welfare Charter Management Organizations (CMOs). These are the corporate non-profit tax sponges preferred by the vulture philanthropy movement.

Even though the names of the CMOs are not listed in the Report, Jim Peyser, insider and hovering point man for the NSVF's involvement in the study, mentions these well-funded, total-compliance testing camps as representative of the CMOs that were part of the study: KIPP, Aspire, Achievement First, Noble Network, and Uncommon Schools.

Rotherham, now writing for Time, was in fact given exclusive access to the Report in order to spin the story the best way possible before the release. And as lead spinner, Rotherham gave it the ol' college try. A few clips, with comments:

Rotherham spinning in Time:

The study found that, in general, students at charter-network schools outperform similar students at traditional public schools, although sometimes not by very much.

Findings from the Study’s Executive Summary:
Test score impact estimates for the average CMO after two to three years in middle school are positive in all four subjects, but they are not statistically significant.

The overall average impacts mask a great deal of variation among CMOs. Two years after students enroll in the CMOs covered by the impact analysis, they experience significantly positive math impacts in half of these CMOs (11 of 22), while students in about one-third of the CMOs (7 of 22) do significantly worse in math. Similarly, students in nearly half of the CMOs (10 of 22) experience significantly positive impacts in reading, while students in about a quarter of CMOs (6 of 22) experience reading impacts that are significantly negative.  Table 3 shows that half of the CMOs (11 of 22) have significantly positive impacts in math or reading and nine have significantly negative impacts in one or both subjects; 10 of the 22 CMOs have significantly positive impacts in both subjects while only four have significantly negative impacts in both subjects (p. xxvii).

That is, even with all the advantages that charter schools enjoy, and even with the selective culling that took place to create the sample for this study, charters are, on average, doing no better than the public schools that the charterites want to shut down.
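For the arithmetic-minded, the Report's overlapping tallies do hang together: the "math or reading" figures follow from the per-subject counts by simple inclusion-exclusion. A quick consistency check, using only the numbers quoted above:

```python
# Consistency check on the Report's tallies (p. xxvii), via inclusion-exclusion.
pos_math, pos_read, pos_both = 11, 10, 10   # significantly positive impacts
neg_math, neg_read, neg_both = 7, 6, 4      # significantly negative impacts

# |A or B| = |A| + |B| - |A and B|
print(pos_math + pos_read - pos_both)  # 11 -> positive in math or reading
print(neg_math + neg_read - neg_both)  # 9  -> negative in one or both subjects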

Table 1 from the Report illustrates two of the primary reasons that charters have a test-performance advantage over public schools: charters regularly have fewer students who are English language learners, and fewer students with special needs and disabilities.


Notice, too, that this study compares charters to the host-district average, rather than to the schools in the immediate vicinity. Gary Miron and others have noted elsewhere that these district comparisons often mask even larger percentages of ELL and SPED children in the poorest communities where charters replace public schools.

And how about class size differences between charters and their public counterparts in the host districts?

Class sizes and pupil-to-instructor ratios are also smaller in CMO schools than in their host districts. The average pupil-to-instructor ratios in math and reading are about 20.9 students per instructor; by contrast, in comparison schools the ratios are 23.5 in math and 23.2 in reading (p. xxiv).

And how about that “creaming” reputation that charters have, drawing as they do students with higher achievement to begin with, thus making any subsequent comparisons to public school students skewed? Remember that we know from Table 1 above that the sample for this study was disproportionately African American and Hispanic when compared to the host district.  From the Report:

Students Entering CMO (Middle) Schools Typically Have Prior Achievement Levels That Are Similar to the Local Average and Somewhat Higher Than the Local Average for Black and Hispanic Students

. . . most CMOs attract somewhat higher achieving students of color relative to those served by their host districts. Thirteen of 22 CMOs in our sample serve black students who had significantly higher average pre-entry reading test scores than the averages for their black peers in the host district; only two CMOs served black students with scores significantly lower than those of black students locally. Likewise, the pre-entry reading scores of Hispanic students in 13 of 23 CMOs were significantly higher than Hispanic averages locally, and only three CMOs served Hispanic students with significantly lower baseline reading achievement than that of other Hispanic students in their districts. The percentages are similar for math test scores. Thus, while CMOs attract a disproportionate number of black and Hispanic students, these students tend to have higher test scores on average when they enter the CMO than their black and Hispanic peers in the host districts (pp. 19-20).

The concluding section of the study takes up this subject again, in discussing peer effects (my bolds):

. . .because CMOs operate schools of choice, the families they attract are different in both measurable and unmeasurable ways, which may give rise to peer effects. The selection process of students is driven in part by who learns about and chooses to apply to CMO schools. It is possible that the parents or students who end up enrolling in some CMO schools are more motivated or have other assets. In addition, CMOs can encourage certain families to apply or enroll in their school; even those with random lotteries can target their recruitment efforts and ask students to sign agreements to attend regularly and do their homework. An individual student may benefit from being in the same school and classroom with other students with higher levels of motivation or parental support. If peer effects are contributing to CMO impacts, this does not mean that our impacts are improperly measured. Indeed, our experimental results suggest the impacts are accurate. But it could affect our understanding of the mechanisms behind the impacts: Peer effects may explain why CMO students do better than they would have had they been placed in a school or classroom where there are fewer students like themselves. If that turns out to be true, it would also have important implications for policy: Similar effects might not be achieved, for example, if CMO practices were directly applied to conventional public schools that are not schools of choice. While peer effects can be challenging to estimate, future research should explore their importance (p. 75).

Besides Rotherham's, the other propaganda spin comes from Jim Peyser of the venture-philanthropy outfit NewSchools City Funds, a spinoff of NSVF. Peyser is intent on making the claim that larger is better (volume, volume, volume!) and that TFA is better (cheaper, cheaper, cheaper!):

There is a statistically significant association between math achievement in CMOs and the percentage of new teachers coming from Teach For America and Teaching Fellows. This finding not only demonstrates the value of TFA to the charter school sector, but it underscores the importance of alternative teacher preparation programs in general to addressing public education's human capital challenges. Other staffing decisions (including opportunities for tenure) were not associated with positive impacts.

This is what the Report actually says (my bolds):

Math impacts are higher among CMOs that rely more heavily on TFA and the Teaching Fellows programs as sources of new teachers. Specifically there is a statistically significant association between math impacts and the percentage of new teachers from these two sources, both of which tend to recruit and provide some training to recent graduates of highly selective colleges. One should be cautious about placing substantial weight on this finding because this is one of the many secondary hypotheses tested and the positive association could be due to random chance (p. 69).

Another of Peyser's misleading conclusions that he would like to see in the Report has to do with the size of CMOs (Peyser's bolds):

The strongest CMOs tend to be larger than the lower performing ones, countering a long-held hypothesis that scale and quality are incompatible.

And yet the Report specifically warns against drawing the conclusion that Peyser draws (my bolds):

Large CMOs in our sample tend to have positive impacts, while small CMOs are more likely to have negative impacts. This might indicate that funders have had some success in supporting the expansion of CMOs that are more effective. In particular, eight of the 12 large CMOs (those operating more than 8 schools in 2009-10) have significant positive impacts in at least one subject, while only 3 of the 10 small CMOs (those operating 8 or fewer schools in 2009-10) have significant positive impacts in at least one subject. Meanwhile, only 2 of 12 large CMOs have significantly negative impacts in at least one subject, while 7 of 10 small CMOs have significantly negative impacts in at least one subject. CMOs that have positive impacts in both reading and math operate an average of 12 schools, while those with negative impacts in both subjects operate an average of 6 schools. Despite this pattern, effectiveness is not related to size in a linear way: Correlations between math and reading CMO impacts and CMO size are not statistically significant (p. 58).

. . . .

We also looked at whether absolute CMO growth (change in the number of schools operated by the CMO between fall 2004 and fall 2009) and relative CMO growth (the number of schools operated by the CMO in fall 2009 divided by the number of schools operated by the CMO in fall 2004) are associated with two-year impacts in math and reading. In both of these cross-sectional analyses, we found no statistically significant associations (p. 59).
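The distinction the Report is drawing here, between a large-versus-small group difference and a linear relationship between size and impact, is easy to see with a toy example. Below is a minimal sketch with made-up numbers loosely patterned on the Report's tallies (22 hypothetical CMOs); the sizes and impact values are illustrative, not the study's data:

```python
# A toy illustration (made-up numbers): group means can differ by size category
# even when the linear correlation between size and impact is not significant.
from scipy.stats import pearsonr

sizes = [3, 4, 4, 5, 6, 6, 7, 8, 8, 8,                   # "small" CMOs (8 or fewer schools)
         9, 10, 11, 12, 13, 14, 16, 18, 20, 22, 26, 30]  # "large" CMOs (more than 8 schools)
impacts = [-0.08, -0.06, -0.10, 0.09, -0.05, -0.07, 0.06, -0.09, 0.08, -0.06,
            0.07, 0.09, -0.06, 0.05, 0.10, 0.01, 0.08, -0.05, 0.06, 0.00, 0.07, 0.04]

small, large = impacts[:10], impacts[10:]
print(sum(small) / len(small))   # about -0.03: small CMOs negative on average
print(sum(large) / len(large))   # about +0.04: large CMOs positive on average

r, p = pearsonr(sizes, impacts)  # linear association across all 22 CMOs
print(round(r, 2), round(p, 2))  # r is modest and p > 0.05: not significant
```

That is Mathematica's point: the big-versus-small pattern is real in its sample, but it does not add up to Peyser's "bigger is better."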

Finally, there are other findings of this study that you would never see mentioned by Rotherham, Peyser, or Arne Duncan. No positive impact could be attributed to performance-based teacher compensation, to singular curricular or instructional approaches (think Common Core), or to the constant use of "formative" testing to prepare for more testing:

Several other notable CMO-level characteristics do not show significant relationships with impacts.

We found no significant relationship between impacts and three other factors that we posited might contribute to student achievement. Specifically, impacts are not correlated with (1) the extent to which CMOs define a consistent educational approach through the selection of curricula and instructional materials, (2) performance-based teacher compensation, or (3) frequent formative student assessments (although impacts are larger when teachers frequently use student test results to modify lesson plans). Nor are impacts significantly associated with school or class sizes.  Math impacts are positively correlated with more hours of annual instruction, but this relationship appears to be largely due to the association of instructional time with behavior policies and coaching. We ran multivariate regressions of impacts on key practices that were significantly associated with impacts in bivariate regressions. In the multivariate regressions, the association between impacts and instructional time declined substantially and became not statistically significant (p. xxx).
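The instructional-time finding in that passage is a textbook confounding story: hours of instruction look predictive on their own, but the association fades once behavior policies and coaching enter the regression. Here is a minimal sketch of that pattern with simulated data; the variable names and numbers below are mine, not the study's:

```python
# Simulated illustration of a bivariate association that shrinks in a
# multivariate regression once the confounder is controlled for.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 22
behavior = rng.normal(size=n)                                  # behavior-policy/coaching index (confounder)
hours = 1200 + 100 * behavior + rng.normal(scale=50, size=n)   # instructional hours track the confounder
impact = 0.05 * behavior + rng.normal(scale=0.03, size=n)      # impact driven by the confounder, not hours

# Bivariate: impact regressed on hours alone picks up the confounded association.
biv = sm.OLS(impact, sm.add_constant(hours)).fit()
print(biv.params, biv.pvalues)

# Multivariate: adding the behavior index shrinks the hours coefficient
# toward zero, mirroring the pattern Mathematica describes on p. xxx.
X = sm.add_constant(np.column_stack([hours, behavior]))
multi = sm.OLS(impact, X).fit()
print(multi.params, multi.pvalues)
```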

One has to wonder what it will take for this latest Gates/Walton/Broad failure to become too obvious to ignore.  The elephant trumpets, the corporate spinners and scammers double down, and the politicians concur that In God We Trust.
