[...] We estimate the causal effect of exam school attendance using a regression-discontinuity design, reporting both parametric and non-parametric estimates. We also develop a procedure that addresses the potential for confounding in regression-discontinuity designs with multiple, closely-spaced admissions cutoffs. The outcomes studied here include scores on state standardized achievement tests, PSAT and SAT participation and scores, and AP scores. Our estimates show little effect of exam school offers on most students' achievement in most grades. We use two-stage least squares to convert reduced form estimates of the effects of exam school offers into estimates of peer and tracking effects, arguing that these appear to be unimportant in this context. On the other hand, a Boston exam school education seems to have a modest effect on high school English scores for minority applicants. A small group of 9th grade applicants also appears to do better on SAT Reasoning. These localized gains notwithstanding, the intense competition for exam school seats does not appear to be justified by improved learning for a broad set of students.
For the non-economists on this blog, Josh Angrist is one of the top empirical economists in the world (as well as enormously fun to read, viz. my beach reading from last spring break), so having him evaluate your high school's academic outcomes is sort of like having John Madden come in and critique your JV football team.
As the authors openly admit in the paper, the experimental design (regression discontinuity, which was begging to be used to evaluate NYC specialized high school outcomes) is inherently limited in what it can say about students who were not near the cutoff. By assumption, one treats students who barely make it into the school as more or less the same as students who barely fail to make it in, so that "going to the specialized high school" can be considered quasi-randomly assigned. Given that (and all of the tests that are run to make sure this assumption is valid), it looks like the effect of the simple act of going to a specialized high school when you're on the cusp is pretty much nil. This may be surprising (and of course gets summarized in the Gothamist as "Stuyvesant, Bronx Science, Top Public Schools Not Worth It") but I think it becomes less so with a little unpacking...
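To make the mechanics concrete for the non-economists, here's a minimal sketch of a sharp RD estimate on simulated data. This is my own toy setup with made-up variable names, not the authors' code or data; the estimator is the standard one, a local linear regression with separate slopes on each side of the cutoff.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data: entrance-exam score is the running variable,
# centered so the admissions cutoff sits at zero.
n = 5_000
exam_score = rng.uniform(-1, 1, n)
admitted = (exam_score >= 0).astype(float)   # sharp RD: the offer flips at the cutoff
true_effect = 0.0                            # mirror the paper's headline: ~no effect
outcome = (0.5 * exam_score                  # outcomes vary smoothly with ability...
           + true_effect * admitted          # ...plus any jump from admission
           + rng.normal(0, 0.3, n))

# Local linear regression inside a bandwidth around the cutoff.
h = 0.25
near = np.abs(exam_score) <= h
X = sm.add_constant(np.column_stack([
    admitted[near],                          # jump at the cutoff = the RD estimate
    exam_score[near],                        # slope below the cutoff
    admitted[near] * exam_score[near],       # slope change above the cutoff
]))
fit = sm.OLS(outcome[near], X).fit(cov_type="HC1")
print(f"RD estimate at the cutoff: {fit.params[1]:.3f} (se {fit.bse[1]:.3f})")
```

On these simulated data the jump at the cutoff is zero by construction, echoing the paper's result; the point is just that everything rides on comparing kids inside a narrow window around the cutoff.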
First, it's important to note that there's a very big difference between being in the bottom 10% at one school versus the top 10% at another. Kids who barely make it into a specialized high school are competing and comparing themselves against the remaining 90% of students who had little difficulty getting in. Meanwhile, their comparables at other schools are at the high end of the distribution. Given the complex nature of peer effects, teacher attention, and everything else, I'd say that these populations end up having very different experiences.
Second, I'd argue that a major reason specialized schools exist is not to help marginal kids do better but to allow superstar kids to do extraordinarily well. Stuy is famously referred to as a "haven for nerds" and, like many top schools, succeeds by virtue of giving driven and talented kids the opportunity and resources to do what they want. I imagine it'd be difficult to tease out (perhaps something geographic? I know a lot of kids from my neighborhood in the Bronx who went to Bronx Science despite getting into Stuy because it was much closer...) but I strongly suspect that the causal effect of going to the school is hugely nonlinear in ability.
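A pure toy simulation of what I mean (the ability-dependent value-added is entirely my assumption; nothing here comes from the paper): if the school's effect grows with ability, the RD's local estimate at the cutoff can be essentially zero while the top admits gain a lot.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy world: the school's value-added grows with ability,
# so it's ~zero for marginal admits but large for superstars.
n = 100_000
ability = rng.uniform(-1, 1, n)              # exam score, centered at the cutoff
admitted = ability >= 0
value_added = np.where(admitted, np.maximum(ability, 0.0) ** 2, 0.0)

# The RD identifies the effect for marginal admits only...
marginal = admitted & (ability < 0.05)
print("Effect the RD can see (marginal admits):",
      round(value_added[marginal].mean(), 4))

# ...while the kids the school arguably exists for get something much bigger.
top = ability > 0.9
print("Effect for the top admits:", round(value_added[top].mean(), 3))
```

Both numbers are internally valid; the RD simply answers the marginal-admit question and is silent on the superstar one.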
Lastly, I'd say that even if we were to grant that the local treatment effect identified in the RD design reasonably proxies for the school's impact, test scores might not be the right outcomes to look at. The specialized high schools are often touted as a means of leveling the playing field between poor (often immigrant) public school kids and rich private school ones. I suspect that if the outcomes of interest were not test scores but rather admission to elite colleges or wages in one's mid-20s, the results would be rather different.
In sum, the paper is super tightly identified, but given the populations they can plausibly claim to compare and the outcomes evaluated, I'm not hugely surprised that the authors find little effect. My friends on Facebook and G+ who've been forwarding this to me can calm down. Unless they scored within 10% or so of the cutoff, in which case: sorry, guys, it was all for nothing...
* Full disclosure: I went to one of these schools (Stuyvesant) and *barely* made the cutoff, an experience that scared me into overperforming on standardized tests for the rest of my life.