11.30.2012

Come to our AGU session: Quantitative Modeling of Social and Environmental Systems


Jesse and I are convening a session at the American Geophysical Union with our colleagues Ram Fishman and Gordon McCord this coming Monday. If you're in the Bay Area, come check it out! We have a diverse and exciting lineup.

U14A.  Quantitative Modeling of Social and Environmental Systems
4:00 PM - 6:00 PM Monday; 102 (Moscone South)

4:00 PM - 4:30 PM
U14A-01. Climate Change: Modeling the Human Response
Michael Oppenheimer; Solomon M. Hsiang; Robert E. Kopp
ABSTRACT: Integrated assessment models have historically relied on forward modeling including, where possible, process-based representations to project climate change impacts. Some recent impact studies incorporate the effects of human responses to initial physical impacts, such as adaptation in agricultural systems, migration in response to drought, and climate-related changes in worker productivity. Sometimes the human response ameliorates the initial physical impacts, sometimes it aggravates them, and sometimes it displaces them onto others. In these arenas, understanding of underlying socioeconomic mechanisms is extremely limited. Consequently, for some sectors where sufficient data have accumulated, empirically based statistical models of human responses to past climate variability and change have been used to infer response sensitivities which may apply under certain conditions to future impacts, allowing a broad extension of integrated assessment into the realm of human adaptation. We discuss the insights gained from and limitations of such modeling for benefit-cost analysis of climate change. 
4:30 PM - 5:00 PM
U14A-02. Dams and Intergovernmental Transfers
Xiaojia Bao
ABSTRACT: Gainers and losers are always associated with large-scale hydrological infrastructure construction, such as dams, canals and water treatment facilities. Since most of these projects are public services and public goods, some of these uneven impacts cannot be fully resolved by markets. This paper explores whether governments make any effort to balance the uneven distributional impacts caused by dam construction. It shows that dam construction brought an average 2% decrease in per capita tax revenue in upstream counties, a 30% increase in dam-location counties and an insignificant increase in downstream counties. Similar distributional impacts were observed for other outcome variables, like rural income and agricultural crop yields, though the impacts differ across crops. The paper also finds some balancing effort in inter-governmental transfers to reduce the unevenly distributed impacts caused by dam construction. However, overall the inter-governmental fiscal transfers were not large enough to fully correct those uneven distributions, reflected in a 2% decrease of per capita GDP in upstream counties and increases of per capita GDP in local and downstream counties. This paper may shed some light on governmental considerations in the decision-making process for large hydrological infrastructures. 
5:00 PM - 5:30 PM
U14A-03. Physically-based Assessment of Tropical Cyclone Damage and Economic Losses
Ning Lin
ABSTRACT: Estimating damage and economic losses caused by tropical cyclones (TC) is a topic of considerable research interest in many scientific fields, including meteorology, structural and coastal engineering, and actuarial sciences. One approach is based on the empirical relationship between TC characteristics and loss data. Another is to model the physical mechanism of TC-induced damage. In this talk we discuss the physically-based approach to predict TC damage and losses due to extreme wind and storm surge.  
We first present an integrated vulnerability model, which, for the first time, explicitly models the essential mechanisms causing wind damage to residential areas during storm passage, including windborne-debris impact and the pressure-debris interaction that may lead, in a chain reaction, to structural failures (Lin and Vanmarcke 2010; Lin et al. 2010a). This model can be used to predict the economic losses in a residential neighborhood (with hundreds of buildings) during a specific TC (Yau et al. 2011) or applied jointly with a TC risk model (e.g., Emanuel et al 2008) to estimate the expected losses over long time periods. Then we present a TC storm surge risk model that has been applied to New York City (Lin et al. 2010b; Lin et al. 2012; Aerts et al. 2012), Miami-Dade County, Florida (Klima et al. 2011), Galveston, Texas (Lickley, 2012), and other coastal areas around the world (e.g., Tampa, Florida; Persian Gulf; Darwin, Australia; Shanghai, China). 
These physically-based models are applicable to various coastal areas and have the capability to account for the change of the climate and coastal exposure over time. We also point out that, although made computationally efficient for risk assessment, these models are not suitable for regional or global analysis, which has been a focus of the empirically-based economic analysis (e.g., Hsiang and Narita 2012). A future research direction is to simplify the physically-based models, possibly through parameterization, and make connections to the global loss data and economic analysis. 
5:30 PM - 6:00 PM
U14A-04. Modeling agricultural commodity prices and volatility in response to anticipated climate change
David B. Lobell; Nam Anh Tran; Jarrod Welch; Michael Roberts; Wolfram Schlenker
ABSTRACT: Food prices have shown a positive trend in the past decade, with episodes of rapid increases in 2008 and 2011. These increases pose a threat to food security in many regions of the world, where the poor are generally net consumers of food, and are also thought to increase risks of social and political unrest. The role of global warming in these price reversals has been debated, but little quantitative work has been done. A particular challenge in modeling these effects is that they require understanding links between climate and food supply, as well as between food supply and prices. Here we combine the anticipated effects of climate change on yield levels and volatility with an empirical competitive storage model to examine how expected climate change might affect prices and social welfare in the international food commodity market. We show that price level and volatility do increase over time in response to decreasing yields and increasing yield variability. Land supply and storage demand both increase, but production and consumption continue to fall, leading to a decrease in consumer surplus, and a corresponding though smaller increase in producer surplus.

Help map agriculture in Africa


Lyndon Estes writes
I have been working with Kelly Caylor developing a crowdsourced crop mapping project that is ready for some user testing.  I have asked my network of friends to test it a bit, in the hopes that we can see how the system performs, and get some initial data to show at my talk on this project next week at AGU.  
His instructions:
Hello Friends, I am kindly requesting your help with a research project. Our goal is to use crowdsourcing + google satellite imagery to map crop fields in Africa. We have just developed our prototype, which connects users on Amazon's Mechanical Turk Service to our field mapping interface.  
So, if any of you had a few minutes to spare over the next few days and an Amazon account (it's very easy to register as a Mechanical Turk user if you have an account, and not much harder to get an Amazon account if you don't have one), I would be very grateful if you could join up and map a few fields. The link below is to our website, which describes the registration and mapping process further.  
This project will be a for-pay endeavor within a few weeks, but for this stage when we are still working out bugs, we are going through Mechanical Turk's testing site (workersandbox.mturk.com), which pays fake money. Once you have registered, please go to sandbox to look for our HITs (Human Intelligence Tasks).  
Your help and feedback will be greatly appreciated, both for development purposes and for providing some data that I can include in my presentation on this project (Tuesday in San Fran).  
Thanks, Lyndon 
http://mappingafrica.princeton.edu/

11.12.2012

Were the cost estimates for Waxman-Markey overstated by 200-300%?


Jesse and I both come from the Sustainable Development PhD Program at Columbia, which has once again turned out a remarkable crop of job market candidates (see outcomes from 2012 and 2011). We both agreed that their job market papers were so innovative, diverse, rigorous and important that we wanted to feature them at FE.  Their results are striking and deserve dissemination (we would probably post them anyway even if the authors weren't on the market), but they also clearly illustrate what the Columbia program is all about. (Apply to it here, hire one of these candidates here.) This is the first post.

Good policy requires good cost-benefit analysis. But when we are developing innovative policies, like those used to curb greenhouse gas emissions, it's notoriously difficult to estimate both costs and benefits since no analogous policies have ever been implemented before.  The uncertainty associated with costs and benefits tends to make many forms of environmental policy difficult to implement in part because the imagined costs (when policy-makers are considering a policy) tend to exceed actual costs (what we observe after policies are actually implemented). Kyle Meng develops an innovative approach, linking Intrade predictions about the success of Waxman-Markey with stock-market returns and abrupt political events, to measure the cost of the bill to firms as predicted by the market. This is very different from standard technocratic approaches used by the government to assess the cost of future policies, which rely on parameterized models of technology and econometric models of behavior ("structural models").

By relying on the market, Meng infers what players in affected industries actually expect to happen in their own industry. The result is a bit surprising: Meng estimates that standard cost estimates for WM (produced before it failed to pass) are 200-300% larger than what players in the industry actually expected it to cost them.  But this still didn't stop industry players from fighting the bill -- one of the ways that Meng validates his approach is to use lobbying records to show that firms which expect to suffer more from the bill (as recovered using his approach) spend more money to fight it.

It's tough to tell whether Meng's approach or the structural models are more accurate predictors of firm-level costs since WM was never brought into law, so the outcomes will remain forever unobserved. But he does show that for several similar laws (e.g., the Montreal Protocol), the structural predictions tended to overestimate the actual costs of implementation (which were observed after the law was implemented and outcomes observed) by roughly a factor of two. This doesn't prove that Meng's approach is more accurate, but it shows that his estimate for the bias of the structural approach (with regard to WM) is consistent with the historical biases of these models.
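A stylized sketch can make the mechanics of this kind of event study concrete: regress a firm's abnormal stock return on the daily change in the prediction market's passage probability, so the slope scales up to the expected effect of certain passage on firm value. The data below are synthetic and the variable names are mine, not Meng's; this illustrates the estimator, not his implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily data: change in the prediction-market probability that the
# bill passes, and a firm's abnormal return (market-model residual).
d_prob = rng.normal(0.0, 0.05, size=250)  # daily change in P(bill passes)
true_beta = -0.02                         # carbon-intensive firm loses ~2% of value
abnormal_ret = true_beta * d_prob + rng.normal(0.0, 0.01, size=250)

# OLS of abnormal returns on probability changes; the slope estimates the
# effect on firm value of moving passage probability from 0 to 1.
X = np.column_stack([np.ones_like(d_prob), d_prob])
beta_hat = np.linalg.lstsq(X, abnormal_ret, rcond=None)[0][1]
print(round(beta_hat, 3))
```

In Meng's setting the cross-section of these firm-level slopes (larger in magnitude for more carbon-intensive firms) is what aggregates into the total-cost estimate.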

The paper:

The Cost of Potential Cap-and-Trade Policy: An Event Study using Prediction Markets and Lobbying Records
Kyle Meng
Abstract: Efforts to understand the cost of climate policy have been constrained by the limited number of policies available for evaluation. This paper develops an empirical method for forecasting the expected cost to firms of a proposed climate policy that was never realized. I combine prediction market prices, which reflect market beliefs over regulatory prospects, with stock returns in order to estimate the expected cost to firms of the Waxman-Markey cap-and-trade bill, had it been implemented. I find that Waxman-Markey would have reduced the market value of a listed firm by an average of 2.0%, resulting in a total cost of $165 billion for all listed firms. The strongest effects are found in sectors with greater carbon and energy intensity, import penetration, and exposure to U.S. product markets, and in sectors granted free allowances. Because the values of unlisted firms are not observed, I use firm-level lobbying expenditures within a partial identification framework to obtain bounds for the costs borne by unlisted firms. This procedure recovers a total cost to all firms between $110 and $260 billion. I conclude by comparing estimates from this method with Waxman-Markey forecasts by prevailing computable general equilibrium models of climate policy.
In figures...

Abrupt political events that affect the expected success of WM are quantified by looking at expectations in Intrade markets:

click to enlarge

When WM appears more likely, the stock prices of CO2 intensive firms falls on average:

click to enlarge

Firms that are more CO2 intensive are affected more strongly:

click to enlarge

Firms whose stock prices are more responsive to WM lobby harder against it:

click to enlarge

How these cost estimates compare with structural cost estimates, and similar statistics for historical regulations that actually passed into law.

click to enlarge

Take home summary: Cap and trade in the USA probably would have been cheaper to implement than we thought, according to the firms it was going to regulate. 

11.09.2012

Climate and Conflict in East Africa (carefully interpreting statistical results revisited)

Andrew Revkin asked for thoughts on a recent PNAS paper on conflict. He posted a watercolor regression plot that Marshall Burke and I made, but I guess he (understandably) didn't have space for my lengthy statistical commentary.  Read my appendix to Revkin's post on the G-FEED blog here.

figure explained here

11.07.2012

An American, a Canadian and a physicist walk into a bar with a regression... why not to use log(temperature)

Many of us applied statisticians like to transform our data (prior to analysis) by taking the natural logarithm of variable values.  This transformation is clever because it transforms regression coefficients into elasticities, which are especially nice because they are unitless. In the regression

log(y) = b* log(x)

b represents the percentage change in y that is associated with a 1% change in x. But this transformation is not always a good idea.  

I frequently see papers that examine the effect of temperature (or control for it because they care about some other factor) and use log(temperature) as an independent variable.  This is a bad idea because a 1% change in temperature is an ambiguous value. 

Imagine an author estimates

log(Y) = b*log(temperature)

and obtains the estimate b = 1. The author reports that a 1% change in temperature leads to a 1% change in Y. I have seen this done many times.

Now an American reader wants to apply this estimate to some hypothetical scenario where the temperature changes from 75 Fahrenheit (F) to 80 F. She computes the change in the independent variable  D:

D_American = log(80) - log(75) = 0.065

and concludes that because temperature is changing 6.5%, then Y also changes 6.5% (since 0.065*b = 0.065*1 = 0.065).

But now imagine that a Canadian reader wants to do the same thing.  Canadians use the metric system, so they measure temperature in Celsius (C) rather than Fahrenheit. Because 80F = 26.67C and 75F = 23.89C, the Canadian computes

D_Canadian = log(26.67) - log(23.89) = 0.110

and concludes that Y increases 11%.

Finally, a physicist tries to compute the same change in Y, but physicists use Kelvin (K) and 80F = 299.82K and 75F = 297.04K, so she uses

D_physicist = log(299.82) - log(297.04) = 0.009

and concludes that Y increases by a measly 0.9%.

What happened? Usually we like the log transformation because it makes units irrelevant. But here changes in units dramatically changed the prediction of this model, causing it to range from 0.9% to 11%! 

The answer is that the log transformation is a bad idea when the value x = 0 is not anchored to a unique [physical] interpretation. When we change from Fahrenheit to Celsius to Kelvin, we change the meaning of "zero temperature" since 0 F does not equal 0 C which does not equal 0 K.  This causes a 1% change in F to not have the same meaning as a 1% change in C or K.   The log transformation is robust to a rescaling of units but not to a recentering of units.

For comparison, log(rainfall) is an okay measure to use as an independent variable, since zero rainfall is always the same, regardless of whether one uses inches, millimeters or Smoots to measure rainfall.
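A few lines of Python (my own illustration, not from any paper discussed here) reproduce all three calculations and confirm that log-differences survive a rescaling of units but not a recentering:

```python
import math

def log_diff(x1, x0):
    """Change implied by a log transform: log(x1) - log(x0)."""
    return math.log(x1) - math.log(x0)

# The same 75F -> 80F warming expressed in three temperature scales:
f0, f1 = 75.0, 80.0
c0, c1 = (f0 - 32) * 5 / 9, (f1 - 32) * 5 / 9  # Celsius
k0, k1 = c0 + 273.15, c1 + 273.15              # Kelvin

print(round(log_diff(f1, f0), 3))  # 0.065 (the American)
print(round(log_diff(c1, c0), 3))  # 0.110 (the Canadian)
print(round(log_diff(k1, k0), 3))  # 0.009 (the physicist)

# Rescaling units (which leaves zero anchored) changes nothing:
mm0, mm1 = 10.0, 15.0              # rainfall in millimeters
in0, in1 = mm0 / 25.4, mm1 / 25.4  # the same rainfall in inches
assert abs(log_diff(mm1, mm0) - log_diff(in1, in0)) < 1e-12
```

The rainfall assertion passes because log(a*x1) - log(a*x0) = log(x1) - log(x0) for any scale factor a; no such identity holds when a constant is added instead of multiplied.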

11.05.2012

Sexism in science persists, it is unacceptable and female mentors exhibit a substantially larger bias than male mentors

This is a recent, elegant and upsetting PNAS paper:

Science faculty’s subtle gender biases favor male students
Corinne A. Moss-Racusin, John F. Dovidio, Victoria L. Brescoll, Mark J. Graham, and Jo Handelsman
Abstract: Despite efforts to recruit and retain more women, a stark gender disparity persists within academic science. Abundant research has demonstrated gender bias in many demographic groups, but has yet to experimentally investigate whether science faculty exhibit a bias against female students that could contribute to the gender disparity in academic science. In a randomized double-blind study (n = 127), science faculty from research-intensive universities rated the application materials of a student—who was randomly assigned either a male or female name—for a laboratory manager position. Faculty participants rated the male applicant as significantly more competent and hireable than the (identical) female applicant. These participants also selected a higher starting salary and offered more career mentoring to the male applicant. The gender of the faculty participants did not affect responses, such that female and male faculty were equally likely to exhibit bias against the female student. Mediation analyses indicated that the female student was less likely to be hired because she was viewed as less competent. We also assessed faculty participants’ preexisting subtle bias against women using a standard instrument and found that preexisting subtle bias against women played a moderating role, such that subtle bias against women was associated with less support for the female student, but was unrelated to reactions to the male student. These results suggest that interventions addressing faculty gender bias might advance the goal of increasing the participation of women in science.
The authors construct a fake job application for a hypothetical undergraduate student who is applying to work as a scientific technician/lab manager in a laboratory (this is a common stepping-stone to entering a doctoral program and becoming a PhD researcher). The authors randomly assign a male or female name to the applicant and distribute the application to principal investigators (PhD scientists who run real labs), asking them to score the applicant on a variety of metrics such as "competence" and "hireability." The only difference between applications is the gender of the student. The results are unambiguous:

Click to enlarge

It is possible that one could construct an explanation for why researchers would mentor male students more, without invoking sexism -- e.g., perhaps if the scientist believes a female student is more likely to leave the field, they will feel like there is less personal reward for spending time mentoring female students. But (and this is where I congratulate the authors for a well-designed experiment) there is no way that differences in "competence" scores can be explained without sexism.

The authors then ask these principal investigators how much they would be willing to pay the applicant to work in their lab:

Click to enlarge

The authors write: 
Finally, using a previously validated scale, we also measured how much faculty participants liked the student (see SI Materials and Methods). In keeping with a large body of literature, faculty participants reported liking the female (mean = 4.35, SD = 0.93) more than the male student [(mean = 3.91, SD = 1.08), t(125) = −2.44, P < 0.05]. However, consistent with this previous literature, liking the female student more than the male student did not translate into positive perceptions of her composite competence or material outcomes in the form of a job offer, an equitable salary, or valuable career mentoring.
Every teacher, researcher and mentor in the sciences should read the paper (open access here) and do some soul-searching, asking themselves if they consciously or subconsciously discriminate against female students, employees in the lab or colleagues.  In addition, I also think we all have the responsibility to keep one another honest and to make one another aware of situations and decisions when we might mistakenly judge our students, employees or peers based on their sex and not on their scientific merit.

Overall, this paper is carefully designed and convincing with writing that is thoughtful and readable.  Just a few comments before proceeding:
  1. The title of the paper describes the results as a "subtle" bias. But as my fiance (a PhD) points out, if effects of this size were found in any other context, they would be held up as "big" effects.  I think the use of the word "subtle" is a bit confusing, since I think the authors are referring to the bias being subconscious, rather than referring to the magnitude of the bias (which is not subtle).
  2. Even if these biases are subconscious, they are still sexist. I understand why the authors don't use this language in a published paper, but in discussing these results in the context of our own conduct, it seems important to not shy away from what is really going on. Describing these results as a subconscious bias, rather than sexism, may make them seem more excusable.
  3. The central contribution of the paper is to simply point out what is going on.  Once we are aware of our own biases, especially if they are subconscious, we can make a conscious effort to correct them. But in addition to each of us reflecting on our own actions, there are some easy institutional mechanisms that can be developed to help us avoid these biases. For example, universities could make it easy for scientists to receive job applications through an electronic system that automatically double-blinds the applicant.  (Many peer-reviewed journals do this when sending papers out to referees.)  Unfortunately, it is harder to protect students and researchers in day-to-day activities, so if sexist treatment persists even after the hiring process, this will be harder to address. But much more certainly could be done. For example, it should be standard that an oversight committee anonymously surveys students and employees regularly to determine if there is statistical evidence of discrimination within departments or individual laboratories (which tend to behave a bit like small fiefdoms, with little to no oversight of the principal investigator's behavior towards students/employees).  The NSF and various funding agencies frequently award money to labs for "teaching and mentoring," so they should make these anonymous evaluations and their analysis (or something similar) a requirement for this funding.
The authors are careful to check whether sexism is a strictly male phenomenon (i.e. men faculty discriminating against female students). They do this by constructing this table:

Click to enlarge

The authors find that both male and female mentors exhibit sexism. But the authors do not push the data as far as they could as they make no statements about whether the bias of male mentors is larger or smaller than that of female mentors.  The authors write:
In support of hypothesis B, faculty gender did not affect bias (Table 1). Tests of simple effects (all d < 0.33) indicated that female faculty participants did not rate the female student as more competent [t(62) = 0.06, P = 0.95] or hireable [t(62) = 0.41, P = 0.69] than did male faculty. Female faculty also did not offer more mentoring [t(62) = 0.29, P = 0.77] or a higher salary [t(61) = 1.14, P = 0.26] to the female student than did their male colleagues. In addition, faculty participants’ scientific field, age, and tenure status had no effect (all P > 0.53). Thus, the bias appears pervasive among faculty and is not limited to a certain demographic subgroup.
And later in the discussion:
Our results revealed that both male and female faculty judged a female student to be less competent and less worthy of being hired than an identical male student, and also offered her a smaller starting salary and less career mentoring. Although the differences in ratings may be perceived as modest, the effect sizes were all moderate to large (d = 0.60–0.75). Thus, the current results suggest that subtle gender bias is important to address because it could translate into large real-world disadvantages in the judgment and treatment of female science students (39). Moreover, our mediation findings shed light on the processes responsible for this bias, suggesting that the female student was less likely to be hired than the male student because she was perceived as less competent. Additionally, moderation results indicated that faculty participants’ preexisting subtle bias against women undermined their perceptions and treatment of the female (but not the male) student, further suggesting that chronic subtle biases may harm women within academic science. Use of a randomized controlled design and established practices from audit study methodology support the ecological validity and educational implications of our findings (SI Materials and Methods). 
It is noteworthy that female faculty members were just as likely as their male colleagues to favor the male student. The fact that faculty members’ bias was independent of their gender, scientific discipline, age, and tenure status suggests that it is likely unintentional, generated from widespread cultural stereotypes rather than a conscious intention to harm women (17). Additionally, the fact that faculty participants reported liking the female more than the male student further underscores the point that our results likely do not reflect faculty members’ overt hostility toward women. Instead, despite expressing warmth toward emerging female scientists, faculty members of both genders appear to be affected by enduring cultural stereotypes about women’s lack of science competence that translate into biases in student evaluation and mentoring.
Now here is a bit that I am adding. Looking at Table 1, it seemed like the bias for female mentors was larger, but it was hard to tell based on the layout of the table (you have to hold the differences in your head since they aren't written down). This caught my attention because it's an issue that was raised by Anne-Marie Slaughter's recent Atlantic article, which I had discussed extensively with family and friends.

So I copied the data from Table 1 into Excel, reorganized it and explicitly compared the magnitude of the bias based on the gender of the faculty-mentor. In the first two panels, the column "difference" is the magnitude of the bias in favor of male students. In the bottom panel, I compare these biases and take their difference (a difference-in-differences) to see whether male or female mentors are more biased (a positive number means female faculty are more biased). The last column lists how much larger the bias is for female faculty relative to male faculty.



Both male and female faculty exhibit sexism. But across all four measures, female faculty exhibit a larger bias than male faculty (I don't have the raw data, so I can't know if the difference is statistically significant in this sample -- but the direction of the bias is clearly consistent across measures).  Without any evidence that the male student has more merit, the female faculty member is on average 15% more biased when it comes to evaluating whether the applicant is competent; and the female faculty member offers the male student an additional $920 in salary on top of the $3,400 extra that the male faculty offered him. Now, I am not trying to point the finger at women faculty to distract from the fact that male faculty are sexist. Discrimination by either group is unacceptable. I am simply trying to highlight one additional point that was skipped over in the original analysis and which Anne-Marie Slaughter argues is an important (but under-discussed) obstacle for professional women.
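A minimal sketch of the difference-in-differences computation: the group means below are placeholders chosen only to be consistent with the gaps quoted above ($3,400 for male faculty, $920 more for female faculty), not the actual values in Table 1.

```python
# Hypothetical mean salary offers, consistent with the gaps discussed in
# the post ($3,400 male-faculty bias; $920 more for female faculty).
salary = {
    ("male_faculty", "male_student"): 30_000,
    ("male_faculty", "female_student"): 26_600,    # gap: 3,400
    ("female_faculty", "male_student"): 30_000,
    ("female_faculty", "female_student"): 25_680,  # gap: 4,320
}

def bias(faculty):
    """Gap in favor of the male student for one faculty gender."""
    return salary[(faculty, "male_student")] - salary[(faculty, "female_student")]

# Difference-in-differences: how much larger is the female-faculty bias?
did = bias("female_faculty") - bias("male_faculty")
print(bias("male_faculty"), bias("female_faculty"), did)  # → 3400 4320 920
```

The same two-step subtraction applies to each of the four measures in the table; only the salary column is worked through here because it is the one with gaps stated explicitly in the text.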

The findings of this study are important, and they indicate that all of us, men and women alike, should take a cold hard look at our own decisions, behaviors and tendencies. Sexism of this magnitude and scale, among some of the most highly educated members of society is unacceptable. If you observe a colleague who treats their male and female students differently, or if you see that they run a lab full of happy young men and miserable young women, take them aside and ask them what is going on.  It is not easy to call a colleague out on these things, but they would probably rather hear it from you than an internal review board -- and more importantly, we owe it to our students and employees who work hard for us and look up to their mentors for education, guidance and leadership.

It is morally indefensible that sexism of this magnitude persists in our scientific communities and that the young women who are discriminated against suffer at the hands of their teachers and mentors.  Moreover, we all lose out every time that a talented young woman, who would have made scientific discoveries benefitting the world, leaves science because of discrimination.

11.04.2012

Median Voter Theorem: proof by data visualization

The "median voter theorem" is a game-theoretic solution to democratic politics. It basically says that in equilibrium for a two-party system, both candidates will have platforms that reflect the values of the median voter (in this election, Ohio). The second prediction is that both parties get almost exactly 50% of the vote, with very small changes in voting behavior near the median voter determining who wins.

Co.Design points us to an excellent data visualization by Felix Gonda of 156 years of voting behavior in the US (for president, the Senate and the House of Representatives). Run the little scroll-bar for years and you'll see the power of the median voter [theorem].


Check it out.

11.02.2012

Should people living in high-risk locations get relief when they are hit by natural disasters?


National survey evidence on disasters and relief: Risk beliefs, self-interest, and compassion
W. Kip Viscusi, Richard J. Zeckhauser
Abstract: A nationally representative sample of respondents estimated their fatality risks from four types of natural disasters, and indicated whether they favored governmental disaster relief. For all hazards, including auto accident risks, most respondents assessed their risks as being below average, with one-third assessing them as average. Individuals from high-risk states, or with experience with disasters, estimate risks higher, though by less than reasonable calculations require. Four-fifths of our respondents favor government relief for disaster victims, but only one-third do for victims in high-risk areas. Individuals who perceive themselves at higher risk are more supportive of government assistance.
Un-gated version here. The conclusion is succinct:
This paper explored two broad questions: 1. What factors drive individuals’ beliefs about their risks from various disasters, and how accurate are those beliefs? 2. What policies do individuals favor for disaster relief, and how do those policies relate to their assessed risks?
The answer to the first question is that risk beliefs have many rational components, but fall short of what one would expect with fully rational Bayesian assessments of risk. Personal experience and location-related risk influence risk assessments in the right direction, but insufficiently. These factors should have a very powerful influence, as our Lorenz Curve for fatality risks by state shows that natural disaster risks are highly concentrated, unlike auto fatality risks. 
For each of our four natural disasters, more than half of our respondents thought that their fatality risk from natural disasters was below average, and another roughly thirty-five percent thought their risk was average. Even people who had experienced disasters did not differ markedly from those who had not. 
A common explanation for apparent underestimation of risks, such as those from auto accidents, is that individuals suffer from an illusion of control. That explanation does not apply to natural disasters. A plausible hypothesis, worthy of further study, is that individuals actually understand the skewness in the distribution of risk. Though only half of the population can be below median risk, the vast majority are below average in risk. That is surely true for auto accidents as well, the favorite domain for “control” hypotheses. 
More than four-fifths of our respondents favored government assistance for victims of natural disasters, but this fraction fell to only one-third when the natural disasters happened to people living in high-risk areas. This decline suggests that respondents intuitively understand the concept of moral hazard. We label this phenomenon “efficient compassion.” That is, there is a strong element of compassion in their responses, but it is tempered when disaster victims have knowingly exposed themselves to high risk. Individuals who perceive themselves to be at greater personal risk are more supportive of government assistance, as are groups that tend to be liberal politically. Black respondents, who may have been particularly struck by the governmental failure to rescue the black population of New Orleans from Hurricane Katrina, are much more supportive of continued aid to that city. In short, policy preferences for disaster relief reflect both compassion for the unfortunate, and a dollop of self-interest.
More interesting excerpts:
Political orientation is a main driver of the support for relief, not just for the efficient compassion questions, but for all the relief options. In every instance, Republicans have a consistently lower probability of supporting the relief policies than do Democrats and independents. After controlling for political affiliation, blacks have higher probabilities for support; females also have higher probabilities, though not where moral hazard is a prime factor. Presumably, these groups are more liberal than their mere political affiliation indicates... 
The equations also included a measure of individual risk-taking behavior—the general health risk exposure of the respondent as reflected in whether they currently smoke cigarettes. Smokers face a considerable smoking-related mortality risk; their probability of premature death due to smoking is 1/6 to 1/3. The smoker variable consequently captures willingness to expose oneself to extremely large health risks. Beyond this, the smoker variable may also reflect a tolerance for others who take risks and are guilty of moral hazard, since smokers are frequent targets of criticism for their own risk-taking behavior. For the two relief questions involving individual choices to engage in risky behavior, smokers are more forgiving of decisions involving moral hazard and are more willing to support relief. Both effects are significant at the 10 percent level. 

Endogenous Literacy and One Laptop Per Child

There's a famous paper in development economics by Ted Miguel and Michael Kremer called "The Illusion of Sustainability" (ungated copy here). In it, Miguel and Kremer look at an initially incredibly cost-effective intervention (deworming, original paper here) and examine whether the one-time intervention could be made "sustainable," i.e., whether the local community would continue to support the deworming program. They test three separate techniques that have been widely advocated as means of making interventions self-sustaining (cost recovery, health education, and a social psych "commitment" technique) and find that all fail. The paper is an excellent piece of evidence whenever someone brings up pie-in-the-sky "this will pay for itself in the long run and we won't need to pay operating costs!" -type arguments.

Which is why I was so surprised to come across this article over at MIT's technology review:
Given Tablets but No Teachers, Ethiopian Children Teach Themselves
With 100 million first-grade-aged children worldwide having no access to schooling, the One Laptop Per Child organization is trying something new in two remote Ethiopian villages—simply dropping off tablet computers with preloaded programs and seeing what happens. 
[...] 
The experiment is being done in two isolated rural villages with about 20 first-grade-aged children each, about 50 miles from Addis Ababa. One village is called Wonchi, on the rim of a volcanic crater at 11,000 feet; the other is called Wolonchete, in the Great Rift Valley. Children there had never previously seen printed materials, road signs, or even packaging that had words on them, Negroponte said.  
Earlier this year, OLPC workers dropped off closed boxes containing the tablets, taped shut, with no instruction. “I thought the kids would play with the boxes. Within four minutes, one kid not only opened the box, found the on-off switch … powered it up. Within five days, they were using 47 apps per child, per day. Within two weeks, they were singing ABC songs in the village, and within five months, they had hacked Android,” Negroponte said. “Some idiot in our organization or in the Media Lab had disabled the camera, and they figured out the camera, and had hacked Android.”  
Elaborating later on Negroponte’s hacking comment, Ed McNierney, OLPC’s chief technology officer, said that the kids had gotten around OLPC’s effort to freeze desktop settings. “The kids had completely customized the desktop—so every kids’ tablet looked different. We had installed software to prevent them from doing that,” McNierney said. “And the fact they worked around it was clearly the kind of creativity, the kind of inquiry, the kind of discovery that we think is essential to learning.”  
“If they can learn to read, then they can read to learn,” Negroponte said.
I'm not quite sure how settled I am in my thinking about this (first thought: IRB), but as an idea it's fascinating, and it reminds me a bit of the old saw that "if marginal productivity is declining, returns to capital investments in developing countries should be stratospherically high." Perhaps that statement, which seems not to hold true for physical capital, is truer for human capital, and this may be the small investment that's needed. If your first thought is "that's not a small investment," may I direct your attention to the end of the article:
The idea of dropping off tablets outside of the context of schools is a new paradigm for OLPC. Through the late 2000s, the company was focused on delivering a custom miniaturized and ruggedized laptop, the XO, of which about 3 million have been distributed to kids in 40 countries. 
Giving computers directly to poor kids without any instruction is even more ambitious than OLPC’s earlier pushes. “What can we do for these 100 million kids around the world who don’t go to school?” McNierney said. “Can we give them tools to read and learn—without having to provide schools and teachers and textbooks and all that?”
Quality school systems are not cheap compared to tablet PCs.