Pale Blue Dot

My high-school buddy Brett pointed me towards this story, which I was ashamed that I didn't already know. I post it here so that other FE readers can be saved similar embarrassment.

In February of 1990, the unmanned Voyager 1 spacecraft had finished its primary mission of exploring the solar system (it had been launched in 1977) and was rocketing out of the range of our communication technology at 64,000 km/h (40,000 mph). At 6 billion kilometers away (well past Pluto), it was farther from Earth than anything else we humans had ever built. Realizing a unique opportunity, Carl Sagan requested that NASA turn Voyager's camera around and take a snapshot of planet Earth. This is the resulting photograph:

Photo of Planet Earth from 6 billion kilometers away. NASA, February 14, 1990.

Earth is the "Pale Blue Dot" on the right-hand side of the photo, sitting in the right-most "beam of sunlight" (which is actually an artifact of the camera). Click here if you need a hint identifying our planet.

This simple image immediately changes your view on the human condition. As Sagan writes:
From this distant vantage point, the Earth might not seem of any particular interest. But for us, it's different. Consider again that dot. That's here. That's home. That's us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every "superstar," every "supreme leader," every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam. 
The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that in glory and triumph they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner. How frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds. Our posturings, our imagined self-importance, the delusion that we have some privileged position in the universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity – in all this vastness – there is no hint that help will come from elsewhere to save us from ourselves. The Earth is the only world known, so far, to harbor life. There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment, the Earth is where we make our stand. It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another and to preserve and cherish the pale blue dot, the only home we've ever known. 
—Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space, 1997 reprint, pp. xv–xvi

For more (albeit closer) pivotal images of Earth from space, see this earlier post.


Unleash your inner cartographer

In my work, I make a lot of maps. But they're usually just a single image, and they're mediocre in terms of color choice and design. MapBox is a product from a DC startup that can help us data-jockeys build svelte and scalable maps that integrate data from lots of sources. From their about page:
MapBox is a platform for designing and publishing fast and beautiful maps. We provide MapBox Streets, a complete customizable world base map, develop the powerful open source map design studio TileMill, make it easy to integrate maps into applications and websites, and support all of these tools on top of scalable, high-performance hosting. We've made MapBox developer friendly with an open API.
The development team has worked on all sorts of projects, from tracking elections to helping document hurricane damage. Their blog is also way cool.

h/t Young


If scientists had a Book of Psalms, it would be this book

While wandering through the Princeton bookstore, I stumbled upon this gem. The Oxford Book of Modern Science Writing by Richard Dawkins will become a treasure of the scientific community. Dawkins gathers 83 choice writing excerpts from the "Greats" of scientific writing (e.g. Pinker, Diamond, Turing, Einstein, Sagan, Penrose, Greene, Hawking, Chandrasekhar, Sacks, Oppenheimer, Wilson, Carson, Dyson, Snow... the whole list is here). The excerpts are each short (a few pages) but masterfully chosen, and Dawkins provides a brief discussion of each writer and their style before presenting the text.  The selected excerpts discuss many of the central philosophical questions/insights of science, as well as many of its key contributions -- so readers are educated about actual science in addition to seeing how to write about it beautifully.

The book is thick, and I haven't finished it myself, but I can't recommend it enough for anyone who considers themselves a scientist.  If science were art, this text would be like a distillation of the best masterworks from the world's best museums into a potent liquor that makes you feel guilty when you read from it because it is so rich and amazing -- representing much of humanity's collective accomplishments -- and undeservingly, you're still just sitting on your couch.

If you're looking for a holiday gift for a scientist, I would recommend this. Or if you're a scientist who's annoyed that your loved ones didn't buy you this book for the holidays, you can read much of it for free on Google here.

An aside: If I ever get the chance, I hope to lead a seminar/clinic for PhD students on scientific communication. I think this book on writing would round out the curriculum alongside Tufte's book on data display and Baron's book on communicating verbally.


Is it true that "Everyone's a winner?" Dams in China and the challenge of balancing equity and efficiency during rapid industrialization

Jesse and I both come from the Sustainable Development PhD Program at Columbia, which has once again turned out a remarkable crop of job market candidates (see outcomes from 2012 and 2011). We both agreed that their job market papers were so innovative, diverse, rigorous and important that we wanted to feature them at FE. Their results are striking and deserve dissemination (we would probably post them anyway even if the authors weren't on the market), but they also clearly illustrate what the Columbia program is all about. (Apply to it here, hire one of these candidates here.) Here is the third and final post.

Large infrastructure investments are important for large-scale industrialization and economic development. Investments in power plants, roads, bridges and telecommunications, among others, provide important returns to society and are complements to many types of private investment. But during rapid industrialization, as leaders focus on growth, there is often concern that questions of equity are cast aside. In the case of large-scale infrastructure investments, there are frequently populations ("losers") that suffer private costs when certain types of infrastructure are built -- for example, people whose homes are in the path of a new highway or who are affected by pollution from a power plant.

In public policy analysis and economics, we try to think objectively about the overall benefits of large investments to an entire society, keeping in mind that there will usually be some "losers" from the new policy in addition to a (hopefully larger) group of "winners." In the cost-benefit analysis of large projects, we usually say that a project is worth doing if the gains to the winners outweigh the losses to the losers -- making the implicit assumption that the winners could somehow compensate the losers for their losses and still come out ahead. In cases where the winners compensate the losers enough that their losses are fully offset (i.e. they are no longer net losers), we say that the investment is "Pareto improving" because nobody is made worse off by the project.
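The two tests described above -- gains outweighing losses, and transfers fully offsetting every loser's loss -- can be sketched in a few lines. This is a toy illustration with made-up group names and magnitudes, not figures from any analysis:

```python
# Hypothetical gains and losses (arbitrary units) from a large project.
gains = {"dam_site": 30, "downstream": 5}   # the "winners"
losses = {"upstream": 16}                   # the "losers"

# Cost-benefit test: total gains exceed total losses, so the winners
# could in principle compensate the losers and still come out ahead.
net_social_benefit = sum(gains.values()) - sum(losses.values())
print(net_social_benefit)  # 19

# Pareto-improving test: transfers actually offset every loser's loss.
transfers = {"upstream": 16}
pareto_improving = all(transfers.get(group, 0) >= loss
                       for group, loss in losses.items())
print(pareto_improving)  # True
```

The first check alone only shows the project is worth doing in aggregate; the second is the stronger condition that no group ends up worse off.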

A Pareto improving project is probably a good thing to do, since nobody is hurt and probably many people benefit. However, in the case of large infrastructure investments, it is almost guaranteed that some groups will be worse off because of the project's effects, so making sure that everyone benefits from these projects requires that the winners actually compensate the losers. Occasionally this occurs privately, but that tends to be uncommon, so with large-scale projects we often think that a central government authority has a role to play in transferring some of the benefits from the project away from the winners and towards the losers.

But do these transfers actually occur? In a smoothly functioning government, one would hope so. But the governments of rapidly developing countries don't always have the most experienced regulators, and pathologies like corruption often cast doubt on whether large financial transfers will succeed. Empirically, we have little to no evidence as to whether governments in rapidly industrializing countries (1) accurately monitor the welfare lost by losers in the wake of large projects and (2) have the capacity necessary to compensate these losers for their losses. Thus, establishing whether governments can effectively compensate losers is important for understanding whether large-scale infrastructure investments can be made beneficial (or at least "not harmful") for all members of society.

Xiaojia Bao investigates this question for the famous and controversial example of dams in China. Over the last few decades, a large number of hydroelectric dams have been built throughout China. These dams are an important source of power for China's rapidly growing economy, but they can also lead to inundation upstream, a reduction in water supply downstream, and a slowed flow of water that allows pollutants to accumulate both upstream and downstream.

Bao asks whether the individuals who are adversely affected by new dams are compensated for their losses. To do this, she obtains data on dams along with municipal-level data on revenue and transfers from the central government. She uses geospatial analysis to determine which municipalities lie along rivers that are dammed, and which of those are upstream, downstream, or at the dam site. She then compares how the construction of a new dam alters the distribution of revenues and central-government transfers to municipalities along the dammed river, in comparison to adjacent municipalities that are not on the river.
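At its core, this comparison is a difference-in-differences. A minimal sketch with invented revenue numbers (not Bao's data) shows why comparing *changes*, rather than levels, nets out regional trends:

```python
# Revenue for a "treatment" municipality (on the dammed river) and an
# adjacent "control" municipality (off the river), before and after a
# dam is built. All numbers are invented for illustration.
revenue = {
    ("treatment", "before"): 100,
    ("treatment", "after"):   90,   # falls after the dam is built
    ("control",   "before"): 100,
    ("control",   "after"):  105,   # regional trend alone
}

# Difference-in-differences: the treated municipality's change minus the
# control's change, which removes any trend common to both neighbors.
did = ((revenue[("treatment", "after")] - revenue[("treatment", "before")])
       - (revenue[("control", "after")] - revenue[("control", "before")]))
print(did)  # -15: estimated effect of the dam on revenue
```

A naive before/after comparison of the treated municipality alone (-10) would confound the dam's effect with whatever was happening regionally (+5); the control municipality absorbs that trend.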

Bao finds that the Chinese government has been remarkably good at compensating the communities that suffer when dams are built. Municipalities upstream of a dam lose the most revenue, both while the dam is being built and after it becomes operational. But at the same time, the central government increases transfers to those municipalities sufficiently that they suffer no net loss in revenue. In contrast, populations just downstream look like they benefit slightly from the dam's operation, increasing their revenue -- and it appears that the central government is also good at reducing transfers to those municipalities so that these gains are effectively "taxed away." The only clear net winners are the municipalities that host the dam itself, as their revenue rises and the central government provides them with additional transfers during the dam's construction.

These findings are important because we often worry that large-scale investment projects may exacerbate existing patterns of inequality, as populations that are already marginalized are saddled with new burdens for the sake of the "greater good." However, in cases where governments can effectively distribute the benefits from large projects so that no group is made worse off, then we should not let this fear prevent us from making the socially-beneficial investments in infrastructure that are essential to long run economic development.

The paper:
Dams and Intergovernmental Transfer: Are Dam Projects Pareto Improving in China?
Xiaojia Bao  
Abstract: Large-scale dams are controversial public infrastructure projects due to the unevenly distributed benefits and losses to local regions. The central government can make redistributive fiscal transfers to attenuate the impacts and reduce the inequality among local governments, but whether large-scale dam projects are Pareto improving is still a question. Using the geographic variation of dam impacts based on distances to the river and distances to dams, this paper adopts a difference-in-difference approach to estimate dam impacts at county level in China from 1996 to 2010. I find that a large-scale dam reduces local revenue in upstream counties significantly by 16%, while increasing local revenue by similar magnitude in dam-site counties. The negative revenue impacts in upstream counties are mitigated by intergovernmental transfers from the central government, with an increase rate around 13% during the dam construction and operation periods. No significant revenue and transfer impacts are found in downstream counties, except counties far downstream. These results suggest that dam-site counties benefit from dam projects the most, and intergovernmental transfers help to balance the negative impacts of dams in upstream counties correspondingly, making large-scale dam projects close to Pareto improving outcomes in China.
In figures...

Bao obtains the location, height, and construction start/stop dates for all dams built in China before 2010.

click to enlarge

For every dam, Bao follows the corresponding river and calculates which municipalities are "upstream" and which are "downstream." She then finds comparison "control" municipalities that are adjacent to these "treatment" municipalities (to account for regional trends). Here is an example for a single dam:

Click to enlarge

Bao estimates the average effect of dam construction (top) and operation (bottom) on municipal revenues as a function of distance upstream (left) or downstream (right). Locations just upstream lose revenue, perhaps from losing land (inundation) or from pollution. Locations at the dam gain revenue, perhaps because of spillovers from dam-related activity (e.g. consumer spending). During operation, downstream locations benefit slightly, perhaps from flood control.

click to enlarge

Government transfers during construction/operation upstream/downstream. Upstream locations receive large positive transfers. Municipalities at the dam receive transfers during construction. Downstream locations lose some transfers (taxed away).

click to enlarge

Transfers (y-axis) vs. revenue (x-axis) for locations upstream/downstream and at the dam site, during dam construction. Locations are net "winners" if they are northeast of the grey triangle. Upstream municipalities are more than compensated for their lost revenue through transfers.   Municipalities at the dam site benefit through revenue increases and transfers.
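The "net winner" criterion in this figure reduces to a sign check on the combined change in revenue and transfers. A sketch with invented magnitudes, loosely patterned on the qualitative results rather than Bao's actual estimates:

```python
# A municipality is a net "winner" if its change in revenue plus its
# change in transfers is positive (i.e. it sits "northeast" of the
# break-even line in the scatter). Numbers below are illustrative only.
def net_winner(d_revenue, d_transfer):
    """True if the combined change in revenue and transfers is positive."""
    return d_revenue + d_transfer > 0

print(net_winner(-16, 18))  # upstream-style: loss more than offset -> True
print(net_winner(16, 5))    # dam-site-style: gains on both margins -> True
print(net_winner(4, -4))    # downstream-style: gains taxed away -> False
```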

click to enlarge

Same, but for dam operation (after construction is completed). Upstream locations are compensated for losses. Benefits to downstream locations are taxed away. Dam-site locations are net "winners".

Click to enlarge


Urban bus pollution and infant health: how New York City's smog reduction program generates millions of dollars in benefits

Jesse and I both come from the Sustainable Development PhD Program at Columbia, which has once again turned out a remarkable crop of job market candidates (see outcomes from 2012 and 2011). We both agreed that their job market papers were so innovative, diverse, rigorous and important that we wanted to feature them at FE. Their results are striking and deserve dissemination (we would probably post them anyway even if the authors weren't on the market), but they also clearly illustrate what the Columbia program is all about. (Apply to it here, hire one of these candidates here.) Here is the second post.

Around the world, diesel-powered vehicles play a major role in moving people and goods. In particular, buses are heavily utilized in densely populated cities where large numbers of people are exposed to their exhaust. If bus exhaust has an impact on human health, then urban policy-makers would want to know, since it affects whether it's worth investing in cleaner bus technologies. Upgrading the quality of public transport systems is usually expensive, but upgrading could have potentially large benefits since so many people live in dense urban centers and are exposed to the pollution. Deciding whether or not to invest in cleaner bus technologies is an important policy decision made by city officials, since buses aren't replaced very often and poor choices can affect city infrastructure for decades -- so it's important that policy-makers know what the trade-offs are when they make these decisions.

Unfortunately, to date, it has been extremely difficult to know whether bus pollution has any effect on human health, because cities are complex and bustling environments where people are constantly exposed to all sorts of rapidly changing environmental conditions. As one might imagine, looking at a city of ten million people, each of whom is engaged daily in dozens of interacting activities, and trying to disentangle the web of factors that affect human health in order to isolate the effect of bus pollution is a daunting task. To tackle this problem, we would need to assemble a lot of data and conduct a careful analysis. This is exactly what Nicole Ngo has done.

Between 1990 and 2010, New York City made major investments that transformed the city's bus fleet, reducing its emissions dramatically. To study the impact of this policy on human health, Ngo assembled a massive new data set that details exactly which bus drove on which route at what time every single day. Because the city's transition from dirty buses to clean buses occurred gradually over time, and because the dispatcher at the bus depot randomly assigns buses to different routes at different times, the people who live along bus routes were sometimes exposed to exhaust from dirtier buses and sometimes to exhaust from cleaner buses. By comparing health outcomes in households randomly exposed to dirtier bus pollution with comparable households randomly exposed to cleaner bus pollution, Ngo can isolate the effect of bus pollution on health.
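Under random assignment, the core of such a comparison is just a difference in group means. This is a deliberately simplified sketch with invented birth weights (Ngo's actual analysis is a regression with many controls):

```python
from statistics import mean

# Hypothetical birth weights (grams) for households whose routes happened
# to be served by dirtier vs. cleaner buses during pregnancy. All numbers
# are invented for illustration.
dirty_bus_births = [3150, 3200, 3100, 3250]
clean_bus_births = [3220, 3260, 3180, 3300]

# Because buses are assigned to routes roughly at random, a simple
# difference in group means estimates the causal effect of cleaner
# buses on birth weight, free of confounding by household traits.
effect = mean(clean_bus_births) - mean(dirty_bus_births)
print(effect)  # 65 grams in this toy example
```

Randomization is what licenses this simple comparison: households did not choose their bus vintage, so the two groups are comparable on average.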

In this paper, Ngo focuses on infant health (although I expect she will use this unique data set to study many more outcomes in the future) and measures the effect of a mother's exposure to bus pollution during pregnancy on her child's health at birth. This is a hard problem, since it's impossible to know exactly all the different things that a mother does while she's pregnant, and because Ngo has to use pollution data collected from air-quality monitors to model how pollution spreads from bus routes to nearby residences. Despite these challenges, Ngo is able to detect the effect of in utero exposure to bus pollution on an infant's health at birth. Fetuses exposed to higher levels of bus-generated nitrogen oxides (NOx) during the second and third trimesters have lower birth weight on average, and fetuses exposed to more bus-generated particulate matter (PM) during those trimesters have a lower Apgar 5 score (a doctor's subjective evaluation of newborn health).

The effects that Ngo measures are relatively small for any individual child (so if you are pregnant and living near a bus route, you shouldn't panic). But the aggregate effect of New York City's investment in clean buses is large, since many pregnant mothers live near bus routes and were exposed to less dangerous emissions because of these policies. Since it's easiest to think about city-wide impacts using monetized measures, and because previous studies have demonstrated that higher birth weight raises an infant's future income, Ngo aggregates these small impacts across many babies and estimates that the city's effort to upgrade buses increased the total future earnings of these children by $66 million. Considering that the city upgraded roughly 4,500 buses, this implies that each upgraded bus generated about $14,600 in value just through its influence on infant health and future earnings. Importantly, however, Ngo notes:
This [benefit] is likely a lower bound since I do not consider increased hospitalizations costs from lower birth weights as discussed in Almond et al. (2005), nor could I find short-run or long-run costs associated with lower Apgar 5 scores.
and I expect that Ngo will uncover additional health benefits of New York City's bus program, which will likely increase estimates for the program's total benefits. Furthermore, I suspect that these estimates for the value of pollution control can be extrapolated to diesel trucks, although Ngo is appropriately cautious about doing so in her formal analysis.
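As simple arithmetic, the per-bus figure follows from dividing the aggregate earnings estimate in Ngo's abstract (~$65.7 million) by the approximate number of upgraded buses (~4,500). This division is an illustrative back-of-envelope check, not part of Ngo's analysis:

```python
# Back-of-envelope per-bus benefit from the figures quoted in the post.
total_benefit = 65_700_000   # dollars of increased future earnings (abstract)
buses_upgraded = 4_500       # approximate number of upgraded buses

per_bus = total_benefit / buses_upgraded
print(round(per_bus))  # 14600
```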

These results are important for urban planners and policy-makers in cities around the world who must decide whether or not it is worth it to invest in cleaner public transit systems.  In addition, they are an excellent example of how great data and careful analysis can help us understand important human-environment relationships in complex urban systems.

The paper:
Transit buses and fetal health: An evaluation of bus pollution policies in New York City 
Nicole Ngo
Abstract The U.S. Environmental Protection Agency (EPA) reduced emission standards for transit buses by 98% between 1988 and 2010. I exploit the variation caused by these policy changes to evaluate the impacts of transit bus pollution policies on fetal health in New York City (NYC) by using bus vintage as a proxy for street-level bus emissions. I construct a novel panel data set for the NYC Transit bus fleet to assign maternal exposure to bus pollution at the census block level. Results show a 10% reduction in emission standards for particulate matter (PM) and nitrogen oxides (NOx) during pregnancy increased infant Apgar 5 scores by 0.003 points and birth weight by 6.6 grams. While the impacts on fetal health are modest, the sensitivity of later-life outcomes to prenatal conditions suggests improved emission standards between 1990 and 2009 have increased total earnings for the 2009 birth cohort who live near bus routes in NYC by at least $65.7 million.
In figures...

Bus routes in New York City, which Ngo links to residential exposure through geospatial analysis:

(click to enlarge)

Buses are upgraded throughout the two decades, with several large and abrupt changes in the fleet's composition:

(click to enlarge)

When dirtier buses are randomly assigned to travel a route, Ngo can detect this using air-monitoring stations near that route:

(click to enlarge)

Using her mathematical model of bus pollution (and its spatial diffusion), Ngo computes how New York City's investment in buses led to a dramatic reduction in exposure to bus-generated pollutants:

(click to enlarge)

Exposure to bus-generated NOx during the second and third trimesters lowers birthweight, and exposure to bus-generated PM lowers Apgar5 scores:

(click to enlarge)

Probably the most important class I took at MIT

"Solving Complex Problems" (aka "12.000" or "Mission") is an innovative class for MIT freshmen that was just awarded a Science Prize for Inquiry-Based Instruction.

The course is a component of the larger Terrascope program (I am a proud member of its first cohort), designed to teach leadership and teamwork skills to students as they work on unbearably large and complex problems related to global environmental management (usually) or, more broadly, couplings between human and environmental systems. The class is expertly designed and run, played a major role in my own personal development, and is the one class I would unconditionally recommend to any incoming MIT freshman. Furthermore, if any faculty FE reader is trying to build an undergraduate program in "sustainable development," I would strongly recommend trying to develop a similar course.

The course is described by Kip Hodges (my first research supervisor, now at ASU) in a Science article this week (read it here for free):
Students are presented in the first class with a challenge that can be stated simply, but that is deceptively complex and has no straightforward answer. Over the course of the semester, it is their job collectively to “imagineer” a proposed solution, to articulate their solution, and to explain how they arrived at it.

(To be clear about how the class works, on the first day of my first semester at MIT, Kip walked into the room and put up on the board 
"Develop a way to characterize and monitor the well-being of one of the last true frontiers on Earth – the Amazon Basin rainforest – and devise a set of practical strategies to ensure its preservation."

and said "go". I'm not kidding.)

The instructor’s role in this class is primarily to create an environment conducive to self-directed learning. There are no lectures, although the students are exposed in a casual way to a series of case studies that are germane to their problem.... 
In the early years of offering this subject, we passed on to the students a list of people who had been recruited by the instructional staff and had volunteered to participate in such discussions. However, we soon found that such recruiting efforts were unnecessary; many at all levels of the academic community are open to such informal interactions when they are precipitated by students asking questions that begin with a phrase like: “What is your take on ….” These casual conversations are especially valuable because they impart an appreciation for practical integration of acquired knowledge
The course is currently run by Sam Bowring and Ari Epstein, and its material is up on MIT's OpenCourseWare. This year's cohort recently gave their final presentation (watch it here). My cohort's website is still up here (search long enough and you can even find my freshman photo).

h/t Kip


Come to our AGU session: Quantitative Modeling of Social and Environmental Systems

Jesse and I are convening a session at the American Geophysical Union with our colleagues Ram Fishman and Gordon McCord this coming Monday. If you're in the Bay Area, come check it out! We have a diverse and exciting lineup.

U14A.  Quantitative Modeling of Social and Environmental Systems
4:00 PM - 6:00 PM Monday; 102 (Moscone South)

4:00 PM - 4:30 PM
U14A-01. Climate Change: Modeling the Human Response
Michael Oppenheimer; Solomon M. Hsiang; Robert E. Kopp
ABSTRACT: Integrated assessment models have historically relied on forward modeling including, where possible, process-based representations to project climate change impacts. Some recent impact studies incorporate the effects of human responses to initial physical impacts, such as adaptation in agricultural systems, migration in response to drought, and climate-related changes in worker productivity. Sometimes the human response ameliorates the initial physical impacts, sometimes it aggravates it, and sometimes it displaces it onto others. In these arenas, understanding of underlying socioeconomic mechanisms is extremely limited. Consequently, for some sectors where sufficient data has accumulated, empirically based statistical models of human responses to past climate variability and change have been used to infer response sensitivities which may apply under certain conditions to future impacts, allowing a broad extension of integrated assessment into the realm of human adaptation. We discuss the insights gained from and limitations of such modeling for benefit-cost analysis of climate change. 
4:30 PM - 5:00 PM
U14A-02. Dams and Intergovernmental Transfers
Xiaojia Bao
ABSTRACT: Gainers and Losers are always associated with large scale hydrological infrastructure construction, such as dams, canals and water treatment facilities. Since most of these projects are public services and public goods, Some of these uneven impacts cannot fully be solved by markets. This paper tried to explore whether the governments are paying any effort to balance the uneven distributional impacts caused by dam construction or not. It showed that dam construction brought an average 2% decrease in per capita tax revenue in the upstream counties, a 30% increase in the dam-location counties and an insignificant increase in downstream counties. Similar distributional impacts were observed for other outcome variables. like rural income and agricultural crop yields, though the impacts differ across different crops. The paper also found some balancing efforts from inter-governmental transfers to reduce the unevenly distributed impacts caused by dam construction. However, overall the inter-governmental fiscal transfer efforts were not large enough to fully correct those uneven distributions, reflected from a 2% decrease of per capita GDP in upstream counties and increase of per capita GDP in local and downstream counties. This paper may shed some lights on the governmental considerations in the decision making process for large hydrological infrastructures. 
5:00 PM - 5:30 PM
U14A-03. Physically-based Assessment of Tropical Cyclone Damage and Economic Losses
Ning Lin
ABSTRACT: Estimating damage and economic losses caused by tropical cyclones (TC) is a topic of considerable research interest in many scientific fields, including meteorology, structural and coastal engineering, and actuarial sciences. One approach is based on the empirical relationship between TC characteristics and loss data. Another is to model the physical mechanism of TC-induced damage. In this talk we discuss about the physically-based approach to predict TC damage and losses due to extreme wind and storm surge.  
We first present an integrated vulnerability model, which, for the first time, explicitly models the essential mechanisms causing wind damage to residential areas during storm passage, including windborne-debris impact and the pressure-debris interaction that may lead, in a chain reaction, to structural failures (Lin and Vanmarcke 2010; Lin et al. 2010a). This model can be used to predict the economic losses in a residential neighborhood (with hundreds of buildings) during a specific TC (Yau et al. 2011) or applied jointly with a TC risk model (e.g., Emanuel et al 2008) to estimate the expected losses over long time periods. Then we present a TC storm surge risk model that has been applied to New York City (Lin et al. 2010b; Lin et al. 2012; Aerts et al. 2012), Miami-Dade County, Florida (Klima et al. 2011), Galveston, Texas (Lickley, 2012), and other coastal areas around the world (e.g., Tampa, Florida; Persian Gulf; Darwin, Australia; Shanghai, China). 
These physically-based models are applicable to various coastal areas and have the capability to account for the change of the climate and coastal exposure over time. We also point out that, although made computationally efficient for risk assessment, these models are not suitable for regional or global analysis, which has been a focus of the empirically-based economic analysis (e.g., Hsiang and Narita 2012). A future research direction is to simplify the physically-based models, possibly through parameterization, and make connections to the global loss data and economic analysis. 
5:30 PM - 6:00 PM
U14A-04. Modeling agricultural commodity prices and volatility in response to anticipated climate change
David B. Lobell; Nam Anh Tran; Jarrod Welch; Michael Roberts; Wolfram Schlenker
ABSTRACT: Food prices have shown a positive trend in the past decade, with episodes of rapid increases in 2008 and 2011. These increases pose a threat to food security in many regions of the world, where the poor are generally net consumers of food, and are also thought to increase risks of social and political unrest. The role of global warming in these price reversals has been debated, but little quantitative work has been done. A particular challenge in modeling these effects is that they require understanding links between climate and food supply, as well as between food supply and prices. Here we combine the anticipated effects of climate change on yield levels and volatility with an empirical competitive storage model to examine how expected climate change might affect prices and social welfare in the international food commodity market. We show that price level and volatility do increase over time in response to decreasing yields and increasing yield variability. Land supply and storage demand both increase, but production and consumption continue to fall, leading to a decrease in consumer surplus, and a corresponding though smaller increase in producer surplus.

Help map agriculture in Africa

Lyndon Estes writes
I have been working with Kelly Caylor developing a crowdsourced crop mapping project that is ready for some user testing.  I have asked my network of friends to test it a bit, in the hopes that we can see how the system performs, and get some initial data to show at my talk on this project next week at AGU.  
His instructions:
Hello Friends, I am kindly requesting your help with a research project. Our goal is to use crowdsourcing + google satellite imagery to map crop fields in Africa. We have just developed our prototype, which connects users on Amazon's Mechanical Turk Service to our field mapping interface.  
So, if any of you had a few minutes to spare over the next few days and an Amazon account (it's very easy to register as a Mechanical Turk user if you have an account, and not much harder to get an Amazon account if you don't have one), I would be very grateful if you could join up and map a few fields. The link below is to our website, which describes the registration and mapping process further.  
This project will be a for-pay endeavor within a few weeks, but for this stage when we are still working out bugs, we are going through Mechanical Turk's testing site (workersandbox.mturk.com), which pays fake money. Once you have registered, please go to sandbox to look for our HITs (Human Intelligence Tasks).  
Your help and feedback will be greatly appreciated, both for development purposes and for providing some data that I can include in my presentation on this project (Tuesday in San Fran).  
Thanks, Lyndon 


Were the cost estimates for Waxman-Markey overstated by 200-300%?

Jesse and I both come from the Sustainable Development PhD Program at Columbia, which has once again turned out a remarkable crop of job market candidates (see outcomes from 2012 and 2011). We both agreed that their job market papers were so innovative, diverse, rigorous and important that we wanted to feature them at FE.  Their results are striking and deserve dissemination (we would probably post them anyway even if the authors weren't on the market), but they also clearly illustrate what the Columbia program is all about. (Apply to it here, hire one of these candidates here.) This is the first post.

Good policy requires good cost-benefit analysis. But when we are developing innovative policies, like those used to curb greenhouse gas emissions, it's notoriously difficult to estimate both costs and benefits since no analogous policies have ever been implemented before.  The uncertainty associated with costs and benefits tends to make many forms of environmental policy difficult to implement in part because the imagined costs (when policy-makers are considering a policy) tend to exceed actual costs (what we observe after policies are actually implemented). Kyle Meng develops an innovative approach, linking Intrade predictions about the success of Waxman-Markey with stock-market returns and abrupt political events, to measure the cost of the bill to firms as predicted by the market. This is very different from standard technocratic approaches used by the government to assess the cost of future policies, which rely on parameterized models of technology and econometric models of behavior ("structural models").

By relying on the market, Meng infers what players in affected industries actually expect to happen in their own industry. The result is a bit surprising: Meng estimates that standard cost estimates for WM (produced before it failed to pass) are 200-300% larger than what players in the industry actually expected it to cost them.  But this still didn't stop industry players from fighting the bill -- one of the ways that Meng validates his approach is to use lobbying records to show that firms which expect to suffer more from the bill (as recovered using his approach) spend more money to fight it.

It's tough to tell whether Meng's approach or the structural models are more accurate predictors of firm-level costs since WM was never brought into law, so the outcomes will remain forever unobserved. But he does show that for several similar laws (e.g. the Montreal Protocol), the structural predictions tended to overestimate the actual costs of implementation (observed after the laws took effect) by roughly a factor of two. This doesn't prove that Meng's approach is more accurate, but it shows that his estimate for the bias of the structural approach (with regard to WM) is consistent with the historical biases of these models.
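The event-study logic can be sketched in a few lines. The following is a simulated illustration of the idea only -- the variable names and data are hypothetical, and it is not Meng's actual specification. The intuition: if passage of the bill would cost a firm some fraction of its value proportional to its carbon intensity, then on an event day the firm's abnormal return should move with the change in the Intrade probability scaled by its intensity, and that proportionality can be recovered by regression.

```python
import numpy as np

# Hedged sketch of the event-study idea (simulated data and hypothetical
# parameter names -- NOT Meng's actual specification or data).
rng = np.random.default_rng(0)
n_firms, n_events = 200, 10

carbon_intensity = rng.uniform(0, 1, n_firms)   # firm-level CO2 intensity
d_prob = rng.normal(0, 0.05, n_events)          # event-day change in Intrade P(bill passes)

# Suppose passage costs a firm (beta * intensity) of its value. Then its
# event-day abnormal return is -beta * intensity * d_prob plus noise.
beta_true = 0.02
returns = (-beta_true * carbon_intensity[:, None] * d_prob[None, :]
           + rng.normal(0, 0.002, (n_firms, n_events)))

# Regress abnormal returns on (intensity x d_prob) to recover beta:
# the sign-flipped slope is the implied fractional cost per unit intensity.
x = (carbon_intensity[:, None] * d_prob[None, :]).ravel()
y = returns.ravel()
beta_hat = -(x @ y) / (x @ x)   # OLS slope through the origin
```

In this sketch `beta_hat` lands close to `beta_true`; scaled by a firm's intensity, it approximates that firm's expected fractional value loss if the passage probability moved from 0 to 1. Meng's headline figure (an average 2.0% loss in market value) comes from a much richer specification with controls.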

The paper:

The Cost of Potential Cap-and-Trade Policy: An Event Study using Prediction Markets and Lobbying Records
Kyle Meng
Abstract: Efforts to understand the cost of climate policy have been constrained by the limited number of policies available for evaluation. This paper develops an empirical method for forecasting the expected cost to firms of a proposed climate policy that was never realized. I combine prediction market prices, which reflect market beliefs over regulatory prospects, with stock returns in order to estimate the expected cost to firms of the Waxman-Markey cap-and-trade bill, had it been implemented. I find that Waxman-Markey would have reduced the market value of a listed firm by an average of 2.0%, resulting in a total cost of $165 billion for all listed firms. The strongest effects are found in sectors with greater carbon and energy intensity, import penetration, and exposure to U.S. product markets, and in sectors granted free allowances. Because the values of unlisted firms are not observed, I use firm-level lobbying expenditures within a partial identification framework to obtain bounds for the costs borne by unlisted firms. This procedure recovers a total cost to all firms between $110 and $260 billion. I conclude by comparing estimates from this method with Waxman-Markey forecasts by prevailing computable general equilibrium models of climate policy.
In figures...

Abrupt political events that affect the expected success of WM are quantified by looking at expectations in Intrade markets:


When WM appears more likely, the stock prices of CO2 intensive firms fall on average:


Firms that are more CO2 intensive are affected more strongly:


Firms whose stock prices are more responsive to WM lobby harder against it:


How these cost estimates compare with structural cost estimates, and similar statistics for historical regulations that actually passed into law.


Take home summary: Cap and trade in the USA probably would have been cheaper to implement than we thought, according to the firms it was going to regulate. 


Climate and Conflict in East Africa (carefully interpreting statistical results revisited)

Andrew Revkin asked for thoughts on a recent PNAS paper on conflict. He posted a watercolor regression plot that Marshall Burke and I made, but I guess he (understandably) didn't have space for my lengthy statistical commentary.  Read my appendix to Revkin's post on the G-FEED blog here.

figure explained here


An American, a Canadian and a physicist walk into a bar with a regression... why not to use log(temperature)

Many of us applied statisticians like to transform our data (prior to analysis) by taking the natural logarithm of variable values.  This transformation is clever because it turns regression coefficients into elasticities, which are especially nice because they are unitless. In the regression

log(y) = b * log(x)

b represents the percentage change in y that is associated with a 1% change in x. But this transformation is not always a good idea.  

I frequently see papers that examine the effect of temperature (or control for it because they care about some other factor) and use log(temperature) as an independent variable.  This is a bad idea because a 1% change in temperature is an ambiguous value. 

Imagine an author estimates

log(Y) = b*log(temperature)

and obtains the estimate b = 1. The author reports that a 1% change in temperature leads to a 1% change in Y. I have seen this done many times.

Now an American reader wants to apply this estimate to some hypothetical scenario where the temperature changes from 75 Fahrenheit (F) to 80 F. She computes the change in the independent variable  D:

D_American = log(80) - log(75) = 0.065

and concludes that because temperature is changing 6.5%, then Y also changes 6.5% (since 0.065*b = 0.065*1 = 0.065).

But now imagine that a Canadian reader wants to do the same thing.  Canadians use the metric system, so they measure temperature in Celsius (C) rather than Fahrenheit. Because 80F = 26.67C and 75F = 23.89C, the Canadian computes

D_Canadian = log(26.67) - log(23.89) = 0.110

and concludes that Y increases 11%.

Finally, a physicist tries to compute the same change in Y, but physicists use Kelvin (K) and 80F = 299.82K and 75F = 297.04K, so she uses

D_physicist = log(299.82) - log(297.04) = 0.009

and concludes that Y increases by a measly 0.9%.

What happened? Usually we like the log transformation because it makes units irrelevant. But here changes in units dramatically changed the prediction of this model, causing it to range from 0.9% to 11%!

The answer is that the log transformation is a bad idea when the value x = 0 is not anchored to a unique [physical] interpretation. When we change from Fahrenheit to Celsius to Kelvin, we change the meaning of "zero temperature" since 0 F does not equal 0 C which does not equal 0 K.  This causes a 1% change in F to not have the same meaning as a 1% change in C or K.   The log transformation is robust to a rescaling of units but not to a recentering of units.

For comparison, log(rainfall) is an okay measure to use as an independent variable, since zero rainfall is always the same, regardless of whether one uses inches, millimeters or Smoots to measure rainfall.
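The three readers' computations above are easy to reproduce. A short script (plain Python, using only the numbers in the post) makes the rescaling-vs-recentering point explicit:

```python
import math

# The three readers' computations from the post: the same 75F -> 80F warming
# expressed as a log-difference in three temperature scales.
def log_change(t0, t1):
    return math.log(t1) - math.log(t0)

f0, f1 = 75.0, 80.0
c0, c1 = (f0 - 32) * 5 / 9, (f1 - 32) * 5 / 9   # Celsius
k0, k1 = c0 + 273.15, c1 + 273.15               # Kelvin

d_fahrenheit = log_change(f0, f1)   # ~0.065
d_celsius = log_change(c0, c1)      # ~0.110
d_kelvin = log_change(k0, k1)       # ~0.009

# A pure rescaling (multiplying by a constant, so zero stays zero) leaves
# the log-difference unchanged -- unlike the recentering above.
assert abs(log_change(10.0, 20.0) - log_change(10.0 * 25.4, 20.0 * 25.4)) < 1e-9
```

The final assertion is the rainfall case: converting inches to millimeters multiplies every value by 25.4 but keeps zero at zero, so the implied "percent change" is identical in both units.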


Sexism in science persists, it is unacceptable and female mentors exhibit a substantially larger bias than male mentors

This is a recent, elegant and upsetting PNAS paper:

Science faculty’s subtle gender biases favor male students
Corinne A. Moss-Racusin, John F. Dovidio, Victoria L. Brescoll, Mark J. Graham, and Jo Handelsman
Abstract: Despite efforts to recruit and retain more women, a stark gender disparity persists within academic science. Abundant research has demonstrated gender bias in many demographic groups, but has yet to experimentally investigate whether science faculty exhibit a bias against female students that could contribute to the gender disparity in academic science. In a randomized double-blind study (n = 127), science faculty from research-intensive universities rated the application materials of a student—who was randomly assigned either a male or female name—for a laboratory manager position. Faculty participants rated the male applicant as significantly more competent and hireable than the (identical) female applicant. These participants also selected a higher starting salary and offered more career mentoring to the male applicant. The gender of the faculty participants did not affect responses, such that female and male faculty were equally likely to exhibit bias against the female student. Mediation analyses indicated that the female student was less likely to be hired because she was viewed as less competent. We also assessed faculty participants’ preexisting subtle bias against women using a standard instrument and found that preexisting subtle bias against women played a moderating role, such that subtle bias against women was associated with less support for the female student, but was unrelated to reactions to the male student. These results suggest that interventions addressing faculty gender bias might advance the goal of increasing the participation of women in science.
The authors construct a fake job application for a hypothetical undergraduate student who is applying to work as a scientific technician/lab manager in a laboratory (this is a common stepping-stone to entering a doctoral program and becoming a PhD researcher). The authors randomly assign a male or female name to the applicant and distribute the application to principal investigators (PhD scientists who run real labs), asking them to score the applicant on a variety of metrics such as "competence" and "hireability." The only difference between applications is the gender of the student. The results are unambiguous:


It is possible that one could construct an explanation for why researchers would mentor male students more, without invoking sexism -- e.g. perhaps if the scientist believes a female student is more likely to leave the field, they will feel like there is less personal reward for spending time mentoring female students. But (and this is where I congratulate the authors for a well-designed experiment) there is no way that differences in "competence" scores can be explained without sexism.

The authors then ask these principal investigators how much they would be willing to pay the applicant to work in their lab:


The authors write: 
Finally, using a previously validated scale, we also measured how much faculty participants liked the student (see SI Materials and Methods). In keeping with a large body of literature, faculty participants reported liking the female (mean = 4.35, SD = 0.93) more than the male student [(mean = 3.91, SD = 1.08), t(125) = −2.44, P < 0.05]. However, consistent with this previous literature, liking the female student more than the male student did not translate into positive perceptions of her composite competence or material outcomes in the form of a job offer, an equitable salary, or valuable career mentoring.
Every teacher, researcher and mentor in the sciences should read the paper (open access here) and do some soul-searching, asking themselves if they consciously or subconsciously discriminate against female students, employees in the lab or colleagues.  In addition, I also think we all have the responsibility to keep one another honest and to make one another aware of situations and decisions when we might mistakenly judge our students, employees or peers based on their sex and not on their scientific merit.

Overall, this paper is carefully designed and convincing with writing that is thoughtful and readable.  Just a few comments before proceeding:
  1. The title of the paper describes the results as a "subtle" bias. But as my fiance (a PhD) points out, if effects of this size were found in any other context, they would be held up as "big" effects.  I think the use of the word "subtle" is a bit confusing, since the authors seem to be referring to the bias being subconscious, rather than to the magnitude of the bias (which is not subtle).
  2. Even if these biases are subconscious, they are still sexist. I understand why the authors don't use this language in a published paper, but in discussing these results in the context of our own conduct, it seems important to not shy away from what is really going on. Describing these results as a subconscious bias, rather than sexism, may make them seem more excusable.
  3. The central contribution of the paper is to simply point out what is going on.  Once we are aware of our own biases, especially if they are subconscious, we can make a conscious effort to correct them. But in addition to each of us reflecting on our own actions, there are some easy institutional mechanisms that can be developed to help us avoid these biases. For example, universities could make it easy for scientists to receive job applications through an electronic system that automatically double-blinds the applicant.  (Many peer-reviewed journals do this when sending papers out to referees.)  Unfortunately, it is harder to protect students and researchers in day-to-day activities, so if sexist treatment persists even after the hiring process, this will be harder to address. But much more certainly could be done. For example, it should be standard that an oversight committee anonymously surveys students and employees regularly to determine if there is statistical evidence of discrimination within departments or individual laboratories (which tend to behave a bit like small fiefdoms, with little to no oversight of the principal investigator's behavior towards students/employees).  The NSF and various funding agencies frequently award money to labs for "teaching and mentoring," so they should make these anonymous evaluations and their analysis (or something similar) a requirement for this funding.
The authors are careful to check whether sexism is a strictly male phenomenon (i.e. men faculty discriminating against female students). They do this by constructing this table:


The authors find that both male and female mentors exhibit sexism. But the authors do not push the data as far as they could, as they make no statements about whether the bias of male mentors is larger or smaller than that of female mentors.  The authors write:
In support of hypothesis B, faculty gender did not affect bias (Table 1). Tests of simple effects (all d < 0.33) indicated that female faculty participants did not rate the female student as more competent [t(62) = 0.06, P = 0.95] or hireable [t(62) = 0.41, P = 0.69] than did male faculty. Female faculty also did not offer more mentoring [t(62) = 0.29, P = 0.77] or a higher salary [t(61) = 1.14, P = 0.26] to the female student than did their male colleagues. In addition, faculty participants’ scientific field, age, and tenure status had no effect (all P < 0.53). Thus, the bias appears pervasive among faculty and is not limited to a certain demographic subgroup.
And later in the discussion:
Our results revealed that both male and female faculty judged a female student to be less competent and less worthy of being hired than an identical male student, and also offered her a smaller starting salary and less career mentoring. Although the differences in ratings may be perceived as modest, the effect sizes were all moderate to large (d = 0.60–0.75). Thus, the current results suggest that subtle gender bias is important to address because it could translate into large real-world disadvantages in the judgment and treatment of female science students (39). Moreover, our mediation findings shed light on the processes responsible for this bias, suggesting that the female student was less likely to be hired than the male student because she was perceived as less competent. Additionally, moderation results indicated that faculty participants’ preexisting subtle bias against women undermined their perceptions and treatment of the female (but not the male) student, further suggesting that chronic subtle biases may harm women within academic science. Use of a randomized controlled design and established practices from audit study methodology support the ecological validity and educational implications of our findings (SI Materials and Methods).
It is noteworthy that female faculty members were just as likely as their male colleagues to favor the male student. The fact that faculty members’ bias was independent of their gender, scientific discipline, age, and tenure status suggests that it is likely unintentional, generated from widespread cultural stereotypes rather than a conscious intention to harm women (17). Additionally, the fact that faculty participants reported liking the female more than the male student further underscores the point that our results likely do not reflect faculty members’ overt hostility toward women. Instead, despite expressing warmth toward emerging female scientists, faculty members of both genders appear to be affected by enduring cultural stereotypes about women’s lack of science competence that translate into biases in student evaluation and mentoring.
Now here is a bit that I am adding. Looking at Table 1, it seemed like the bias for female mentors was larger, but it was hard to tell based on the layout of the table (you have to hold the differences in your head since they aren't written down). This caught my attention because it's an issue that was raised by Anne-Marie Slaughter's recent Atlantic article, which I had discussed extensively with family and friends.

So I copied the data from Table 1 into Excel, reorganized it and explicitly compared the magnitude of the bias based on the gender of the faculty-mentor. In the first two panels, the column "difference" is the magnitude of the bias in favor of male students. In the bottom panel, I compare these biases and take their difference (a difference-in-differences) to see whether male or female mentors are more biased (a positive number means female faculty are more biased). The last column lists how much larger the bias is for female faculty relative to male faculty.

Both male and female faculty exhibit sexism. But across all four measures, female faculty exhibit a larger bias than male faculty (I don't have the raw data, so I can't know if the difference is statistically significant in this sample -- but, the direction of the bias is clearly consistent across measures).  Without any evidence that the male student has more merit, the female faculty member is on average 15% more biased when it comes to evaluating whether the applicant is competent; and the female faculty member offers the male student an additional $920 in salary on top of the $3,400 extra that the male faculty offered him. Now, I am not trying to point the finger at women faculty to distract from the fact that male faculty are sexist. Discrimination by either group is unacceptable. I am simply trying to highlight one additional point that was skipped over in the original analysis and which Anne-Marie Slaughter argues is an important (but under-discussed) obstacle for professional women.
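For concreteness, the salary difference-in-differences can be written out in a few lines. The gaps ($3,400 for male faculty, an extra $920 on top of that for female faculty) come from the discussion above; the salary levels in the sketch are hypothetical placeholders chosen only to reproduce those gaps:

```python
# Salary difference-in-differences. The LEVELS below are hypothetical
# placeholders; only the gaps ($3,400 and $3,400 + $920) follow the post's
# reading of Table 1.
offers = {
    ("male_faculty", "male_student"): 30400,
    ("male_faculty", "female_student"): 27000,    # male faculty: $3,400 gap
    ("female_faculty", "male_student"): 30400,
    ("female_faculty", "female_student"): 26080,  # female faculty: $4,320 gap
}

def bias(faculty):
    """Extra salary offered to the male student by one faculty group."""
    return (offers[(faculty, "male_student")]
            - offers[(faculty, "female_student")])

male_faculty_bias = bias("male_faculty")      # 3400
female_faculty_bias = bias("female_faculty")  # 4320

# The difference-in-differences: how much MORE biased female faculty are.
diff_in_diff = female_faculty_bias - male_faculty_bias  # 920
```

The first difference (within each faculty group) measures that group's bias; the second difference (across groups) is the $920 figure quoted above.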

The findings of this study are important, and they indicate that all of us, men and women alike, should take a cold hard look at our own decisions, behaviors and tendencies. Sexism of this magnitude and scale, among some of the most highly educated members of society is unacceptable. If you observe a colleague who treats their male and female students differently, or if you see that they run a lab full of happy young men and miserable young women, take them aside and ask them what is going on.  It is not easy to call a colleague out on these things, but they would probably rather hear it from you than an internal review board -- and more importantly, we owe it to our students and employees who work hard for us and look up to their mentors for education, guidance and leadership.

It is morally indefensible that sexism of this magnitude persists in our scientific communities and that the young women who are discriminated against suffer at the hands of their teachers and mentors.  Moreover, we all lose out every time that a talented young woman, who would have made scientific discoveries benefitting the world, leaves science because of discrimination.


Median Voter Theorem: proof by data visualization

The "median voter theorem" is a game-theoretic solution to democratic politics. It basically says that in equilibrium for a two-party system, both candidates will have platforms that reflect the values of the median voter (in this election, Ohio). The second prediction is that both parties get almost exactly 50% of the vote, with very small changes in voting behavior near the median voter determining who wins.

Co.Design points us to an excellent data visualization by Felix Gonda of 156 years of voting behavior in the US (for president, the Senate and the House of Representatives). Run the little scroll-bar for years and you'll see the power of the median voter [theorem].

Check it out.


Should people living in high-risk locations get relief when they are hit by natural disasters?

National survey evidence on disasters and relief: Risk beliefs, self-interest, and compassion
W. Kip Viscusi, Richard J. Zeckhauser
Abstract: A nationally representative sample of respondents estimated their fatality risks from four types of natural disasters, and indicated whether they favored governmental disaster relief. For all hazards, including auto accident risks, most respondents assessed their risks as being below average, with one-third assessing them as average. Individuals from high-risk states, or with experience with disasters, estimate risks higher, though by less than reasonable calculations require. Four-fifths of our respondents favor government relief for disaster victims, but only one-third do for victims in high-risk areas. Individuals who perceive themselves at higher risk are more supportive of government assistance.
Un-gated version here. The conclusion is succinct:
This paper explored two broad questions: 1. What factors drive individuals’ beliefs about their risks from various disasters, and how accurate are those beliefs? 2. What policies do individuals favor for disaster relief, and how do those policies relate to their assessed risks?
The answer to the first question is that risk beliefs have many rational components, but fall short of what one would expect with fully rational Bayesian assessments of risk. Personal experience and location-related risk influence risk assessments in the right direction, but insufficiently. These factors should have a very powerful influence, as our Lorenz Curve for fatality risks by state shows that natural disaster risks are highly concentrated, unlike auto fatality risks. 
For each of our four natural disasters, more than half of our respondents thought that their fatality risk from natural disasters was below average, and another roughly thirty-five percent thought their risk was average. Even people who had experienced disasters did not differ markedly from those who had not. 
A common explanation for apparent underestimation of risks, such as those from auto accidents, is that individuals suffer from an illusion of control. That explanation does not apply to natural disasters. A plausible hypothesis, worthy of further study, is that individuals actually understand the skewness in the distribution of risk. Though only half of the population can be below median risk, the vast majority are below average in risk. That is surely true for auto accidents as well, the favorite domain for “control” hypotheses. 
More than four-fifths of our respondents favored government assistance for victims of natural disasters, but this fraction fell to only one-third when the natural disasters happened to people living in high-risk areas. This decline suggests that respondents intuitively understand the concept of moral hazard. We label this phenomenon “efficient compassion.” That is, there is a strong element of compassion in their responses, but it is tempered when disaster victims have knowingly exposed themselves to high risk. Individuals who perceive themselves to be at greater personal risk are more supportive of government assistance, as are groups that tend to be liberal politically. Black respondents, who may have been particularly struck by the governmental failure to rescue the black population of New Orleans from Hurricane Katrina, are much more supportive of continued aid to that city. In short, policy preferences for disaster relief reflect both compassion for the unfortunate, and a dollop of self-interest.
More interesting excerpts:
Political orientation is a main driver of the support for relief, not just for the efficient compassion questions, but for all the relief options. In every instance, Republicans have a consistently lower probability of supporting the relief policies than do Democrats and independents. After controlling for political affiliation, blacks have higher probabilities for support; females also have higher probabilities, though not where moral hazard is a prime factor. Presumably, these groups are more liberal than their mere political affiliation indicates... 
The equations also included a measure of individual risk-taking behavior—the general health risk exposure of the respondent as reflected in whether they currently smoke cigarettes. Smokers face a considerable smoking-related mortality risk; their probability of premature death due to smoking is 1/6 to 1/3. The smoker variable consequently captures willingness to expose oneself to extremely large health risks. Beyond this, the smoker variable may also reflect a tolerance for others who take risks and are guilty of moral hazard, since smokers are frequent targets of criticism for their own risk-taking behavior. For the two relief questions involving individual choices to engage in risky behavior, smokers are more forgiving of decisions involving moral hazard and are more willing to support relief. Both effects are significant at the 10 percent level.