6.20.2010

Iceland, Freedom of Expression, and Institutional Competition


Iceland's Althing* just passed a resolution, heavily pushed by Julian Assange, the founder of Wikileaks (and recently the subject of a New Yorker profile, here), that seeks to make Iceland's protection of freedom of expression, especially over the internet, the strongest in the world.

Now, there are multiple ways of thinking about why this was done (that it's an attempt to bring international accolades to a country that's been rather macroeconomically embarrassed of late doesn't seem out of the realm of possibility...), but what I find most interesting about it is that it's yet another example of traditionally noneconomic things getting some very economic treatment. The language used to cover the bill thus far has been quite evocative of another form of "institutional competition," namely tax havens. It's conceptually similar to the way that places like the Cayman Islands have decided to give themselves a comparative advantage among investors by setting low tax rates and writing regulations that encourage the creation and hassle-free maintenance of offshore investment vehicles.

Yes, there's a fundamental information asymmetry difference here in that freedom of expression is, by definition, observable, so unless they publish anonymously, dissidents from other countries will only be protected from Iceland's laws, which are probably not what they were worrying about in the first place. That makes it a lot less attractive than the knowledge that I could dump some ill-gotten gains in a numbered account in Liechtenstein and never have them found, taxed, or linked to my ill-getting. But nonetheless: the decision to institutionally compete is there, and I'm curious to see how it'll pan out and whether it'll have any material effect.

Now all we need is a greater degree of institutional differentiation and a reduction in migration barriers and we can get some megascale Tiebout sorting. Lower taxes for Russian-style restrictions on free speech, anyone?

* I feel like somewhere there's an undergrad viking mythology professor who's very happy I'm linking to the webpage of the oldest parliamentary institution in the world. And no, I can't read Icelandic. But Google Chrome does have Google Translate built in...

6.17.2010

Watching soccer and celebrating planetary-scale achievements

I was watching the World Cup live on my laptop during breakfast today when I was struck by the massive achievements underlying this simple activity.

The players in Cape Town are 7,800 miles (12,600 km) away from my NYC apartment (as the albatross flies, following the curved surface of the planet: see Google Earth image). Only 200 years ago, on a fast clipper ship sailing at 16.7 mph, it would have taken 19.5 days for information to travel that distance. Driving a similar distance non-stop at 60 mph would take 5.4 days. But now I can watch a goal being scored in almost real time (it takes light, which is slightly faster than the transmitted signals, only 0.042 seconds to travel that distance in a vacuum) while eating a bowl of oatmeal.
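For the skeptical, the figures check out; here is the back-of-the-envelope arithmetic, written out in Python:

```python
# Quick check of the travel-time figures above (pure arithmetic).
miles = 7800                       # Cape Town to NYC, great-circle distance
clipper_mph, car_mph = 16.7, 60
print(miles / clipper_mph / 24)    # ~19.5 days on a fast clipper ship
print(miles / car_mph / 24)        # ~5.4 days driving non-stop
km = 12600
c_km_per_s = 299792.458            # speed of light in a vacuum, km/s
print(km / c_km_per_s)             # ~0.042 seconds for light to cover it
```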

Not only have we developed the technology to send these signals around the world, but we have built the infrastructure to carry them, we have developed the legal and institutional machinery necessary to support and organize such massive infrastructure investments, and we have implemented an economic system that rewards individuals for doing all of the above. Moreover, the information isn't just traveling from Cape Town to my breakfast table, but to a few billion breakfast/lunch/dinner tables around the world.

It's easy to get down on humanity when you're focused on our mistakes and shortcomings, but sometimes it's worth it to step back and celebrate our achievements. We've come a long way from where we once were.

Risk Denialism and the Costs of Prevention


Sol's posts about the BP oil spill (here and here) got me thinking a little bit about the interplay between denialism and risk and how deeply related that is to the sprawling mess of concepts we put under the umbrella of "sustainable development."

One of the major themes in the coverage of the spill has been how poorly equipped BP was to deal with a spill of this magnitude. Now, regardless of your opinion of BP as a company, ex post this is a bad thing. BP would much rather, right now, be known for its quick, competent, and effective response to a major catastrophe than be roundly (and, for that matter, rightly) villainized. They've lost about half their market cap since the spill happened, and now they have to set up a $20 billion cleanup escrow account. Why weren't they better prepared?

There are a few potential answers to that. The spill's magnitude may simply have been completely unforeseeable, a "Black Swan"-style event that BP can be forgiven for not anticipating, the same way New Yorkers can be forgiven for not buying tornado insurance. Or perhaps, net of prior cost-benefit analysis, the probability of a spill this big was so low compared to the cost of maintaining intervention equipment that BP decided to skimp on it, akin to how most New Yorkers spurn flood or wind insurance despite the fact that hurricanes intermittently hammer the city. But the answer that now seems most likely is that there was a fundamental disconnect between what the rig workers told upper management (internal BP documents refer to the well pre-spill as a "nightmare") and what upper management told them to do. This is akin to New Yorkers not buying renter's insurance after they've been told the burglary rate in their neighborhood is quite high.

What I find interesting about this is how closely the framework of this story jibes with so many of the other narratives in environment and development. A group of technically trained experts warns of a potentially catastrophic risk (climate change, overfishing, pandemic flu), only to have their warnings discarded by cost-bearing decision makers (politicians, corporate executives, voters) who deny that the risk is as great as claimed, or even extant. In these scenarios, it's not that the decision makers wouldn't face massive costs should they turn out to be wrong; it's that there's a big gap between the probability of facing those costs implied by their behavior and the probability they're being told by their technical advisors.

Why? Well, cost-bearing seems to have a very strong influence on how someone interprets difficult-to-verify information about risks, especially social / shared risks. In psychology this is known as the defensive denial hypothesis, and there's a fair bit of empirical evidence to support it. The BP managers tasked with running the platform knew the immediate costs of reducing flow, or stopping drilling, or increasing safeguards, and it seems highly likely that this influenced how they interpreted warnings from the rig workers. The same sort of phenomenon seems to occur in a lot of other areas: fishermen are more sanguine about the risks of overfishing, oil executives downplay the risk of climate change, and derivatives traders claim that their activities are nowhere near dangerous enough to warrant regulation. Now, many other factors are clearly at play in all of these, from discounting to strategic maneuvering to cheap talk, but given the genuineness with which deniers of, say, climate change argue their case, it seems difficult to say that they are not at least somewhat personally convinced that their interpretation of the evidence is correct.

Now, that on its own isn't terribly revelatory, but when you combine it with the notion that perceptions of costs can be manipulated as well, you get an interesting result. The more (or less) salient a risk's mitigation cost is made, the lower (or higher) people come to judge the probability of that risk. Witness, for example, the repeated attempts to link efforts to combat climate change to personal tax burden by those who think (or claim to think) that it's a bogus risk. The cost doesn't even have to be real: the Jenny McCarthy-led trend of parents refusing to vaccinate their children (creating an actual risk of polio, measles, etc.) seems to be almost entirely a function of parents being led to believe that there is a potential cost to vaccination (possible autism) that is not supported by scientific evidence. BP's managers faced the immediate and salient costs of risk mitigation steps that they'd need to justify to higher-ups, and they behaved in a way that suggests they didn't think the risk of a major industrial accident was worth fretting over.

So what? I think the lesson here is that while risk denial is often depicted as stemming from short-sightedness, or ignorance, or political zealotry, it's actually pretty common human behavior. People have preferences and like to align their behavior with them, and if that means they have to subconsciously alter their assessments of how dangerous some far-off activity may be, they'll do so. If we are concerned about arresting climate change, or preserving biodiversity, or managing natural resources, then it's important to keep in mind that the way people perceive the incidence of mitigation costs will not only affect their preferences in terms of raw cost-benefit analysis, but also genuinely move their perception of the riskiness of their behavior. If we want political support for efforts to deal with these sorts of risks, it thus seems as important to find, and then emphasize, ways in which the costs can be made low and painless as it is to stress the potential for future damages.

6.16.2010

Standard error adjustment (OLS) for spatial correlation and serial correlation in panel data (Stata and Matlab)

I've been doing statistical work on climate impacts (see a typhoon climatology of the Philippines to the right) and have been having trouble finding code that will properly account for spatial correlation and serial correlation when estimating linear regression models (OLS) with panel (longitudinal) data.  So I wrote my own scripts for Matlab and Stata.  They implement GMM estimates similar to Newey-West (see Conley, 2008). You can download them here.

They are "beta versions" at this point, so they may contain errors.  If you find one, please let me know.

In the future, I may try to document what I've learned about the different estimators out there. But for the time being, some of the advantages of what I've written are listed below (with a rough sketch of the estimator's logic after the list):
  • The weighting kernels are isotropic in space
  • The spatial weighting kernel can be uniform (as Conley, 2008 recommends) or decay linearly (a la Bartlett).
  • Serial correlation at a specific location is accounted for non-parametrically
  • Locations are specified in lat-lon, but kernel cutoffs are specified in kilometers; the code accounts for the curvature of the earth (to first order), so relatively large spatial domains can be handled in a single sample
  • An option allows you to examine the impact of adjustments for spatial correlation and spatial + serial correlation on your standard errors
  • The Stata version follows the format of all Stata estimates, so it should be compatible with post-estimation commands (e.g. you can output your results using "outreg2").
[If you use this code, please cite: Hsiang (PNAS 2010)]
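For readers who just want to see the shape of the estimator, here is a minimal illustrative sketch, in Python, of the kind of Conley-style "sandwich" variance described above: OLS point estimates, a uniform spatial kernel within a distance cutoff for contemporaneous pairs, and Bartlett weights in time for within-unit serial correlation. This is not the Stata/Matlab code itself, and the function and parameter names (spatial_hac_ols, dist_cutoff_km, lag_cutoff) are made up for this sketch; for actual use, download the scripts above.

```python
# Illustrative sketch ONLY -- not the downloadable Stata/Matlab scripts.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km, treating the earth as a sphere."""
    r = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

def spatial_hac_ols(y, X, lat, lon, unit, time, dist_cutoff_km=100.0, lag_cutoff=5):
    """OLS point estimates with a spatial + serial (HAC) variance estimate."""
    n, k = X.shape
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    e = y - X @ beta

    meat = np.zeros((k, k))
    for i in range(n):
        for j in range(n):
            w = 0.0
            if time[i] == time[j]:
                # contemporaneous pairs: uniform kernel within the distance cutoff
                if haversine_km(lat[i], lon[i], lat[j], lon[j]) <= dist_cutoff_km:
                    w = 1.0
            elif unit[i] == unit[j]:
                # same location, different periods: Bartlett weight in the lag
                lag = abs(time[i] - time[j])
                if lag <= lag_cutoff:
                    w = 1.0 - lag / (lag_cutoff + 1.0)
            if w:
                meat += w * e[i] * e[j] * np.outer(X[i], X[j])

    V = bread @ meat @ bread          # sandwich covariance of beta
    return beta, np.sqrt(np.diag(V))
```

The real scripts do considerably more (Stata estimation syntax, wildcard handling, the dropvar option, etc.), so treat this only as a map of the weighting scheme described in the bullets above.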


STATA VERSION 2 UPDATE 2013: Thanks to my field-testing team (Gordon McCord and Kyle Meng), several bugs in the code have been fixed and additional options have been added.  Most useful changes: the code now correctly accepts the wildcard "*" when specifying variables, and the option "dropvar" can be used to drop variables that Stata regards as 'too collinear' (contributed by KM).

STATA VERSION 3 UPDATE 2018: Thanks to some careful bug-chasing by Mathias Thoenig and Jordan Adamson, I've fixed a single line of code that led to a miscalculation of the weights for autocorrelation across within-unit panel observations. This error did not affect the Matlab implementation, nor did it affect the calculations for the influence of spatial autocorrelation within contemporaneous observations or the adjustments for heteroskedasticity. The previous version (v2) sometimes over- or under-stated the standard error adjustment due to autocorrelation; the magnitude of this effect depends on both the serial-correlation structure in the data and the maximum lag length parameter (lagcutoff) chosen by the user. For very long lag lengths (e.g. infinity) the standard error estimates are too small, but for shorter lag lengths the bias may be of either sign.

My (currently ad-hoc) help file for the Stata script is below the fold. The Matlab code has an associated help-file embedded.

Spatial data analysis resources

A nice site at Carleton College lists many resources for spatial data analysis (books, programs, data, etc.).  Be sure to explore the menu bar on the left, since not all the links are visible initially.

It's been added to the page of Meta-resources.

6.13.2010

Free online JPAL lectures about conducting randomized field trials

Real experiments have set a new standard for establishing causality in development economics.  In an evaluation process that looks very similar to clinical trials in medicine, outcomes for individuals randomly assigned to receive a social policy (the "treatment group") are compared against outcomes for individuals who were not (the "control group"). The argument for this kind of work is that (1) the effects of social policies must be measured carefully if we want to understand whether they are worth implementing at a national level, and (2) historical analysis of policies is often insufficient because there is no "control group" against which outcomes can be compared.  For a good example of this kind of research, read this Tech Review article on Ben Olken's work on democracy in Indonesia.
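To make the logic concrete, here is a toy sketch in Python with simulated data (nothing to do with Olken's actual study): because assignment is random, the simple difference in mean outcomes between the treatment and control groups recovers the effect built into the simulation.

```python
# Toy sketch of randomized evaluation: difference in means under random assignment.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
treated = rng.integers(0, 2, n)                       # coin-flip assignment
outcome = 1.0 + 0.3 * treated + rng.normal(0, 1, n)   # policy adds 0.3 on average

t, c = outcome[treated == 1], outcome[treated == 0]
effect = t.mean() - c.mean()
se = np.sqrt(t.var(ddof=1) / t.size + c.var(ddof=1) / c.size)
print(f"estimated effect: {effect:.2f} (standard error {se:.2f})")
```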

Not all modern work in development is of this type, since not all interventions or questions are suitable for the method (e.g. my own work studying the impact of hurricanes on development cannot be), but there is a trend toward relying on it more and more.


The Abdul Latif Jameel Poverty Action Lab (JPAL) {co-founded by Esther Duflo, the subject of an earlier post by Jesse} is a group of researchers who run randomized field trials of various policy interventions for poor communities. While many people do this kind of work, the folks at JPAL have honed the method and mastered the logistics (which are tremendously complex).  A course taught by several people at JPAL for executives, about how to conduct a randomized trial (and why), has been recorded and is free online.  To access it:
  1. open up iTunes (which you can download for free here)
  2. go to the iTunes Store (there is a tab on the left for it)
  3. search "poverty action lab executive training"
All the lectures should show up and are free to download.

6.10.2010

How wise is a cellulosic ethanol mandate?

The Tech Review describes recent "slow progress" on the development of commercial cellulosic ethanol.  Apparently, some companies are moving ahead with strategies to scale up:
ZeaChem, based in Lakewood, CO, has begun construction of a 250,000-gallon-per-year demonstration plant in Boardman, OR, that will produce chemicals from sugar and eventually ethanol from wood and other cellulosic materials...
The company's strategy for making the business a financial success and attracting investment for commercial scale plants is to start by producing ethyl acetate, which "takes about half the equipment and sells for twice the price of ethanol, so it's an ideal starter product," he says. Other biofuels companies are taking a similar approach--looking for high value products to offset high costs, at least initially. ZeaChem plans to incorporate the technology into an existing corn ethanol plant for commercial production of ethyl acetate. "If all goes well, that plant could be in operation by the end of next year," he says. A stand-alone commercial cellulosic ethanol plant would follow. It could switch between selling acetic acid, ethyl acetate, or ethanol, depending on the market.
If you're at all confused about why this is happening, given that we don't know how to make cellulosic ethanol without a net energy expenditure, it's driven by government support:

A renewable fuel standard signed into law in late 2007 requires the use of 100 million gallons of cellulosic ethanol in the United States this year and will ramp up to 16 billion gallons by 2022. But so far no commercial plants are operating, according to the Biotechnology Industry Organization (BIO), a leading trade group representing biofuel companies. The U.S. Environmental Protection Agency announced in February that it was scaling back the mandates to just 6.5 million gallons, which could be supplied by existing small-scale demonstration plants and new plants expected to open this year. That's up from approximately 3.5 million gallons produced in 2009.
Is it wise for our government to be driving this kind of investment?  One concern is that biofuels, in general, lead to the production of crops for energy, which necessarily increases the prices of the food crops they displace.  In a recent working paper, Wolfram Schlenker and Michael Roberts try to identify (using weather shocks) the effect of the biofuel mandate on world food prices.  They predict that the biofuel mandate, as it stood at the time of writing, would increase world food prices by 20-30%.  Further, they argue that since agricultural production will expand to meet this demand, and expansion of cultivated land releases CO2 on net, the policy may not even reduce GHG emissions.

This second point reminds me of a blog post I wrote two years ago, where I argued that innovations in the technology for the conversion of cellulose into fuel may have dramatic externalities.  If biomass that is usually considered "useless" suddenly has a shadow price, the strategic incentives to harvest entire ecosystems may be dangerously strong.  

The government should almost certainly not be subsidizing the development of this technology, and one can argue (depending on how risk-averse you are) that it should be taxing it for the risk we are all bearing should it succeed.

6.06.2010

1980

This is surreal to me. This is an excerpt from an article in the New York Times on April 12, 1980.

I repeat. 1980. Thirty years ago. The internet was still science fiction then.

It is about the oil spill caused by the Ixtoc I oil well in the Gulf of Mexico, which spewed some 140 million gallons over roughly ten months in 1979-80:
History's largest oil spill has been a fiasco from beginning to end. Human errors and ineffective safety equipment caused the blowout, and none of the "advanced" techniques for plugging the well or recapturing the oil worked satisfactorily thereafter. The gusher ran wild for nearly ten months....
The enduring question is whether a devastating blowout could occur in our own offshore waters....
A second question: Could a blowout in American waters be quickly capped and cleaned up? Ixtoc I shows that control technology is still quite primitive.  Attempts were made to jam the pipe back into the hole; a large cone was lowered over the well to capture oil and gas, and steel and lead balls were dropped down the well to plug it. Nothing worked. Relief wells to pump in mud failed for months to reach their target... The mop-up techniques did not function effectively either....
Most Americans would accept risking such blowouts to find oilfields as rich as Mexico's.  But the lessons of Ixtoc I can help reduce the risks.
If you don't know why this is darkly funny, look over the list of strategies BP has employed to stop the current spill.  There is obviously something wrong with the incentives to innovate in emergency/cleanup technology.  In 1980, the techniques we are still using today were being joked about as "advanced."

The sheer volume of innovation that has occurred over the last 30 years, across an uncountable number of research fields, is astonishing and a tremendous feat of human ingenuity.  The fact that effectively zero innovation has occurred in oil-drilling catastrophe management suggests that nobody believed there was a sufficient payout to warrant such investments.  Since these catastrophes are massive public bads, and the cost of the externality is almost certainly not internalized by the oil companies, standard economic theory would suggest the government needs to create incentives to invest in these technologies.  The problem is doubly difficult because we often think that research even in profitable industries is under-supplied, since researchers cannot capture the full value of their work.  So policies to reduce our risk must counteract both a public-bad problem and an innovation problem.

The obvious tendency is to heap blame on BP.  But the current situation is the result of 30 years (at least) of improper policy.  Whether the US government had sufficient information in 1980 to realize that its policies motivating innovation [in these technologies] were too weak, I cannot say.  But this article seems to suggest that perhaps it did.

6.05.2010

(Oil spill & windmill) x (technology & management)

The BP spill is tragic, but it's gotten me learning about oil drilling technology and history. Some nice sites by the NYTimes, which I think are generally much more interesting than their coverage of the political drama:

A graphical explanation of how the well was supposed to be plugged.

A reference describing the different attempts at stopping the leak.

A timeline of historical oil spills, with actual newspaper clippings from the events and descriptions of the legislation that followed each.

Also quite depressing is this innovative way of communicating to people exactly how large an area is affected by the slick.

After all of this, I think (or at least hope) that we're all more educated about the technology underlying our massive energy infrastructure.

Trying to be a pragmatist, I keep looking at these diagrams (especially the ones that show how far underwater, and underground, the leaking well is) and thinking to myself, "is this really easier than windmills, seriously?" It's understandable (at least from a microeconomist's perspective) that power companies with massive amounts of sunk capital would misrepresent the costs of a transition to cleaner technology, even if that transition would have large positive externalities. Interestingly, on this point, a recent NREL study suggests that the costs of stabilizing energy supply using wind and solar power have been exaggerated.

A major argument against more rapid investment in renewables, offered by many energy experts, has been that it is difficult to ensure a steady supply of energy when weather and daylight fluctuate. To me, this always seemed dubious, since the US is enormous and there are large anti-correlations in wind and cloud cover across the country. It seemed that if panels and wind farms were distributed intelligently across space, we could take advantage of these known structures to provide a smoother power supply to the country. All it would take is some planning, knowledge of meteorology, and some cooperation among utilities. This is exactly the finding of the NREL study. They say that 35% of energy could be supplied by renewables without installing substantial backup capacity. All it would take is a little coordination.
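To make that intuition concrete, here's a toy simulation in Python (illustrative numbers only, nothing from the NREL study): two generation sites whose output moves in opposite directions with a shared weather pattern produce a far steadier combined supply than either does on its own.

```python
# Toy illustration of geographic smoothing: anti-correlated sites average out.
import numpy as np

rng = np.random.default_rng(1)
hours = 10_000
weather = rng.normal(0, 1, hours)                              # shared weather pattern
site_a = 1.0 + 0.5 * weather + 0.3 * rng.normal(0, 1, hours)   # output rises with the pattern
site_b = 1.0 - 0.5 * weather + 0.3 * rng.normal(0, 1, hours)   # output falls with the pattern
combined = 0.5 * (site_a + site_b)

print("std of site A output:   ", round(site_a.std(), 3))
print("std of site B output:   ", round(site_b.std(), 3))
print("std of the combined mix:", round(combined.std(), 3))
print("correlation between A,B:", round(float(np.corrcoef(site_a, site_b)[0, 1]), 2))
```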