6.20.2010

Iceland, Freedom of Expression, and Institutional Competition


Iceland's Althing* just passed a resolution, heavily pushed by Julian Assange, the founder of Wikileaks (and recently the subject of a New Yorker profile, here), that seeks to make Iceland's protection of freedom of expression, especially over the internet, the strongest in the world.

Now, there are multiple ways of thinking about why this was done (that it's an attempt to bring international accolades to a country that's been rather macroeconomically embarrassed of late doesn't seem out of the realm of possibility...), but what I find most interesting about it is that it's yet another example of traditionally noneconomic things getting some very economic treatment. The language used to cover the bill thus far has been quite evocative of another form of "institutional competition," namely tax havens. It's conceptually similar to the way that places like the Cayman Islands have given themselves a comparative advantage among investors by enacting low taxes and regulations that encourage the creation and hassle-free maintenance of offshore investment vehicles.

Yes, there's a fundamental information-asymmetry difference here: freedom of expression is, by definition, observable, so unless they act anonymously, dissidents from other countries will only be protected under Iceland's laws, which is probably not what they're worried about in the first place. That makes it a lot less attractive than the knowledge that I could dump some ill-gotten gains in a numbered account in Liechtenstein and never have them found, taxed, or linked to my ill-getting. Nonetheless, the decision to institutionally compete is there, and I'm curious to see how it'll pan out and whether it'll have any material effect.

Now all we need is a greater degree of institutional differentiation and a reduction in migration barriers and we can get some megascale Tiebout sorting. Lower taxes for Russian-style restrictions on free speech, anyone?

* I feel like somewhere there's an undergrad viking mythology professor who's very happy I'm linking to the webpage of the oldest parliamentary institution in the world. And no, I can't read Icelandic. But Google Chrome does have Google Translate built in...

6.17.2010

Watching soccer and celebrating planetary-scale achievements

I was watching the World Cup live on my laptop during breakfast today when I was struck by the massive achievements underlying this simple activity.

The players in Cape Town are 7,800 miles (12,600 km) away from my NYC apartment (as the albatross flies, following the curved surface of the planet: see Google Earth image). Only 200 years ago, on a fast clipper ship sailing at 16.7 mph, it would have taken 19.5 days for information to travel that distance. Driving a similar distance non-stop at 60 mph would take 5.4 days. But now I can watch a goal being scored in almost real time (it takes light, which is slightly faster than the transmitted signals, only 0.042 seconds to travel that distance in a vacuum) while eating a bowl of oatmeal.
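These back-of-the-envelope figures are easy to check with a quick great-circle calculation (a sketch; the NYC and Cape Town coordinates below are approximate assumptions of mine):

```python
import math

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance on a spherical Earth (radius 3959 mi)."""
    R = 3959.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# Approximate coordinates: NYC (40.71 N, 74.01 W), Cape Town (33.93 S, 18.42 E)
d = great_circle_miles(40.71, -74.01, -33.93, 18.42)   # ~7,800 miles
clipper_days = d / 16.7 / 24                            # ~19.5 days under sail
driving_days = d / 60.0 / 24                            # ~5.4 days at highway speed
light_seconds = d * 1.60934 / 299_792.458               # ~0.042 s in a vacuum
```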

Not only have we developed the technology to send these signals around the world, but we have built the infrastructure to carry them, we have developed the legal and institutional machinery necessary to support and organize such massive infrastructure investments, and we have implemented an economic system that rewards individuals for doing all of the above. Moreover, the information isn't just traveling from Cape Town to my breakfast table, but to a few billion breakfast/lunch/dinner tables around the world.

It's easy to get down on humanity when you're focused on our mistakes and shortcomings, but sometimes its worth it to step back and celebrate our achievements.  We've come a long way from where we once were.

Risk Denialism and the Costs of Prevention


Sol's posts about the BP oil spill (here and here) got me thinking a little bit about the interplay between denialism and risk and how deeply related that is to the sprawling mess of concepts we put under the umbrella of "sustainable development."

One of the major themes in coverage of the spill has been how poorly equipped BP was to deal with a spill of this magnitude. Now, regardless of your opinion of BP as a company, ex post this is a bad thing. BP would much rather, right now, be known for its quick, competent and effective response to a major catastrophe than be roundly (and, for that matter, rightly) vilified. They've lost about half their market cap since the spill happened, and now they have to set up a $20 billion cleanup escrow account. Why weren't they better prepared?

There are a few potential answers. The spill's magnitude may simply have been completely unforeseeable, a "Black Swan"-style event that BP can be forgiven for not anticipating, the same way New Yorkers can be forgiven for not buying tornado insurance. Or perhaps, net of prior cost-benefit analysis, the probability of a spill this big was so low compared to the cost of maintaining intervention equipment that BP decided to skimp on it, akin to how most New Yorkers spurn flood or wind insurance despite the fact that hurricanes intermittently hammer the city. But the answer that now seems most likely is that there was a fundamental disconnect between what the rig workers told upper management (internal BP documents refer to the well pre-spill as a "nightmare") and what upper management told them to do. This is akin to New Yorkers not buying renter's insurance after being told the burglary rate in their neighborhood is quite high.

What I find interesting about this is how closely the framework of this story jibes with so many of the other narratives in environment and development. A group of technically trained experts warns of a potentially catastrophic risk (climate change, overfishing, pandemic flu) only to have their warnings discarded by cost-bearing decision makers (politicians, corporate executives, voters) who deny that the risk is as great or even extant. In these scenarios, it's not that the decision makers wouldn't face massive costs should they turn out to be wrong, it's that there's a big difference between what their behavior indicates they think the probability of facing those costs is and what they're being told by technical advisors.

Why? Well, cost-bearing appears to have a very strong influence on how someone interprets difficult-to-verify information about risks, especially social / shared risks. In psychology this is known as the defensive denial hypothesis, and there's a fair bit of empirical evidence to support it. The BP managers tasked with running the platform knew the immediate costs of reducing flow, or stopping drilling, or increasing safeguards, and it seems highly likely that this influenced how they interpreted warnings from the rig workers. The same sort of phenomenon seems to occur in a lot of other areas: fishermen are more sanguine about the risks of overfishing, oil executives downplay the risk of climate change, and derivatives traders claim that their activities are nowhere near dangerous enough to warrant regulation. Now, many other factors are clearly at play in all of these, from discounting to strategic maneuvering to cheap talk, but given the genuineness with which deniers of, say, climate change argue their case, it seems difficult to say that they are not at least somewhat personally convinced that their interpretation of the evidence is correct.

Now that on its own isn't terribly revelatory, but when you combine it with the notion that perceptions of costs can be manipulated as well, you get an interesting result. The more (or less) salient a risk's mitigation cost is made, the lower (or higher) people come to view the probability of that risk. Witness, for example, the repeated attempts to link efforts to combat climate change to personal tax burden by those who think (or claim to think) that it's a bogus risk. The cost may not even be real: the Jenny McCarthy-led trend of parents refusing to vaccinate their children (creating an actual risk of polio, measles, etc.) seems to be almost entirely a function of parents being led to believe that there is a potential cost to vaccination (possible autism) that is not supported by scientific evidence. BP's managers faced the immediate and salient costs of risk mitigation steps that they'd need to justify to higher-ups, and behaved in a way that seems to indicate they didn't think the risk of a major industrial accident was worth fretting over.

So what? I think the lesson here is that while risk denial is often depicted as stemming from short-sightedness, or ignorance, or political zealotry, it's actually pretty common human behavior. People have preferences and like to align their behavior with them, and if that means they have to subconsciously alter their assessments of how dangerous some far-off activity may be, they'll do so. If we are concerned about arresting climate change, or preserving biodiversity, or managing natural resources, then it's important to keep in mind that the way people perceive the incidence of mitigation costs will not only affect their preferences in terms of raw cost-benefit analysis, but also genuinely move their perception of the riskiness of their behavior. If we want political support for efforts to deal with these sorts of risks, it thus seems as important to find and then emphasize ways in which the costs can be made low and painless as it is to stress the potential for future damages.

6.16.2010

Standard error adjustment (OLS) for spatial correlation and serial correlation in panel data in (Stata and Matlab)

I've been doing statistical work on climate impacts (see a typhoon climatology of the Philippines to the right) and have had trouble finding code that will properly account for spatial correlation and serial correlation when estimating linear regression models (OLS) with panel (longitudinal) data. So I wrote my own scripts for Matlab and Stata. They implement GMM estimates similar to Newey-West (see Conley, 2008). You can download them here.

They are "beta versions" at this point, so they may contain errors. If you find one, please let me know.

In the future, I may try to document what I've learned about the different estimators out there. But for the time being, some of the advantages of what I've written are:
  • The weighting kernels are isotropic in space.
  • The spatial weighting kernel can be uniform (as Conley, 2008 recommends) or decay linearly (à la Bartlett).
  • Serial correlation at a specific location is accounted for non-parametrically.
  • Locations are specified in lat-lon, but kernel cutoffs are specified in kilometers; the code accounts for the curvature of the earth (to first order), so relatively large spatial domains can be handled in a single sample.
  • An option allows you to examine the impact of adjustments for spatial correlation and for spatial + serial correlation on your standard errors.
  • The Stata version follows the format of all Stata estimates, so it should be compatible with post-estimation commands (e.g. you can output your results using "outreg2").
[If you use this code, please cite: Hsiang (PNAS 2010)]
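For intuition, here is a minimal Python sketch of the estimator's logic — not the actual Stata/Matlab code, and all function and argument names are my own. OLS point estimates get a sandwich covariance whose "meat" sums score cross-products over pairs of observations that are either contemporaneous and within the distance cutoff (uniform or Bartlett weight in space) or within-unit and within the lag cutoff (Newey-West weight in time):

```python
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance in km (the posted code uses a first-order
    curvature correction instead; an exact sphere is fine for a sketch)."""
    R = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    a = (np.sin((p2 - p1) / 2) ** 2
         + np.cos(p1) * np.cos(p2) * np.sin(np.radians(lon2 - lon1) / 2) ** 2)
    return 2 * R * np.arcsin(np.sqrt(a))

def conley_ols_se(y, X, lat, lon, unit, time, dist_cutoff_km, lag_cutoff,
                  bartlett_space=False):
    """OLS with a spatial-HAC sandwich covariance (illustrative O(n^2) loop)."""
    n, k = X.shape
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ (X.T @ y)
    Xe = X * (y - X @ beta)[:, None]          # score contributions, n x k
    meat = np.zeros((k, k))
    for i in range(n):
        for j in range(n):
            if time[i] == time[j]:            # spatial correlation, same period
                d = great_circle_km(lat[i], lon[i], lat[j], lon[j])
                if d <= dist_cutoff_km:
                    w = 1 - d / dist_cutoff_km if bartlett_space else 1.0
                    meat += w * np.outer(Xe[i], Xe[j])
            elif unit[i] == unit[j]:          # serial correlation, same location
                lag = abs(time[i] - time[j])
                if lag <= lag_cutoff:
                    meat += (1 - lag / (lag_cutoff + 1)) * np.outer(Xe[i], Xe[j])
    V = bread @ meat @ bread
    return beta, np.sqrt(np.diag(V))
```

With a tiny distance cutoff and a zero lag cutoff this collapses to ordinary heteroskedasticity-robust (White) standard errors, which makes a handy sanity check. The real implementations are far more careful (and faster); the double loop is purely illustrative.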


STATA VERSION 2 UPDATE 2013: Thanks to my field-testing team (Gordon McCord and Kyle Meng), several bugs in the code have been fixed and additional options added. Most useful changes: the code now correctly accepts the wildcard "*" when specifying variables, and the option "dropvar" can be used to drop variables that Stata regards as 'too collinear' (contributed by KM).

STATA VERSION 3 UPDATE 2018: Thanks to some careful bug-chasing by Mathias Thoenig and Jordan Adamson, I've fixed a single line of code that led to a miscalculation of the weights for autocorrelation across within-unit panel observations. This error did not affect the Matlab implementation, nor did it affect calculations of the influence of spatial autocorrelation within contemporaneous observations or adjustments for heteroskedasticity. The previous version (v2) sometimes over- or under-stated the standard error adjustment due to autocorrelation; the magnitude of this effect depends on both the serial-correlation structure of the data and the maximum lag length parameter (lagcutoff) set by the user. For very long lag lengths (e.g. infinity) the standard error estimates are too small, but for shorter lag lengths the bias may be of either sign.

My (currently ad-hoc) help file for the Stata script is below the fold. The Matlab code has an associated help-file embedded.

Spatial data analysis resources

A nice site at Carleton College lists many resources for spatial data analysis (books, programs, data, etc).  Be sure to explore the menu bar on the left, since not all the links are visible initially.

It's been added to the page of Meta-resources.

6.13.2010

Free online JPAL lectures about conducting randomized field trials

Randomized experiments have set a new standard for establishing causality in development economics. In an evaluation process that looks very similar to clinical trials in medicine, outcomes for individuals in a randomly assigned "treatment group" that receives a social policy are compared against outcomes for individuals in a "control group" that does not. The argument for this kind of work is that (1) the effects of social policies must be measured carefully if we want to understand whether they are worth implementing at a national level, and (2) historical analysis of policies is often insufficient because there is no "control group" against which outcomes can be compared. For a good example of this kind of research, read this Tech Review article on Ben Olken's work on democracy in Indonesia.

Not all modern work in development is of this type, since not all interventions or questions are suitable for the method (e.g. my own work studying the impact of hurricanes on development cannot be), but there is a trend toward relying on it more and more.


The Abdul Latif Jameel Poverty Action Lab (JPAL) {co-founded by Esther Duflo, the subject of an earlier post by Jesse} is a group of researchers who conduct randomized field trials of various policy interventions in poor communities. While many people do this kind of work, the folks at JPAL have honed the method and mastered the logistics (which are tremendously complex). A course taught by several people at JPAL for executives, about how to conduct a randomized trial (and why), has been recorded and is free online. To access it:
  1. open up iTunes (which you can download for free here)
  2. go to the iTunes Store (there is a tab on the left for it)
  3. search "poverty action lab executive training"
All the lectures should show up and are free to download.

6.10.2010

How wise is a cellulosic ethanol mandate?

The Tech Review describes recent "slow progress" on the development of commercial cellulosic ethanol.  Apparently, some companies are moving ahead with strategies to scale up:
ZeaChem, based in Lakewood, CO, has begun construction of a 250,000-gallon-per-year demonstration plant in Boardman, OR, that will produce chemicals from sugar and eventually ethanol from wood and other cellulosic materials...
The company's strategy for making the business a financial success and attracting investment for commercial scale plants is to start by producing ethyl acetate, which "takes about half the equipment and sells for twice the price of ethanol, so it's an ideal starter product," he says. Other biofuels companies are taking a similar approach--looking for high value products to offset high costs, at least initially. ZeaChem plans to incorporate the technology into an existing corn ethanol plant for commercial production of ethyl acetate. "If all goes well, that plant could be in operation by the end of next year," he says. A stand-alone commercial cellulosic ethanol plant would follow. It could switch between selling acetic acid, ethyl acetate, or ethanol, depending on the market.
If you're at all confused about why this is happening when we don't yet know how to make cellulosic ethanol without a net energy expenditure, the answer is government support:

A renewable fuel standard signed into law in late 2007 requires the use of 100 million gallons of cellulosic ethanol in the United States this year and will ramp up to 16 billion gallons by 2022. But so far no commercial plants are operating, according to the Biotechnology Industry Organization (BIO), a leading trade group representing biofuel companies. The U.S. Environmental Protection Agency announced in February that it was scaling back the mandates to just 6.5 million gallons, which could be supplied by existing small-scale demonstration plants and new plants expected to open this year. That's up from approximately 3.5 million gallons produced in 2009.
Is it wise for our government to be driving this kind of investment? One concern is that biofuels in general lead to crops being grown for energy, which necessarily raises the prices of the food crops they displace. In a recent working paper, Wolfram Schlenker and Michael Roberts try to identify (using weather shocks) the effect of the biofuel mandate on world food prices. They predict that the biofuel mandate, as it stood at the time of writing, would raise world food prices by 20-30%. Further, they argue that since agricultural production will expand to meet this demand, and expansion of cultivated land releases CO2 in net, the policy may not even reduce GHG emissions.

This second point reminds me of a blog post I wrote two years ago, where I argued that innovations in the technology for the conversion of cellulose into fuel may have dramatic externalities.  If biomass that is usually considered "useless" suddenly has a shadow price, the strategic incentives to harvest entire ecosystems may be dangerously strong.  

The government should almost certainly not be subsidizing the development of this technology; one can even argue (depending on how risk-averse you are) that it should be taxing it for the risk we all bear should it succeed.

6.06.2010

1980

This is surreal to me. This is an excerpt from an article in the New York Times on April 12, 1980.

I repeat. 1980. Thirty years ago. The internet was still science fiction then.

It is about the oil spill caused by the Ixtoc I oil well in the Gulf of Mexico, which spewed 140 million gallons over several months in 1979-80:
History's largest oil spill has been a fiasco from beginning to end. Human errors and ineffective safety equipment caused the blowout, and none of the "advanced" techniques for plugging the well or recapturing the oil worked satisfactorily thereafter. The gusher ran wild for nearly ten months....
The enduring question is whether a devastating blowout could occur in our own offshore waters....
A second question: Could a blowout in American waters be quickly capped and cleaned up? Ixtoc I shows that control technology is still quite primitive. Attempts were made to jam the pipe back into the hole; a large cone was lowered over the well to capture oil and gas; and steel and lead balls were dropped down the well to plug it. Nothing worked. Relief wells to pump in mud failed for months to reach their target... The mop-up techniques did not function effectively either....
Most Americans would accept risking such blowouts to find oilfields as rich as Mexico's.  But the lessons of Ixtoc I can help reduce the risks.
If you don't know why this is darkly funny, look over the list of strategies BP has employed to stop the current spill.  There is obviously something wrong with the incentives to innovate emergency/cleanup technology.  In 1980, the techniques we are still using today were being joked about as "advanced."

The sheer volume of innovation that has occurred in the last 30 years across an uncountable number of research fields is astonishing and a tremendous feat of human ingenuity.  The fact that effectively zero innovation has occurred in oil-drilling-catastrophe-management suggests that nobody believed there was a sufficient payout to warrant such investments.  Since these catastrophes are massive public-bads, and the cost of the externality is almost certainly not internalized by the oil-companies, then standard economic theory would suggest the government needs to create incentives to invest in these technologies.  The problem is doubly difficult because we often think that research in profitable industries is under-supplied, since researchers cannot capture the full value of their work.  So policies to reduce our risk must counteract both a public-bad problem and an innovation problem.

The obvious tendency is to heap blame on BP. But the current situation is a result of 30 years (at least) of improper policy. Whether the US government had sufficient information in 1980 to realize its policies motivating innovation [in these technologies] were too weak, I cannot say. But this article seems to suggest that perhaps it did.

6.05.2010

(Oil spill & windmill) x (technology & management)

The BP spill is tragic, but it's gotten me learning about oil drilling technology and history. Some nice sites by the NYTimes, which I think are generally much more interesting than their coverage of the political drama:

A graphical explanation of how the well was supposed to be plugged.

A reference describing the different attempts at stopping the leak.

A timeline of historical oil spills, with actual newspaper clippings from the events and descriptions of the legislation that followed each.

Also quite depressing is this innovative way of communicating to people exactly how large an area is affected by the slick.

After all of this, I think (or at least hope) that we're all more educated about the technology underlying our massive energy infrastructure.

Trying to be a pragmatist, I keep looking at these diagrams (especially the ones that show how far underwater, and underground, the leaking well is) and thinking to myself, "is this really easier than windmills, seriously?" It's understandable (at least from a micro-economist's perspective) that power companies with massive amounts of sunk capital would misrepresent the costs of a transition to cleaner technology, even if it would have large positive externalities. Interestingly, on this point, a recent NREL study suggests that the costs of stabilizing energy supply using wind and solar power have been exaggerated.

A major argument against more rapid investment in renewables, offered by many energy experts, has been that it is difficult to ensure a steady supply of energy when weather and daylight fluctuate. To me, this always seemed dubious, since the US is enormous and there are large anti-correlations in wind and cloud-cover across the country. If panels and wind farms were distributed intelligently across space, we could take advantage of these known structures to deliver a smoother power supply to the country. All it would take is some planning, knowledge of meteorology and some cooperation among utilities. This is exactly the finding of the NREL study: 35% of energy could be supplied by renewables without installing substantial backup capacity. All it would take is a little coordination.
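The diversification logic here is just portfolio-variance arithmetic: pooling negatively correlated supplies shrinks total fluctuations. A toy simulation (all numbers are made up for illustration; nothing here comes from the NREL study):

```python
import numpy as np

rng = np.random.default_rng(42)
n_hours = 10_000

# Two hypothetical regional wind outputs (capacity factors) whose
# fluctuations around a common mean are negatively correlated.
mean_cf, var, rho = 0.35, 0.01, -0.6
cov = [[var, rho * var], [rho * var, var]]
east, west = rng.multivariate_normal([mean_cf, mean_cf], cov, size=n_hours).T

pooled = 0.5 * (east + west)
# Var(pooled) = var * (1 + rho) / 2, so the sd falls from 0.10 to ~0.045
print(round(np.std(east), 3), round(np.std(pooled), 3))
```

With perfectly anti-correlated regions (rho = -1) the pooled supply would be constant; real geography delivers something in between.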

5.27.2010

Study Hacks on Esther Duflo

Cal Newport, the guy behind my favorite grad-student philosophy / self-help / how-to-be-awesome website, Study Hacks, just posted a pretty interesting article on Esther Duflo: MIT econ professor, vanguard of the randomization movement in development economics, recent Clark medalist, undergrad professor of Sol's, and all-around awesome professor. Specifically, he uses her as an example of the dos and don'ts of finding one's "Life's Mission":


This is what complicates the mission to find a mission. On the one hand, to discover them (and recognize them), you need a non-conformist’s confidence and a dedication to exploration. Duflo, for example, was a notorious searcher. Among other acts of defiance, she took time off in the middle of her studies to go work on practical economic problems in Moscow (where she met Jeffrey Sachs). When she took Banerjee’s class she was actively seeking an outlet for her intellectual energies.
On the other hand, this sense of exploration has to be backed with competence in the relevant field. And developing this competence has a decidedly unexciting, conformist feel to it — a process replete with hard focus and resistance to distraction.
What I find interesting about this (aside from the various compelling tidbits about Duflo's life) is how much I think it has in common with doing good interdisciplinary work. People love to think about doing work that spans fields, and at the same time a lot of the interdisciplinary work that's done is rightfully lambasted as being unrigorous. I think that's because to do it right one has to go through the drudgery of learning the ins and outs of not one but two fields. There's the flight of inspiration that comes from thinking up great cross-field paper topics ("Can we use monsoon strength as an instrument for trade costs?") but there are also the long stretches of building competence in relevant areas ("Hm. Guess I better learn about the global climate circulation. And trade.")

Viewed in this light, one can think of Duflo's work as in many ways combining the statistical techniques of epidemiology (or perhaps more rightly, pharmacology) with the concerns of development economics. Economists of my generation I think largely take for granted the progression towards randomized trials and the emphasis on identification (though even that's changing fast; see, for example, the Spring 2010 JEP), but incorporating those techniques meant taking the field and broadening its boundaries, mixing it with the techniques and concerns of another. It's nice to hear the back story of how one of the major players decided to head down that route, and to appreciate just how much skill, drive, and chutzpah it took to do so.

And yes, I just noticed that Study Hacks' byline is "Demystifying Sustainable Success." How apt.

5.24.2010

Migrating birds on the front page of the NY Times

I'm very impressed and surprised by the coverage. And it's a great story about innovations in scientific research. I didn't even realize, when I was a kid watching wildlife videos and learning about the Arctic tern, that we couldn't actually observe them migrate. We could only see where they started and ended. But now, with smarter tracking technology, we can observe their entire trajectory.

The whole story reminds me of other innovations in observational technology that slingshotted an entire field. For example, the invention of GFP, which can make portions of tissue glow, led to enormous advances in biology and related fields. (Martin Chalfie, one of the inventors who won the Nobel for it, is here at Columbia. I know because I saw him explain the idea to a gymnasium full of kids with a [humorously] malfunctioning flashlight.) I think that a lot of times, when we learn science in [grad] school, there is so much focus on theory, mechanisms and methods that we sometimes forget that the starting point of all science is observation.

PS. If you're like me and did a double take at the article's nonchalant description of the groundbreaking technology:
Geolocators ... just record changing light levels. If scientists can recapture birds carrying geolocators, they can retrieve the data from the devices and use sophisticated computer programs to figure out the location of the birds based on the rising and setting of the sun.
You should check out the website of the geolocator manufacturer Lotek, where they post scientific papers on the method. Here's an abstract from "An advance in geolocation by light" by P. A. Ekstrom (2004):
A new analysis of twilight predicts that for observations made in narrow-band blue light, the shape of the light curve (irradiance vs. sun elevation angle) between +3° and -5° (87° to 95° zenith angle) has a particular rigid shape not significantly affected by cloudiness, horizon details, atmospheric refraction or atmospheric dust loading. This shape is distinctive, can be located reliably in measured data, and provides a firm theoretical basis for animal geolocation by template-fitting to irradiance data. The resulting approach matches a theoretical model of the irradiance vs. time-of-day to the relevant portion of a given day's data, adjusting parameters for latitude, longitude, and cloudiness. In favorable cases, there is only one parameter choice that will fit well, and that choice becomes the position estimate. The entire process can proceed automatically in a tag. Theoretical estimates predict good accuracy over most of the year and most of the earth, with difficulties just on the winter side of equinox and near the equator. Polar regions are favorable whenever the sun crosses -5° to +3° elevation, and the method can yield useful results whenever the sun makes a significant excursion into that elevation range. Early results based on data taken on land at 48°N latitude confirm the predictions vs. season, and show promising performance when compared with earlier threshold-based methods.
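For a feel of how light-level geolocation works at all, here is a sketch of the simpler threshold method that Ekstrom's template-fitting approach improves on: longitude comes from the timing of solar noon, latitude from day length. The function name and interface are my own; it ignores refraction and twilight shape, and it breaks down near the equinoxes, exactly the failure mode the abstract mentions:

```python
import math

def threshold_geolocate(sunrise_utc, sunset_utc, solar_declination_deg):
    """Estimate (lat, lon) from light-logger sunrise/sunset times in decimal UTC hours."""
    # Longitude: solar noon shifts 15 degrees of longitude per hour from 12:00 UTC.
    solar_noon = (sunrise_utc + sunset_utc) / 2
    lon = (12.0 - solar_noon) * 15.0
    # Latitude: day length sets the sunrise hour angle H, where
    # cos(H) = -tan(lat) * tan(declination).
    H = math.radians((sunset_utc - sunrise_utc) / 2 * 15.0)
    decl = math.radians(solar_declination_deg)
    lat = math.degrees(math.atan(-math.cos(H) / math.tan(decl)))
    return lat, lon
```

At the equinox the declination is near zero and day length is ~12 hours everywhere, so latitude becomes unidentifiable; the template-fitting method in the paper is designed to soften exactly that problem.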

5.22.2010

The achievements and under-achievements of our species

A friend sent me this spectacular time lapse video of the space shuttle preparation in an email. If you haven't seen it, you can watch it right here.

5.19.2010

NCDC: Warmest April Global Temperature on Record


GISS was already saying that 2010 would likely be the warmest year on record a while ago due partly to the current El Niño (see, for example, section 6 in the Current GISS Global Surface Temperature Analysis from January) and observation data are now bearing this out. NOAA's NCDC just released their April State of the Climate report, and not only was April 2010 the warmest April on record, but 2010's Jan-Apr four month span was the warmest on record as well. A gridded temperature anomaly map is at right.

There are a couple of things about this that are interesting. The first, which relates to some of my own work looking at how people internalize new information about climate change, is that record events seem to be one of the most straightforward ways of conveying to people that the climate is changing. "Climate" is, after all, the general distribution of various properties of the ocean and atmosphere over time, not any specific value at any specific time. Difficulty with the notion that climate change is a shift in this distribution often seems to underlie opposition to the theory of anthropogenic climate change; this is what gives rise to, say, your uncle's comment that a cold winter day proves that Al Gore is a liar.

Records, however, seem to be one of the ways of talking about distributions that people intuitively understand. Everyone understands that Usain Bolt is by objective measures a very fast runner because he holds the world records for the 100m and 200m dashes. Similarly, people intuitively understand that if this is the "hottest year on record" then the climate is in a hot state. Moreover, if we get several hottest years on record in a row, then the climate is probably getting hotter.
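Records are also statistically informative: in a stationary climate the nth observation is a record with probability 1/n, so a century of stationary temperatures should produce only about five record years (1 + 1/2 + ... + 1/100 ≈ 5.2), while a warming trend produces noticeably more. A quick simulation (the trend and noise values are illustrative assumptions, not climate data):

```python
import numpy as np

rng = np.random.default_rng(0)

def count_records(series):
    """Count observations that exceed every previous observation."""
    best, n_rec = -np.inf, 0
    for x in series:
        if x > best:
            best, n_rec = x, n_rec + 1
    return n_rec

n_years, n_sims = 100, 2000
stationary = np.mean([count_records(rng.normal(0, 1, n_years))
                      for _ in range(n_sims)])
warming = np.mean([count_records(rng.normal(0, 1, n_years)
                                 + 0.05 * np.arange(n_years))
                   for _ in range(n_sims)])
print(stationary, warming)   # roughly 5.2 records vs. noticeably more
```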

So what? Well, given the above, it's probably not much of a surprise that studies like Yale's F&ES Project on Climate Change have been finding a downward trend in beliefs about climate change. After a slew of record hot years, the planet has had a host of second- and third-hottest years since 2005, not exactly the sort of news that makes headlines, even if by historical standards the globe is still hot. Should 2010 shape up, as Hansen has repeatedly predicted, to be the hottest year on record, we'll probably see just as many news stories talking about how "global warming is back" as we've seen recent stories about the "cooling trend" of the past few years.

This, in turn, makes for an interesting counterfactual political economy question: what if Copenhagen had been scheduled for this December rather than last? Not to put too high a hope on the established mechanisms, but maybe our outcome would have been a lot more positive if delegates had arrived knowing that they had just lived through the hottest calendar year on record...

5.18.2010

Time Lapse of Eyjafjallajokull Erupting

Want to see what hundreds of tons of aerosols being ejected into the atmosphere looks like in time lapse video? Of course you do. First commenter to suggest a way to use this as an instrument for something worthwhile gets all the fermented shark they can eat.

Hi, I'm Jesse, and I'm now blogging with Sol.



Iceland, Eyjafjallajökull - May 1st and 2nd, 2010 from Sean Stiegemeier on Vimeo.

Data transfer from Matlab to Stata and reverse

Zip file with Matlab code to export datasets to Stata or import them from it, for use in conjunction with Stata's "insheet/outsheet" commands. Importing data from Stata into Matlab is restricted to numeric variables; the export direction has no such restriction.
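The zip's Matlab code isn't reproduced here, but the insheet/outsheet round trip it targets is just a header-plus-numbers CSV file under the hood. For illustration, here's a rough sketch of that idea in Python rather than Matlab (the filename, variable names, and values are all made up for the example):

```python
import csv

# Hypothetical numeric dataset: variable names plus rows of values.
names = ["gdp", "temp"]
rows = [[1.2, 14.5], [3.4, 15.1]]

# Export: write a header row followed by the numeric rows.
# Stata could then load the file with:  insheet using for_stata.csv, comma clear
with open("for_stata.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(names)
    writer.writerows(rows)

# Import (the reverse direction): read the header back, then parse
# every remaining cell as a number -- which is why the Stata-to-Matlab
# direction is limited to numeric variables.
with open("for_stata.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    data = [[float(x) for x in row] for row in reader]
```

In either language the essential design choice is the same: the header row carries the variable names, and everything below it is assumed to be numeric.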

3.22.2010

Sustainable Development at Columbia University

We've launched the new website for the Sustainable Development Doctoral Society, the student society of our PhD program at Columbia University:

www.columbia.edu/cu/sdds

We're aiming to maintain it well with active research updates.

3.12.2010

Blindness interventions

This is a pretty inspiring story of field work for children with treatable blindness in India, and of the neurological research that goes alongside the treatments:

http://www.ted.com/talks/pawan_sinha_on_how_brains_learn_to_see.html

and another great story of work in the field on blindness (my classmate Mark Orrs is working with him).

http://adventure.nationalgeographic.com/2009/12/best-of-adventure/geoff-tabin/1

2.07.2010

Nomads in India, redistribution and Nash equilibria

This month's NGM has a nice article on nomadic groups in India. The basic story is that there are castes of nomads (an estimated 80M people total) that have traditional nomadic lifestyles and they are apparently "trapped": they seem unwilling to settle, the gov't has difficulty providing them with services unless they settle, and settled populations are unwilling to let them settle. There are the usual undertones of discrimination and bureaucracy, but otherwise this seems like an unfortunate example of a classical Nash problem interfering with progressive redistribution.

I'm not entirely convinced by the gov't's argument that nomads must settle in order to receive political rights or some forms of gov't benefits, such as basic healthcare, but then I've also never tried administering an Indian municipality.

2.06.2010

Climate change and extremes

A lot of folks ask me what's going to happen with tropical cyclones, heat waves, etc. under climate change. If you're not someone with any atmospheric science background, but want a single, easy few-page article to read, I'd recommend this one.

One of my advisors, Adam Sobel, gave me a copy of "Climate Change: Picturing the Science", a coffee table book on climate change that he wrote a chapter for. His chapter on what we expect for extremes is very nice. It's honest about what we know or don't, and why, but it doesn't expect you to have thought much about the subject before. You can read it on google books here.

1.18.2010

Haiti

One of my supervisors, John Mutter, is a seismologist and has a recent post on the CNN blog in response to the earthquake in Haiti.

"Earthquakes don’t kill people; buildings do. And the poorest constructed buildings are inevitably home to the very poorest people. Homes and other structures built way out of safe building code – if codes even exist or are known about, or minimally enforced after the building inspector is bribed for a permit – are built by people who lack the resources to build minimally safe structures if they could."

I think that both he and I find nothing surprising about what has happened in Haiti. This doesn't make it any less tragic, but rather more tragic. The fact that this was at all foreseeable suggests that we can all be guilty of not having helped mitigate risk in advance. The Heifer Project is an organization that promotes development by letting wealthy Americans buy a cow or goat for a family in a poor country. Maybe a similar organization could enable an American to donate money to pay for rebar in a Haitian home during reconstruction.