Sol and I just published an article in PNAS in which we reexamine a controversy in the climate-conflict literature. The debate centers on two previous PNAS articles: the first, by Burke et al. (PNAS, 2009), claims that higher temperatures increase conflict risk in sub-Saharan Africa; the second, by Buhaug (PNAS, 2010), disputes the earlier study.
How did we get here?
First, a bit of background. Whether climate change causes societies to be more violent is a critical question for our understanding of climate impacts. If climate change indeed increases violence, the economic and social costs of climate change may be far greater than previously thought, further strengthening the case for reducing greenhouse gas emissions. To answer this question, researchers in recent years have turned to historical data, asking whether violence has responded to changes in the local climate. Despite the increasing volume of research (summarized by Sol, Marshall Burke, and Ted Miguel in their meta-analysis published in Science and the accompanying review article in Climatic Change), this question remained somewhat controversial in the public eye. Much of this controversy was generated by this pair of PNAS papers.
What did we do?
Our new paper takes a fresh look at these two prior studies by statistically examining whether the evidence provided by Buhaug (2010) overturns the results in Burke et al. (2009). Throughout, we examine the two central claims made by Buhaug:
1) that Burke et al.'s results "do not hold up to closer inspection" and

2) that climate change does not cause conflict in sub-Saharan Africa.

Because these are quantitative papers, Buhaug's two claims can be answered using statistical methods. What we found was that Buhaug did not run the appropriate statistical procedures needed for the claims made. When we applied the correct statistical tests, we find that:

a) the evidence in Buhaug is not statistically different from that of Burke et al. and

b) Buhaug's results cannot support the claim that climate does not cause conflict.

A useful analogy
The statistical reasoning in our paper is a bit technical, so an analogy may be helpful here. Burke et al.'s main result is equivalent to saying "smoking increases lung cancer risks roughly 10%". Buhaug's claims above are equivalent to stating that his analysis demonstrates that "smoking does not increase lung cancer risks" and furthermore that "smoking does not affect lung cancer risks at all".
What we find, after applying the appropriate statistical method, is that the only equivalent claim that can be supported by Buhaug's analysis is "smoking may increase lung cancer risks by roughly 100%, or may decrease them by roughly 100%, or may have no effect whatsoever". Notice that this is a very different statement from what Buhaug claims to have demonstrated in 1) and 2) above. Basically, the results presented in Buhaug are so uncertain that they do not reject a zero effect, but they also do not reject the original work by Burke et al.
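The logic of this comparison can be sketched numerically. Below is a minimal illustration using only the Python standard library; all numbers are hypothetical and chosen for exposition (they are not taken from Burke et al. or Buhaug). The point is that a sufficiently imprecise estimate has a 95% confidence interval containing both zero and the original estimate, so it rejects neither hypothesis.

```python
# Hypothetical numbers for illustration only, not from either paper.
original_estimate = 0.10      # e.g., "a ~10% increase in risk"

noisy_estimate = 0.02         # re-analysis point estimate
noisy_std_error = 0.55        # very large standard error

# 95% confidence interval for the noisy re-estimate (normal approximation)
lower = noisy_estimate - 1.96 * noisy_std_error
upper = noisy_estimate + 1.96 * noisy_std_error

# The interval contains zero: we cannot reject "no effect"...
cannot_reject_zero = lower <= 0.0 <= upper

# ...but it also contains the original estimate: we cannot reject it either.
cannot_reject_original = lower <= original_estimate <= upper

print(f"95% CI: [{lower:.2f}, {upper:.2f}]")
print("Fails to reject zero:", cannot_reject_zero)
print("Fails to reject the original estimate:", cannot_reject_original)
```

With these made-up numbers the interval spans roughly [-1.06, 1.10], i.e., "may increase risks by roughly 100%, may decrease them by roughly 100%, or may have no effect": wide enough to be consistent with everything, and therefore evidence against nothing.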
Isn’t Buhaug just showing Burke et al.’s result is “not robust”?
In statistical analyses, we often seek to understand if a result is “robust” by demonstrating that reasonable alterations to the model do not produce dramatically different results. If successful, this type of analysis sometimes convinces us that we have not failed to account for important omitted variables (or other factors) that would alter our estimates substantively.
Importantly, however, the reverse logic does not hold: "non-robustness" is not a conclusive (or logical) result. Obtaining different estimates after altering the model does not necessarily imply that the original result is wrong, since it might be the new estimate that is biased. Unstable results simply indicate that some (or all) of the models are misspecified; the analyst is not yet working with the right statistical model.
There can be only one "true" relationship between climate and conflict: it may be a coefficient of zero, or a larger coefficient consistent with Burke et al., but it cannot be all of these coefficients at the same time. If models with very different underlying assumptions produce dramatically different estimates, this suggests that all of the models (except perhaps one) are misspecified and should be thrown out.
A central error in Buhaug is his interpretation of his findings. He removes critical parts of Burke et al.’s model (e.g. those that account for important differences in geography, history and culture) or re-specifies them in other ways and then advocates that the various inconsistent coefficients produced should all be taken seriously. In reality, the varying estimates produced by Buhaug are either due to added model biases or to sampling uncertainty caused by the techniques that he is using. It is incorrect to interpret this variation as evidence that Burke et al.’s estimate is “non-robust”.
So are you saying Burke et al. was right?
No. And this is a very important point. In our article, we carefully state:
“It is important to note that our findings neither confirm nor reject the results of Burke et al. Our results simply reconcile the apparent contradiction between Burke et al. and Buhaug by demonstrating that Buhaug does not provide evidence that contradicts the results reported in Burke et al. Notably, however, other recent analyses obtain results that largely agree with Burke et al., so we think it is likely that analyses following our approach will reconcile any apparent disagreement between these other studies and Buhaug.”

That is, taking Burke et al.'s result as given, we find that the evidence provided in Buhaug does not refute Burke et al. (the central claim of Buhaug). Whether Burke et al. was right about climate causing conflict in sub-Saharan Africa is a different question. We've tried to answer that question in other settings (e.g. our joint work published in Nature), but that is not the contribution of this analysis.
Lastly, we urge those interested to read our article carefully. Simply skimming the paper, hunting for statistically significant results, would miss its point. Beyond helping to reconcile this prior controversy, our broader hope is that the statistical reasoning underlying our work becomes more common in data-driven analyses.