Consequentialism and Chilcot

The Chilcot Report, also known as the Iraq Inquiry, is the British Government’s official inquiry into the Iraq War, conducted in order to “establish…what happened and to identify the lessons that can be learned.”[1] The report dropped last Wednesday, July 6th, and was pretty, well, damning. Damning, but not really surprising or particularly revelatory; the conclusions reached by the inquiry directly echo the perspectives of writers like Daniel P. Bolger, Emma Sky, and Thomas E. Ricks, whose books I read this spring. The report provides exhaustive clarity, bringing to the fore a body of evidence daunting in size and scope. To call it merely comprehensive would be woefully inapt, an insult to its 150-page executive summary. While it does not diverge significantly from past analyses, it rigorously validates them, its whopping 2.3 million words (roughly four times the length of War and Peace)[2] joining a chorus of previous works. The reaction to the Chilcot Report was, to some extent, an elaborate exercise in political tribalism, with responses dividing along ideological lines. But most accept what appears to be the Inquiry’s central thesis: the war was an elaborate and expensive mistake. What a surprise! As Max Boot, a conservative military historian and foreign-policy analyst (and Managing Editor Ethan Gelfer’s celebrity crush), admits, “Not even the war’s staunchest supporters would deny at this late date the basic thrust of the inquiry’s conclusions.”[3]

By far the most interesting of the perspectives offered on the war, to me, was Boot’s articulation of a peculiar flavor of consequentialism. “What separates the ‘good’ wars from the ‘bad’ ones,” Boot argues, “is not in how countries get into them but, rather, how they get out of them.”[4] He has echoed this belief on Twitter.

Boot’s framework for passing value judgments on wars rests on shaky ground. Implicit in it is the assertion that we should look at wars through this lens simply because we have done so in the past. But the fact that something exists in the status quo does not mean it is correct or effective. Pointing to this type of judgment’s past use, without elucidating the reasons for that use, is no basis for accepting it.

But Boot’s ultimate conclusions, and moreover the philosophy undergirding those conclusions, are problematic. The idea that we can judge an act simply by its consequences, known as consequentialism, may be an attractive moral doctrine, but it is a poor guide for policy and policymaking, because it completely ignores the importance of probability in decision-making. Furthermore, contrary to Boot’s assertions, for the purpose of gleaning insight from the Iraq War, the decision to enter it is just as important as the conduct of the war itself.

Let’s ask ourselves the question that Shadi Hamid, a Brookings scholar, posed in a Vox article on the merits of the Libyan intervention: “If Iraq had quickly turned out ‘well’ and become a relatively stable, flawed, yet functioning democracy, would that have retroactively justified an unjustified war? Presumably not, even though we would all be happy that Iraq was on a promising path.”[5] The course of the war was not inexorable: had we not pursued de-Baathification, had the CPA done more to empower the Iraqi people, had horrible atrocities like Abu Ghraib not occurred, and had we not backed Maliki in the election, the state of Iraq might be entirely different. But even in that case, the decision to enter the war would not have been the correct one. If the decision to invade was most likely to result in a destabilized region and a negative outcome, a fortuitously positive outcome would not have made the decision valid.

Suppose you were betting on a horse race, and you put your money on a horse very unlikely to win, and, by golly, by some freak of nature, you end up winning. Would it be sensible to bet on that horse again, even though it is still unlikely to win the next race? The same question applies to the Iraq War: if we were presented with the decision to invade again, would we take it? Given that the report demonstrates that British intelligence reliably predicted the collapse of the country and the disorder that followed, I would say we ought not to. To me, this is important, because the Chilcot Report is nominally about learning and being better prepared for the future, not about partisan finger-pointing. Though, naturally, the report’s conclusions will be appropriated for use in the latter.
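To put the intuition in rough numbers, here is a minimal sketch of the horse-race bet as an expected-value calculation. The win probability and payout are invented for illustration; nothing here comes from the report itself.

```python
# Illustrative only: the odds and payout are invented assumptions.
# A bet can pay off once and still be a bad bet if its expected value is negative.

p_win = 0.10    # assumed probability the long-shot horse wins
payout = 5.0    # assumed return per $1 staked if it wins
stake = 1.0

expected_value = p_win * (payout - stake) + (1 - p_win) * (-stake)
print(f"Expected value per $1 bet: ${expected_value:.2f}")
# -> Expected value per $1 bet: $-0.50
# The bet loses money on average, even though any single running of it might win.
```

A fluke win does not change the arithmetic: the decision was bad when it was made, and it remains bad as a guide to the next decision.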

To be clear, I have not read the entirety of the report. I am not absolutely insane, I do not have the months required to read it in full, and Popular Discourse does not have an army of interns the way major publications do. In this article, I rely extensively upon the summaries of organizations like CFR, Vox, the Washington Post, The Telegraph, and other news outlets, and would like to acknowledge their fine work here. The parts that I have read, however, are remarkable, at some points bizarre and at others tragic. And if the report says two things about the decision to invade, they are these: the false pretenses made it immoral, and the faulty policymaking made it a bad decision.

[1] http://www.iraqinquiry.org.uk/the-inquiry/

[2] http://www.telegraph.co.uk/news/2016/06/28/chilcot-inquiry-when-is-the-report-being-published-and-why-has-i/

[3] https://www.commentarymagazine.com/foreign-policy/middle-east/iraq/chilcot-missed-iraq/

[4] Ibid.

[5] http://www.vox.com/2016/4/5/11363288/libya-intervention-success


UK General Election: Why the Models Were Wrong

The United Kingdom went to the polls on May 7th to vote in a general election that would decide the occupant of 10 Downing Street. Would it be the incumbent Conservative David Cameron or Labour’s Ed Miliband? There was much excitement in the news media prior to the election, as all of the major polls had the two politicians and their respective coalitions within one percentage point of each other, even down to election night. However, all of that polling, and almost every statistical forecast by almost every modeler, called the election wrong. The final results had the Conservatives winning not their predicted 270 seats but 330, securing a slim majority without a coalition with the Liberal Democrats. Speaking of the Liberal Democrats, they are now similar to the Northern White Rhinoceros: not technically extinct, but requiring many decades of breeding in captivity before they are seen in the wild again. In fact, their performance was so bad that the party leader, Nick Clegg, resigned. The party’s performance was worse than even the worst prediction by any pollster. Labour’s Ed Miliband joined Nick Clegg in resigning, as did UKIP’s divisive leader Nigel Farage. All three leaders were confronted with disappointing vote totals. Why were so many polls wrong? The answer lies in a failure to adjust and in not giving proper weight to the likelihood of error.

The most accurate forecast for the UK general election was by Steve Fisher at ElectionsEtc. FiveThirtyEight (538), one of the most prominent forecasters of the election, adjusted polling numbers to account for polls overstating changes from the last election (they believed the polls would overstate Labour’s recovery from the previous election’s failure). This adjustment meant that the Tories (Conservatives) were to win by 1.6 percentage points, as per 538’s predictions. What Fisher did differently from 538 and other major forecasters was not only adjusting for polls overstating changes, but also making additional adjustments that were party-specific. This adjustment was based on historical data of last-minute swings, such as the infamous 1992 election, in which pollsters reported a dead heat between the two parties even though the Tories came out a whopping 7.5 percentage points ahead. This party-specific adjustment shifted Fisher’s 95% confidence interval above those of other forecasters.
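To make the mechanics concrete, here is a minimal sketch of how such a two-step adjustment could work. The shrinkage factor, party offsets, and poll numbers below are invented stand-ins, not Fisher’s or 538’s actual parameters.

```python
# A toy version of the two adjustments described above. All numbers are
# invented for illustration; they are not Fisher's or 538's actual values.

previous_result = {"con": 36.1, "lab": 29.0}   # 2010 vote shares (%)
current_polls   = {"con": 33.5, "lab": 33.5}   # hypothetical polling average

# 1. Shrink the polled swing toward the last election, since polls have
#    historically overstated change between elections.
shrinkage = 0.7   # assumed: keep 70% of the polled swing

# 2. Apply a party-specific offset reflecting historical misses like 1992,
#    when the Tories outperformed their polls.
party_offset = {"con": +1.0, "lab": -1.0}      # assumed offsets (pp)

forecast = {}
for party in current_polls:
    swing = current_polls[party] - previous_result[party]
    forecast[party] = previous_result[party] + shrinkage * swing + party_offset[party]

for party, share in forecast.items():
    print(f"{party}: {share:.1f}%")
# con: 35.3%, lab: 31.2%
```

The point is the direction of the correction: a dead-heat poll, once shrunk toward the prior result and nudged by the historical pattern of Tory underestimation, becomes a Conservative lead.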

On the topic of error, 538 was quick to zero in on its flaws, as was Fisher of ElectionsEtc. National polling error was immediately identified as a suspect, as the polls did not reflect the final exit poll results, which showed a swing toward the Conservatives. The massive forecast miss of the 1992 general election was accounted for in adjustments to the polling error; however, that was apparently not enough to introduce sufficient variance into the polling data. Aside from national polling, it seems that constituency-level (akin to congressional-district) polling was not accounted for properly. This year’s constituency-level polling was sponsored by Lord Ashcroft, a Conservative billionaire who funded polls in small constituencies that would otherwise never come under statistical scrutiny. Ashcroft has acknowledged that publicized constituency-level polls might induce voters to vote tactically in Britain’s multiparty system rather than true to their beliefs, e.g., voting for the Conservative candidate upon hearing that Labour might win their constituency, instead of voting for their original far-right choice of UKIP.

Ashcroft asked two voting-intention questions in all his constituency polls. The first was the “generic” question that is widely used: “If there was a general election tomorrow, which party would you vote for?” This was followed by a more “specific” question: “Thinking specifically about your own parliamentary constituency at the next general election and the candidates who are likely to stand for election to Westminster there, which party’s candidate do you think you will vote for in your own constituency?” The Liberal Democrats did far better on the latter question, particularly where they were incumbents. Many forecasters used the latter question as a better indicator of parliamentary success, to their detriment, overestimating the Liberal Democrats’ vote share.

The final problem, which lay in every forecaster’s model, was an inadequate way of accounting for the possibility of substantial error. Simply put, no model successfully captured the fickleness of the general public and the tendency for last-minute swings. No one predicted that the level of error in the national polling data would come anywhere close to that recorded in 1992. Therefore, cumulative seat totals were calculated on the incorrect assumption that, nationwide, Labour and the Conservatives had reasonably equal levels of support, when the opposite was true.
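A quick simulation illustrates what underweighting tail risk does. The error spreads below are invented assumptions, not any forecaster’s actual parameters; the point is only how the assumed spread changes the plausibility of a 1992-sized miss.

```python
import random

# How plausible is a 1992-sized polling miss under different assumptions
# about national polling error? The error spreads are invented assumptions.

trials = 100_000
miss_1992 = 7.5   # pp by which the Tories beat the "dead heat" polls in 1992

def prob_big_miss(error_sd):
    """Probability of a national polling miss at least as large as 1992's,
    assuming normally distributed error with the given standard deviation."""
    big = sum(abs(random.gauss(0, error_sd)) >= miss_1992
              for _ in range(trials))
    return big / trials

print(f"narrow error model (sd = 2 pp): {prob_big_miss(2.0):.2%}")
print(f"wider error model  (sd = 4 pp): {prob_big_miss(4.0):.2%}")
# A model assuming small error treats 1992 as essentially impossible (~0.02%);
# widening the error band makes such a miss a live possibility (~6%).
```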

Interestingly, Labour did in fact have a one-point swing advantage over the Conservatives in key constituencies. Why, then, did this not convert into MPs as the models predicted? One of the key reasons is that Labour was fighting against first-term incumbent MPs who had won their seats from Labour in 2010. Most statisticians are aware of a phenomenon known as the “sophomore surge,” whereby new incumbents build up a substantial personal vote and thus buck any attempt at predicting voting behavior. The Conservatives benefited from this effect to maintain a grip, albeit a tenuous one, on key constituencies, particularly in Wales.
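As a rough sketch of why the surge matters for seat counts, consider a uniform one-point swing applied against first-term incumbents. The seat margins and the size of the incumbency bonus below are invented for illustration.

```python
# Why a one-point swing to Labour can fail to flip seats: first-term
# incumbents carry a personal-vote bonus ("sophomore surge").
# Margins and the bonus value are invented for illustration.

swing_to_labour = 1.0   # pp swing Labour enjoyed in key constituencies
sophomore_bonus = 2.0   # assumed pp bonus for a first-term incumbent

# Conservative margins (pp) in hypothetical key seats won from Labour in 2010
seat_margins = [1.5, 0.8, 2.5, 0.9]

flips_uniform = sum(margin - swing_to_labour < 0 for margin in seat_margins)
flips_with_surge = sum(margin + sophomore_bonus - swing_to_labour < 0
                       for margin in seat_margins)

print(f"seats flipped, uniform swing only:   {flips_uniform}")     # 2
print(f"seats flipped, with sophomore surge: {flips_with_surge}")  # 0
```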

Overall, the 2015 general election was a statistical disaster. Most forecasters called the winner correctly, predicting that David Cameron would keep his residence at 10 Downing Street; however, they over-predicted both Liberal Democrat and Labour performance. This was because they either failed to account for historical error in national polling data, weighted certain polling responses over others, or ignored established statistical phenomena such as the sophomore surge. We can only look forward to a more exciting 2016 election as Hillary Clinton faces off with the Republican primary candidate. With this experience under their belt, we can expect better and far more accurate predictions from the forecasting community.