Monday, October 24, 2016

[I had a really funny title for this post, honest]

But it played off of a Red Meat cartoon and I can't find the original online. This one has nothing to do with the topic at hand, but since I brought up the comic strip...
[Red Meat cartoon]
There are two ways of reading the collapse of traditional data journalism that started about a year ago. Neither of them had anything to do with "listening to the data." (Unless you are seriously off your meds, data never tell you anything; you draw inferences from data. That's an important distinction, but we'll have to wait till later to explore it in depth.)

What the data journalists were arguing was that, at that early stage of the election, certain other metrics and historical patterns (which, not coincidentally, happened to support the standard narrative) were far better indicators of primary results than traditional opinion polls were. This preferred set of indicators changed from week to week (depending on how you count, there were five or six of them), but the indicators always pointed to the same conclusions.

This could be looked at as a case of extended cherry-picking: rummaging through the data until you come up with a statistic that points in a direction that doesn't upend your worldview. The other way of looking at it (and the two are not entirely mutually exclusive) is that the arguments had been valid in the past and would have been valid during the primary if things were still the same. In other words, the underlying assumptions about fundamental relationships and mechanisms were breaking down.
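
To see how the cherry-picking mechanism works, here's a minimal sketch in Python. Everything in it is invented for illustration (none of these "indicators" is real); the point is just that if you scan enough noisy metrics every week, one of them will always support the narrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend none of these indicators has any real predictive power:
# each week we observe 20 metrics that are pure noise.
n_weeks, n_indicators = 10, 20
noise = rng.normal(loc=0.0, scale=1.0, size=(n_weeks, n_indicators))

# An honest analyst picks one indicator in advance and reports it,
# good weeks and bad.
honest = noise[:, 0]

# A cherry-picker scans all 20 each week and reports whichever one
# points most strongly in the preferred direction.
cherry_picked = noise.max(axis=1)

print(f"honest indicator, average reading:        {honest.mean():+.2f}")
print(f"cherry-picked indicator, average reading: {cherry_picked.mean():+.2f}")
# The honest series averages near zero; the cherry-picked series looks
# strongly and consistently positive, even though it is all noise.
```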

The second interpretation reflects much better on people like Nate Silver who were making the arguments, but its broader implications are far more troubling. If this truly is a case of previously reliable indicators losing their predictive power, then we need to start asking serious questions about the stability of all of our models.
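
Here's a minimal sketch of what that kind of breakdown looks like, again with invented numbers: an indicator that genuinely tracked the outcome in the historical record simply stops working, and nothing in the in-sample fit warns you.

```python
import numpy as np

rng = np.random.default_rng(1)

# Old regime: the indicator genuinely tracks the outcome.
x_old = rng.normal(size=200)
y_old = 1.5 * x_old + rng.normal(scale=0.5, size=200)

# New regime: same indicator, but the relationship has broken down.
x_new = rng.normal(size=50)
y_new = rng.normal(scale=0.5, size=50)

# Fit on history, then apply the historical model to the new regime.
slope, intercept = np.polyfit(x_old, y_old, deg=1)

def rmse(x, y):
    return float(np.sqrt(np.mean((slope * x + intercept - y) ** 2)))

print(f"error in the old regime: {rmse(x_old, y_old):.2f}")  # about 0.5
print(f"error in the new regime: {rmse(x_new, y_new):.2f}")  # roughly 3x worse
# The model fails silently: the in-sample fit looks as good as ever.
```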

And range of data. We definitely need to talk about range of data.
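
Here's the problem in miniature (the curve and all the numbers are invented): a relationship that looks comfortably linear inside the range you've observed can go badly wrong the moment you extrapolate outside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented ground truth: nearly linear over the observed range,
# but it bends further out.
def truth(x):
    return 2.0 * x - 0.15 * x ** 2

# Every observation we have falls in the narrow range [0, 4].
x_obs = rng.uniform(0, 4, size=50)
y_obs = truth(x_obs) + rng.normal(scale=0.3, size=50)

# A straight line fits the observed range quite well.
slope, intercept = np.polyfit(x_obs, y_obs, deg=1)

for x in (2.0, 4.0, 10.0):  # 10.0 is far outside the data
    print(f"x = {x:>4}: model says {slope * x + intercept:6.2f}, "
          f"truth is {truth(x):6.2f}")
# Inside [0, 4] the line does fine; at x = 10 it overshoots badly,
# because nothing in the data ever constrained it out there.
```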

2 comments:

  1. The modern system of presidential primaries only came into existence in 1972, so our universe of experience with this system is eleven events. It's really pretty ridiculous to think that any generalization supported by that little data can be considered reliable (the back-of-the-envelope sketch below makes the point concrete).

    That said, I'd still rather hear extrapolations from a limited data set than impressionistic predictions. The problem is that the data journalists didn't pay enough attention to the uncertainty in their predictions, and they overreached.

    Nate Silver, I think, was particularly chastened by this and won't make that mistake again. It was really out of character for him: he's usually quite careful to emphasize the uncertainty in his predictions.
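
    A rough back-of-the-envelope on the eleven-events point (the 8-of-11 pattern below is hypothetical):

```python
import math

n = 11  # modern primary cycles, 1972 through 2012

# Rule of three: if a pattern held in all 11 of 11 cycles, an
# approximate 95% upper bound on its true failure rate is 3/n.
print(f"11 for 11 -> the failure rate could still be ~{3 / n:.0%}")

# Wald interval for a hypothetical pattern that held in 8 of 11
# cycles (the normal approximation is itself shaky at n = 11,
# which is rather the point).
p = 8 / 11
half_width = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"8 of 11 = {p:.0%}, give or take {half_width:.0%}")
```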

    Replies
      1. I think the focus on sample size in this case is badly misplaced. From a modeling standpoint, the main issue in this election has been range of data, followed by model stability and what we might call the asymmetry of black swans.

      2. Dismissing this as overconfidence badly understates the train wreck that occurred in data journalism during the primary. It started with initial projections that almost perfectly reversed the actual strength of the candidates (excluding Trump, you had Walker at the top and Cruz at the bottom), went on to a parade of often mutually exclusive arguments for increasingly indefensible predictions, and ended with a series of woefully inadequate mea culpas.

      3. Nate Silver really hasn't corrected his mistake; he's just started making it in the opposite direction. Recently, FiveThirtyEight has been the outlier when it comes to overestimating Trump's chances.
