American Talk

View from Australia: Polling perils and Paul Ryan

Mitt Romney’s announcement of Wisconsin congressman Paul Ryan as his running mate might be significant for lots of reasons, but Ryan’s effect on the polls shouldn’t be one of them. Even so, there will be no shortage of pundits wasting words on the most trifling of polling observations in an attempt to ascribe greater significance to the event.

Has there been a boost for Romney now that he has announced his choice of Ryan? The answer is either “no” or “possibly, but the polls aren’t precise enough to measure it”.

The premise behind polling an electorate is that there is a relationship between how a sample of voters answers a question about voting intentions and how the whole electorate actually behaves on election day. You could think of an election as just a poll with an enormous sample size and no margin of error.

Many statisticians, pundits, and analysts try to extrapolate a final result from surveys using mathematical models, received wisdom, or sometimes just a general “feeling” about the national situation, the candidates, and the electoral architecture. Some of these methods are quite sophisticated and precise, yet the most important input to these models, the polling data, is anything but.

When you extrapolate from a small sample to a broader population, there will always be a margin of error. The bigger the sample, the smaller the margin of error. But the margin shrinks only with the square root of the sample size, so quadrupling the sample merely halves the error, and there comes a point at which adding respondents barely improves precision. Polling firms therefore strike a compromise between the cost of fieldwork and the quality of the data: you won’t find many polls with a sample size of 20,000 people, or any good ones under 500.
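To make the diminishing returns concrete, here is a minimal sketch using the textbook 95 per cent margin-of-error formula for a simple random sample at an even split. The sample sizes are illustrative, and real pollsters weight and adjust well beyond this:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 per cent margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000, 5000, 20000):
    print(f"n = {n:>6}: +/- {100 * margin_of_error(n):.1f} points")

# n =    500: +/- 4.4 points
# n =   1000: +/- 3.1 points
# n =   5000: +/- 1.4 points
# n =  20000: +/- 0.7 points
```

Going from 500 respondents to 1,000 buys more than a point of precision; going from 5,000 to 20,000 buys less than one, at four times the cost.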

When forecasters describe a state as “lean Obama” or “lean Romney,” it is an admission that polls taken well before election day are less predictions of the future than assessments of the present. There is time for the economy, a candidate’s gaffe, or a major national issue to change voters’ minds. Even so, analysts often point to movements in polls over time as evidence that the race has changed significantly. This is, in theory, a smarter use of the data, but in practice the claimed changes are usually not borne out by the data themselves.

Often in an election campaign a candidate’s numbers will tick up briefly and then subside, which is frequently called a bounce. But rather than being a real effect that fades as people find out more about the candidate or the novelty wears off, such a blip is quite often nothing more than natural statistical fluctuation, or a single outlying poll.

Responsible reporting would usually come to the same conclusion, but it often doesn’t. A recent Gallup headline faithfully reports a lack of movement in the company’s own polling, yet the article equivocates, suggesting that a bounce might yet show up because the second tranche of polling behind those numbers was more favourable. No statistically significant evidence is offered: the two halves of the polling differ by an insignificant single percentage point.
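For a sense of scale, here is a back-of-the-envelope significance check. The figures are hypothetical, since the tranche sizes were not reported, but the conclusion survives any plausible choice:

```python
import math

# Hypothetical halves of a poll, one point apart; sample sizes are
# assumptions for illustration (Gallup's actual tranche sizes unknown).
p1, p2 = 0.46, 0.47
n1 = n2 = 1000

# Standard error of the difference between two independent proportions.
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p2 - p1) / se
print(f"difference = {100 * (p2 - p1):.0f} point, z = {z:.2f}")
# difference = 1 point, z = 0.45
```

A z-score of about 0.45 is nowhere near the 1.96 conventionally required for significance at the 95 per cent level; a one-point gap between two halves of a poll is pure noise.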

There’s no story in the difference, but everyone needs an angle. Because they are businesses, even responsible polling companies have to make their polls worthwhile and sought-after; many polling companies use the attention lavished upon them around election time as a way of making a name they then use to get market research clients. As such, polls are glorified advertisements; you can see why their accompanying analyses might be designed to draw attention to them. The diligent reader should be aware of the agendas of pollsters and media outlets and judge accordingly.

If you consult a poll tracker or aggregator (like that of Real Clear Politics), don’t just read the headline figures. Check the movement and see whether it falls inside the margin of error.

As an example, a shift from 49–45 to 45–49 looks like an eight-point turnaround in the margin, but both results could easily occur if the true level of support is 47–47, as the simulation below makes concrete. In reality, the story of this campaign is much the same as it was last month: this is a competitive election, with Obama apparently ahead in a few key swing states. And we know from the recent precedents of 2000 and 2004 (historical anomalies, since truly close elections in the United States are rare) that it is state-level results that ultimately decide competitive elections.
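Returning to the 49–45 versus 45–49 example, here is a quick simulation. It assumes a dead-even 47–47 race with 6 per cent undecided and an illustrative sample of 1,000 voters per poll:

```python
import random

# Assume a true 47-47 tie with 6% undecided, polls of 1,000 voters each.
# How often does a "lead" of four or more points appear by chance alone?
TRIALS, N = 10_000, 1_000
big_gaps = 0
for _ in range(TRIALS):
    a = b = 0
    for _ in range(N):
        r = random.random()
        if r < 0.47:
            a += 1          # supports candidate A
        elif r < 0.94:
            b += 1          # supports candidate B; the rest are undecided
    if abs(a - b) >= 0.04 * N:
        big_gaps += 1
print(f"{100 * big_gaps / TRIALS:.0f}% of simulated polls show a 4+ point gap")
```

Under these assumptions, roughly one simulated poll in five shows a four-point gap one way or the other through sampling noise alone, so two such polls pointing in opposite directions tell us very little.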

What, then, of Ryan’s home state of Wisconsin? For a state Obama won by 13 points in 2008, recent polling looks surprisingly good for Romney. But if future polls show the same thing, will that really be attributable to Ryan? Possibly, but polling in Wisconsin has been much closer than 13 points for the last couple of months. It seems not out of the question to credit its continuing closeness, and indeed its potential for what on 2008 results would be a boil-over, to events already happening inside the state rather than to an announcement made outside it.

The difference between the 2008 election result and the polling in Wisconsin in 2012 is surely significant, but the polls can’t tell us why. We can only guess. It may be that Ryan will be a factor, but his popularity in his district might already have been reflected in a general trend towards the Republican Party before the announcement.

We shouldn’t jump to conclusions, because other explanations may fit our observations just as well. Romney may well have picked Ryan to capitalise on momentum in the state detected by his team’s own internal polling, rather than out of any belief that Ryan himself would move the numbers. The major parties have the money to do the polling that can pick up such sentiment. We consumers of the media don’t get those numbers unvarnished; they are usually leaked to gain a media advantage or to support an agenda.

For a polling outfit not funded by the deep pockets of a political party, polling is expensive enough when it asks only for voting intention. Tacking on questions about why people vote the way they do makes the answers unmanageably expensive to collect, code, and analyse. Nobody can afford to do that as a public service, and no media outlet will pay for it. Not when people have become quite accustomed to pollsters, journalists, and even their friends extrapolating qualitative conclusions from a couple of headline numbers.

Who needs the expense?