Page last updated at 12:26 GMT, Friday, 26 March 2010

Why opinion polls are like soup

Michael Blastland
GO FIGURE
Different ways of seeing stats

With an election in the offing we can expect to hear more about how the parties are doing in the polls. But how accurate are they at guessing the outcome, asks Michael Blastland in his regular column.

You can tell they have the whiff of power in their nostrils, like dogs scenting bacon. Politicians - of all three main varieties - feel the voters might yet favour them with the big buttie of high office.

Tasting soup
Polls give you a flavour of public opinion

Why? The polls, of course, erratic enough of late to give different measures of hope all round.

A switch of 1% of the vote could be the difference between a hung parliament and a Conservative majority of 30 or more. And 1% in the polls is nothing. Such differences come and go from week to week, pollster to pollster.

Hence the frenzy of uncertainty and hope. Though politicians say they're not swayed by the polls, does anyone believe it? I'd sweat on them. Wouldn't you?

And I say this, knowing that there's a chance they're moonshine. That's not for want of trying: political pollsters are smart, honest people, but the job is daunting. The miracle is that their polls are often close to the real results. Though not always. At the UK General Election in 1992 for example, they called the wrong winner.

Are they better now? Here is why we just don't know.

Snapshot Sheffield

Pollsters treat the electorate like soup: one small spoonful gives the flavour of the whole. Ask 1,000 people what they think and you'll have a good idea of the rest.

Well, roughly. Because people don't stir themselves, at least not properly, and you never know if your spoonful contains lumps. Picking 1,000 voters by dialling random phone numbers - one technique - might, by some freak of chance, give you a sample that includes 200 anarchists or 300 Natural Law yogic flyers. Unlikely, but you see the point: there could be accidental bias in that sample.
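That freak-spoonful risk can be put in numbers with a quick simulation - a sketch, not any pollster's actual method, and the 30% support figure and party are invented. Draw many random samples of 1,000 from a notional electorate and watch how the spoonfuls scatter around the true flavour:

```python
import random

random.seed(42)

# Notional electorate of one million: 30% support "Party A" (an invented figure).
electorate = [1] * 300_000 + [0] * 700_000

# Draw 200 independent random samples of 1,000 voters each.
sample_shares = []
for _ in range(200):
    spoonful = random.sample(electorate, 1000)
    sample_shares.append(100 * sum(spoonful) / 1000)

print("true support: 30.0%")
print(f"samples ranged from {min(sample_shares):.1f}% to {max(sample_shares):.1f}%")
```

Most spoonfuls land within a few points of the true 30%, but the occasional one is noticeably lumpier - and a real pollster only ever gets to taste one.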

Another way of seeing the problem is to use another of the pollsters' metaphors, the "snapshot", as in "this is just a snapshot of voting intentions".

Let's use it to sample Sheffield. This is facetious, admittedly, but fun - so here are some snapshots of Sheffield.

Take one snapshot, one sample, and you might be surprised to learn how rural Sheffield is; look at another and who would have guessed how dark?

Snapshots of Sheffield
Different snapshots of Sheffield can capture different polling results

So snapshots might not capture the full variety. And then what? Then comes the real business of polling. Because if you know your sample might be biased, you can try to correct it.

This curious art is known as weighting. If your sample contains 500 women - but to mirror the electorate it would need 550 - you multiply the votes of each of those women in your poll by 1.1, an extra 10%.

Weighting arts

You can do the same for social class or age, for example, and thus hope to make this small spoon of soup more like the whole.
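As a sketch of that arithmetic - with invented vote counts, and a matching down-weight of 0.9 for the men to keep the total at 1,000, which the column doesn't spell out:

```python
# Hypothetical raw sample: 500 women and 500 men,
# in an electorate that is 55% women.
women_in_sample, men_in_sample = 500, 500
target_women, target_men = 550, 450  # what a 1,000-strong mirror would hold

w_weight = target_women / women_in_sample  # 1.1: each woman counts 10% extra
m_weight = target_men / men_in_sample      # 0.9: each man counts 10% less

# Invented vote counts for one party within the sample.
women_for_party = 200  # 40% of the 500 women polled
men_for_party = 250    # 50% of the 500 men polled

raw_share = 100 * (women_for_party + men_for_party) / 1000
weighted_share = 100 * (women_for_party * w_weight
                        + men_for_party * m_weight) / 1000

print(f"unweighted: {raw_share:.1f}%, weighted: {weighted_share:.1f}%")
```

Because the under-represented women here lean away from the party, weighting nudges its share down from 45% to 44.5% - half a point, which in a close race is far from trivial.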

A shame that people sometimes lie, or won't answer, or aren't available. Because then you have to worry whether these people, the ones left out, are more likely to be supporters of one party or another. You also need to know who is likely to actually turn out and vote, and whether, among those who say they will, supporters of some parties are more likely to be true to their word.

After all this, at the last five general elections, Labour support has still been consistently exaggerated in the opinion polls and no one is quite sure why.

So the latest of the weighting arts - a controversial one - has been to adjust for previous voting preference, to try to ensure that the past voting of the polled sample mirrors the past voting of the whole electorate.

Except that people don't remember how they voted, or change their minds about how they remember having voted, or didn't vote, and depending on the recent performance of the government might want to be more or less associated with having voted for it.

On top of all that, this is the first time since 1979 that an incumbent Labour government has been seriously threatened with losing office, according to the polls. There is also a real possibility of a hung parliament, according to the polls. People's recollections and intentions, determination to vote, enthusiasm to talk to pollsters, and propensity to late decisions might all be quite different to any recent election - and might differ according to which party they support.

When the polls say there is a margin of error of plus or minus 3% - meaning that a reported difference of six percentage points could be no difference at all, or could be a difference of 12 - what they don't say is that this figure ignores most of these other potential sources of inaccuracy.
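That plus-or-minus 3% is just the textbook sampling-error formula for a simple random sample of about 1,000 people - a sketch of the arithmetic, not any pollster's published method:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% sampling margin of error, in percentage points, for a
    proportion p measured on a simple random sample of n people."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Worst case (a 50% share) for a 1,000-person poll: roughly 3 points.
moe = margin_of_error(0.5, 1000)
print(f"margin of error: +/-{moe:.1f} points")

# Each party's share is uncertain by ~3 points, so a reported 6-point
# lead could in truth be anywhere from about 0 to about 12 points.
lead = 6
print(f"a {lead}-point lead could really be "
      f"{lead - 2 * moe:.1f} to {lead + 2 * moe:.1f} points")
```

And that, as the next section argues, covers only the sampling error - none of the lying, refusing, weighting or turnout problems above.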

Lumpy soup

Humphrey Taylor, of one of the biggest pollsters, Harris, has written: "When the media print sentences such as 'the margin of error is plus or minus three percentage points,' they strongly suggest that the results are accurate to within the percentage stated. That is completely untrue and grossly misleading.

"There is only one honest and accurate answer to this question - which I sometimes use to the great confusion of my audience - and that is: 'The possible margin of error is infinite.'"

In the circumstances, the pollsters' record is pretty good, most of the time. And they are open about their methods. None of this is a secret. But do we know if they will be close this time?

No.

There are new ingredients in that soup. Even as they make continual adjustments to their weighting techniques, trying to improve the blend, the electorate might be becoming lumpy in new ways. We'll know better on polling day.

Will that discourage the politicians? I doubt it. For even better than hoping that the polls are right is hoping that they are wrong, in your favour.

Thanks to Dr Mark Strong and Jenny Freeman of Sheffield University for the Sheffield sampling analogy.

Below is a selection of your comments

Dialling a sample of voters at random is biased - you have excluded potential voters without landlines. These would tend to be younger adults who rely on their mobiles. A truly random selection of 1,000 voters should give a fairly accurate prediction - provided all voters are included in the field from which the sample is selected.
Embra

I remember one of Margaret Thatcher's elections particularly illustrated this. After she got in, there was a lot of talk about "I didn't vote for her" - in fact only one person out of a hundred or so admitted voting Tory. But according to the voting results at least 1 in 3 people voted Tory, so what about the sample? Well, either it was one of the 'lumps', or more likely people didn't want to be associated with an unpopular vote so they lied about it. The trouble is that you can't get rid of the 'lumps' by weighting, because you never know (without sampling a high proportion of the population) whether you just happened to hit a 'lump', no matter how unlikely.
Chris C, Aylesbury UK

What you don't add is that the goalposts are constantly moved to provide the required result. When our town does a poll and finds that 80% of people don't want a particular supermarket, our district council tells us our poll is unrepresentative. They don't do a poll at all, and say that the (opposite) decision of 16 councillors on the Development Control Committee (none from our town) counts for more than ours - and that what that 80% figure really represents is "people who haven't been properly informed or asked the wrong question"!
Sandra, Devon

I thought polling organisations had a carefully chosen list of 1,200 or so people they could refer to in order to be statistically significant?
Dave Hartley, York, UK

My father - who was interested in politics and never failed to vote, but was not a supporter of any one party - answered the door to a pollster once. Upon being asked how he intended to vote in the upcoming election, he responded: "In a secret ballot!"
Megan, Cheshire UK

I thought this article would be about soup!!

Maybe people who eat/drink "cream of..." (no lumps) soup vote one way, and people who go for more broth-based soup (big lumps) vote another way. Then there's tinned vs fresh. And as for those who make their own soup, they could vote for anyone!
Mel, Oxford

No one ever asked for my opinion or that of any other person I know - over many years I've never known anyone who was canvassed for opinion. The only genuine result will be on polling day.
Rhoda Hamer, Tansley, Matlock, Derbyshire, UK

Humphrey Taylor's assertion that "the possible margin of error is infinite" is false: the result is bounded by the limits of 0% and 100%, so if a poll says 28% will vote Conservative, the true margin of error is -28% to +72%. His assertion is very revealing of people's general inability to recognise the difference between a theoretical statistical model and the reality of applying that model to the world.
Stu, Brighton
