
The polls & the 2019 election

Murray Goot

Everyone who believes the polls failed — individually and collectively — at the last election has a theory about why. Perhaps the pollsters had changed the way they found respondents and interviewed them? (Yet every mode — face-to-face interviewing, computer-assisted telephone interviewing via landlines and mobiles, robopolling, and interviewing online — produced more or less the same misleading result.) Perhaps the pollsters weighted their data inadequately? Did they over-sample the better-educated and under-sample people with little interest in politics? Perhaps, lemming-like, they all charged off in the same direction, following one or two wonky polls over the cliff? The list goes on…

But the theory that has got most traction in the post-election polling is one that has teased poll-watchers for longer than almost any of these, and has done so since the advent of pre-election polling in Australia in the 1940s. This is the theory that large discrepancies between what polls ‘predict’ and what voters do can be explained by the existence of a large number of late deciders — voters who don’t really make up their minds until sometime after the last of the opinion polls are taken.

In 2019, if that theory is right, the late deciders would need to have either switched their support to the Coalition after telling the pollsters they intended to vote for another party, or shifted to the Coalition after telling the pollsters that they didn’t know which party to support. It was, after all, the Coalition that the polls underestimated, and Labor that they overestimated. On a weighted average of all the final polls — Essential, Ipsos, Newspoll, Roy Morgan and YouGov Galaxy — the Coalition’s support was 38.7 per cent (though it went on to win 41.4 per cent of the vote) and Labor’s 35.8 per cent (though it secured just 33.3 per cent of the vote). Variation around these figures, poll by poll, wasn’t very marked. Nor was there much to separate the polls on the two-party-preferred vote: every poll underestimated the difference between the Coalition’s and Labor’s two-party-preferred vote by between 2.5 and 3.5 percentage points.
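
For readers who want to check the arithmetic, the short sketch below simply restates the figures quoted above, comparing the weighted poll average with the official first-preference result. The numbers are those cited in this article; the calculation is illustrative rather than any pollster’s published workings.

```python
# A minimal check of the primary-vote errors implied by the figures above:
# the weighted average of the five final polls against the election result.
# All numbers are those quoted in the text.

poll_avg = {"Coalition": 38.7, "Labor": 35.8}   # final-poll weighted average
result   = {"Coalition": 41.4, "Labor": 33.3}   # 2019 first-preference result

for party in poll_avg:
    error = poll_avg[party] - result[party]
    print(f"{party}: polled {poll_avg[party]}, won {result[party]}, error {error:+.1f} points")
# Coalition: -2.7 points (understated); Labor: +2.5 points (overstated),
# a combined miss of just over five points on the Coalition-Labor primary gap.
```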

The most recent, and most widely reported, research to have concluded that late deciders made the difference is a study of voters interviewed a month before the election and reinterviewed a month after it, published recently by the ANU Centre for Social Research and Methods. According to Nicholas Biddle, the author of the study, the group that determined the result comprised those who were ‘undecided in the lead-up to the election and those who said they were going to vote for the non-major parties but swung to the Coalition.’ At the beginning of July, the Australian’s national affairs editor, Simon Benson, also argued that those who were only ‘leaning’ towards Labor ahead of the election had ‘moved violently away from Labor’ once they entered the polling booths and had a pencil in their hands; the ‘hard’ undecided, those who registered in the opinion polls as ‘don’t knows,’ also decided not to vote Labor. At the beginning of June, an Essential poll, conducted shortly after the election, had presented evidence for much the same point.

Over the years, the idea that polls might fail to pick the winner because they stop polling too early had become part of the industry’s stock-in-trade. Especially in the period before 1972, when there was only one pollster, Roy Morgan, the argument had been difficult to refute. By 2019, it was an oldie — but was it also a goodie?

The ANU study

For the Biddle report, two sets of data were collected from the Life in Australia panel, an online poll conducted by the Centre for Social Research and Methods. The first was collected between 8 and 26 April, with half the responses gathered by 11 April, five weeks ahead of the 18 May election; the second between 3 and 17 June, with half gathered by 6 June, three weeks after the election. The analysis was based on the 1844 people who participated in both.

In the first survey, respondents were asked how they intended to vote. Among those who would go on to participate in the June survey, the Coalition led Labor by 3.8 percentage points. In the second survey, respondents were asked how they had voted; among those who had participated in the April survey, the Coalition led Labor by 6.4 percentage points. These figures, based on first preferences, included those who said they didn’t know how they were going to vote (April) and those who didn’t vote (June).

Although Biddle says that the data ‘on actual voting behaviour and voting intentions’ were collected ‘without recourse to recall,’ this is misleading. While the data on voting intentions were collected ‘without recourse to recall’ — this is axiomatic — the same cannot be said for the data on voting behaviour. The validity of the data on voting behaviour, collected well after the election, is wholly dependent on the accuracy of respondents’ recall and their willingness to be open about how they remember voting. It can’t be taken for granted.

Among those who participated in both waves and reported either intending to vote (April) or having voted (June), support shifted. The Coalition’s support increased from 38.0 to 42.2 per cent, Labor’s increased from 34.1 to 35.4 per cent, while the ‘Other’ vote fell from 14.4 to 8.7 per cent. Only the Greens (13.6 per cent in April and 13.7 per cent recalled in June) recorded virtually no shift.

The panel slightly overshot the Coalition’s primary vote at the election (41.4 per cent) and, as the polls had done, also overshot Labor’s (33.3 per cent). More importantly, it overshot the Greens (10.4 per cent) and undershot the vote for Other (14.9 per cent), and did so by sizeable margins. It overestimated the Greens by 3.3 percentage points, or about one-third, and underestimated Other by 6.2 percentage points, or more than a third. These are errors the polls did not make. A problem with ‘Australia’s first and only probability-based panel,’ as the ANU study is billed, or a problem with its respondents’ recall of how they really voted? None of these figures — or the comparisons with the polls — are included in Biddle’s report; I’ve derived them from the report’s Table 3. Of course, the total shift in support across the parties was much greater than these numbers might indicate.
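
The derived figures can be checked with the sketch below, which compares the panel’s June recall with the official result. The recall numbers are those quoted above from the report’s Table 3; the percentage comparisons are illustrative arithmetic, not calculations that appear in Biddle’s report.

```python
# Recovering the derived errors from the figures quoted above: the panel's
# June recall against the official first-preference result.

recalled = {"Coalition": 42.2, "Labor": 35.4, "Greens": 13.7, "Other": 8.7}
result   = {"Coalition": 41.4, "Labor": 33.3, "Greens": 10.4, "Other": 14.9}

for party in recalled:
    error = recalled[party] - result[party]
    print(f"{party}: {error:+.1f} points ({error / result[party]:+.0%} of the actual vote)")
# Greens: +3.3 points, about a third too high; Other: -6.2 points,
# more than a third too low: errors the final polls did not make.
```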

From his data, Biddle draws three conclusions: that ‘voter volatility… appears to have been a key determinant of why the election result was different from that predicted by the polls’; that part of the ‘swing towards the Coalition during the election campaign’ came ‘from those who had intended to vote for minor parties,’ a group from which he excludes the Greens; and that the swing also came from those ‘who did not know who they would vote for.’

None of these inferences necessarily follows from the data. Indeed, some are plainly wrong. First, voter volatility only comes into the picture on the assumption that the polls were accurate at the time they were taken. And before settling on ‘volatility’ to explain why they didn’t work as predictions, one needs to judge that against competing explanations. Nothing in the study’s findings discounts the possibility that the public polls — which varied remarkably little during the campaign, hence the suspicions of ‘herding’ — were plagued by problems of the sort he notes in relation to the 2015 polls in Britain (too many Labour voters in the pollsters’ samples) and the 2016 polls in the United States (inadequate weighting for education, in particular), alternative explanations he never seriously considers.
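
To make the education point concrete, the stylised sketch below shows the kind of adjustment at issue: simple cell weighting so that an over-represented group counts in proportion to its population share. Every figure in it is invented for illustration, and nothing in it describes the method of any particular pollster.

```python
# A stylised illustration of 'weighting for education': simple cell weighting
# so that an over-represented group (here, university graduates) counts in
# proportion to its population share. Every figure below is invented for the
# example; real pollsters rake across many variables at once.

sample_share = {"graduate": 0.45, "non-graduate": 0.55}   # hypothetical raw sample
population   = {"graduate": 0.30, "non-graduate": 0.70}   # hypothetical benchmark

weights = {g: population[g] / sample_share[g] for g in sample_share}
# graduates weighted down (about 0.67), non-graduates up (about 1.27)

labor_support = {"graduate": 0.40, "non-graduate": 0.30}  # hypothetical vote split

unweighted = sum(sample_share[g] * labor_support[g] for g in sample_share)
weighted   = sum(sample_share[g] * weights[g] * labor_support[g] for g in sample_share)
print(f"unweighted {unweighted:.1%}, weighted {weighted:.1%}")
# unweighted 34.5%, weighted 33.0%: skipping the adjustment overstates Labor
# support whenever graduates are over-sampled and more Labor-leaning.
```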

Second, while positing a last-minute switch to the Coalition among those who had intended to vote for the minor parties might work with the data from Biddle’s panel, it cannot explain the problem with the polls. Had the minor-party vote swung to the Coalition, it would have finished up a good deal smaller than the polls estimated. But at 25.3 per cent, minor-party support turned out almost exactly as the polls expected (25.7 per cent, on a weighted average). In its estimate of the minor-party vote — the Greens vote, the Other vote, or both — the ANU panel, as we have seen, turned out to be less accurate (21.4 per cent).

Third, in the absence of a swing from minor-party voters to the Coalition, a last-minute swing by those in the panel ‘who did not know who they would vote for’ can’t explain the result. That’s the case even if the swing among panel members, reported by Biddle, occurred entirely on the day of the election and not at some earlier time between April and 18 May, the only timeline the data allow. In the final pre-election polls, those classified as ‘don’t know’ — 5.7 per cent on the weighted average — would have had to split about 4–1 in favour of the Coalition over Labor on election day in order to boost the Coalition’s vote share to something close to 41.4 per cent and reduce Labor’s vote share to something close to 33.3 per cent (unavoidably rendering the polls’ estimate of the minor-party and independent vote slightly less accurate). In the ANU panel, those who had registered as ‘don’t know’ in April recalled dividing 42 per cent (Coalition), 21 per cent (Labor) and 36 per cent (Other) in May. That is certainly a lopsided result (2–1 in favour of the Coalition over Labor) but nowhere near as lopsided as would be required to increase the gap between the Coalition and Labor in the polls (roughly three percentage points) to eight percentage points, the gap between the Coalition and Labor at the election.
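
One way to reconstruct that back-of-envelope calculation is sketched below. It assumes that the published poll figures exclude the ‘don’t knows’, that the undecided made up 5.7 per cent of the eventual electorate, and that decided voters voted as they had told the pollsters; these are illustrative assumptions, not a description of how any pollster allocates the undecided.

```python
# A rough reconstruction of the back-of-envelope arithmetic, under the
# assumptions stated above. All poll and election figures are those quoted
# in the text; everything else is illustrative.

poll = {"Coalition": 38.7, "Labor": 35.8, "Other": 25.5}    # weighted poll average
actual_coalition = 41.4                                     # election result
dk = 5.7                                                    # undecided share

decided = 100 - dk   # 94.3 per cent of voters had a stated intention
# Decided voters' choices expressed as shares of the whole electorate:
base = {party: share * decided / 100 for party, share in poll.items()}
# -> Coalition about 36.5, Labor 33.8, Other 24.0

needed = actual_coalition - base["Coalition"]   # points the Coalition must gain
print(f"Coalition needs {needed:.1f} of the {dk} undecided points "
      f"({needed / dk:.0%} of them)")
# -> about 4.9 of 5.7 points, i.e. roughly 86 per cent of the undecided,
#    leaving less than one point for Labor and nothing at all for the rest.
```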

Biddle wasn’t the first to argue that there was a late swing — a swing that the polls couldn’t help but miss — and to produce new data that purported to show it.

 

Murray Goot is an Emeritus Professor of Politics at Macquarie University. Read the rest of this analysis at Inside Story
