Inadequate sample sizes, failure to tap social media sentiment behind Oz opinion poll debacle: Experts

PTI Updated - May 20, 2019 at 09:24 PM.

Polling companies in Australia are facing the heat after the forecast debacle, with data analysts blaming unrepresentative samples, pollsters’ inability to keep up with technology and inadequate monitoring of real-time sentiment on social media.

Defying exit poll predictions, the ruling Liberal-National conservative coalition led by Prime Minister Scott Morrison claimed a shock victory in Saturday’s general election. It was a stunning turnaround after every opinion poll over the course of the campaign had predicted a Labor Party victory.

A Galaxy exit poll had put the Labor Party, led by Bill Shorten, at 52 per cent of the vote compared to 48 per cent for the Liberal-National coalition, according to Nine News.

The Federal opinion poll aggregate BludgerTrack 2019 - which draws from Newspoll, Galaxy, Ipsos, YouGov, Essential Research and ReachTEL polls - also had Labor at 51.7 per cent and the Coalition sitting at 48.3 per cent of the vote on a two-party preferred basis when it was last updated on Friday.

In the wake of the Labor Party’s shocking loss, many on social media have railed against the results, Australian news site news.com.au reported.

Political scientist Dr Andy Marks, who said earlier in the campaign that a Labor victory was “virtually unquestionable” based on polling, told SBS News that the result shows how “worthless mainstream polling has become”.

“I think this is really a cataclysmic era of polling in this country,” he said.

“We’ve seen surprises with Brexit (in the UK) and with (Donald) Trump (in the US) in recent years, but generally Australia, due to compulsory voting and other more stabilising factors, hasn’t really been exposed.”

Tasmanian electoral analyst Kevin Bonham also described the events as a “massive polling failure”.

“Pollsters will have to look at whether their sampling was unrepresentative.” He said he suspected that polling companies tinkered with the raw numbers and made adjustments to stop results swinging wildly from poll to poll.

“I don’t have direct evidence of that ... nobody wants to be pushing polling that bounces around too much,” he was quoted as saying by the Financial Review.

As of Sunday, Bonham said there seemed to be a three per cent error across every poll in the past two weeks, which is far outside the usual margin of error.

“It’s like one poll can be three per cent out and that’s what you would sort of expect now and then by random chance. But all the polls being out by that amount in the same direction and getting all the same results is something that absolutely cannot happen by random chance,” he said.
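As a rough illustration of Bonham’s point, the short Python simulation below uses hypothetical numbers (six independent polls of 1,000 respondents each and an assumed true Coalition two-party-preferred vote of 51.5 per cent, none of which come from the article) to show how often pure sampling error would push one poll, or all of them, three points low:

    import math
    import random

    # Hypothetical values for illustration only (not from the article):
    TRUE_COALITION_SHARE = 0.515   # assumed true two-party-preferred Coalition vote
    SAMPLE_SIZE = 1000             # assumed respondents per poll
    NUM_POLLS = 6                  # independent polls in the final fortnight
    TRIALS = 200_000

    # Standard error of a single poll's estimate (normal approximation).
    SE = math.sqrt(TRUE_COALITION_SHARE * (1 - TRUE_COALITION_SHARE) / SAMPLE_SIZE)

    any_low = 0   # trials where at least one poll understates the Coalition by 3+ points
    all_low = 0   # trials where every poll understates the Coalition by 3+ points
    for _ in range(TRIALS):
        polls = [random.gauss(TRUE_COALITION_SHARE, SE) for _ in range(NUM_POLLS)]
        misses = [p <= TRUE_COALITION_SHARE - 0.03 for p in polls]
        any_low += any(misses)
        all_low += all(misses)

    print(f"at least one poll 3+ points low: {any_low / TRIALS:.2%}")   # roughly one trial in six
    print(f"every poll 3+ points low:        {all_low / TRIALS:.2%}")   # effectively never

A single three-point miss turns up regularly by chance; six simultaneous misses in the same direction essentially never do, which is why Bonham points to systematic causes rather than bad luck.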

According to Bonham, a number of factors might have been at play, including unrepresentative samples, oversampling people who are politically engaged and herding (when polling firms adjust their results to more closely match competitors out of fear of being wrong).
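Herding, in particular, can be illustrated with a toy example. The sketch below uses made-up numbers and an invented adjustment rule (the article does not describe any firm’s actual method): each pollster pulls its raw reading part of the way toward the prevailing industry average, which narrows the published spread without removing any shared bias.

    # Hypothetical raw Labor two-party-preferred readings and an assumed
    # industry average; the adjustment rule is invented for illustration.
    raw_results = [54.0, 52.5, 49.5, 53.5, 50.0]
    industry_average = 51.5

    def herd(raw, anchor, weight=0.6):
        """Pull a raw reading part of the way toward the prevailing average."""
        return round(raw + weight * (anchor - raw), 1)

    published = [herd(r, industry_average) for r in raw_results]
    print("raw spread:      ", max(raw_results) - min(raw_results))   # 4.5 points
    print("published spread:", max(published) - min(published))       # 1.8 points

The published numbers look reassuringly consistent, but if the raw data share a common error, herding does nothing to correct it.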

And while compulsory voting may have protected Australia against inaccurate polling in the past, some experts believe it was also a contributor to what happened this time.

Writing in The Conversation on Sunday, University of Melbourne statistician Adrian Beaumont said people with higher education levels are more likely to respond to polls, potentially skewing the results.

In voluntary voting systems this skew has less of an effect, because better-educated people are also more likely to turn out to vote, so the poll sample more closely resembles the actual electorate, he argued.
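A small worked example, with entirely hypothetical numbers, makes the argument concrete: suppose 40 per cent of adults are university-educated, the educated split 55-45 to Labor, everyone else splits 45-55, and the educated are twice as likely both to answer a poll and (under voluntary voting) to turn out.

    # All figures below are assumptions chosen to illustrate Beaumont's argument.
    educated_share = 0.40   # share of adults with higher education
    labor_educated = 0.55   # assumed Labor two-party-preferred vote among the educated
    labor_other    = 0.45   # assumed Labor vote among everyone else

    def labor_share(weight_educated):
        """Labor share in a group where the educated make up `weight_educated`."""
        return weight_educated * labor_educated + (1 - weight_educated) * labor_other

    # Poll sample: the educated are assumed twice as likely to respond.
    poll_weight = (educated_share * 2) / (educated_share * 2 + (1 - educated_share))

    print("poll estimate:               ", round(labor_share(poll_weight), 3))     # 0.507
    print("compulsory-voting electorate:", round(labor_share(educated_share), 3))  # 0.490
    # Under voluntary voting with the same turnout skew, the electorate is
    # weighted toward the educated in the same way as the poll sample.
    print("voluntary-voting electorate: ", round(labor_share(poll_weight), 3))     # 0.507

Under compulsory voting the whole population votes, so the education-skewed poll overstates Labor by nearly two points; under voluntary voting the electorate is skewed in much the same way as the poll sample, and the error largely cancels out.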

Political scientist Dr Andy Marks, however, said the issues were largely due to the fact that mainstream polling companies have not been able to keep up with technology. The mobile age has affected pollsters’ ability to generate random samples, he said.

“The old idea of ringing up somebody on the landline and asking them who they’ll vote for is redundant and has kind of been redundant, I think, for the last three or four years,” he said.

According to Marks, polling companies also failed to take into account the real-time sentiment displayed on social media when calculating their results.

But Griffith University data scientist Professor Bela Stantic had correctly predicted that the conservative coalition would sweep to victory.

Stantic ran millions of tweets through his programmes to find out what people were thinking and feeling about the election in Australia.
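The article does not describe how Stantic’s programmes work, but a generic sketch of the kind of pipeline involved, with an invented keyword lexicon and scoring rule used purely for illustration, might look like this:

    # Generic illustration only: the party keywords, sentiment word lists and
    # scoring rule are invented; they are not Stantic's actual method.
    from collections import defaultdict

    PARTY_KEYWORDS = {
        "coalition": {"morrison", "liberal", "nationals", "coalition"},
        "labor": {"shorten", "labor", "alp"},
    }
    POSITIVE = {"good", "great", "support", "win", "love"}
    NEGATIVE = {"bad", "terrible", "against", "lose", "hate", "never"}

    def score_tweet(text):
        """Return (party, sentiment score) for a tweet, or None if no party is named."""
        words = text.lower()
        party = next((p for p, kws in PARTY_KEYWORDS.items()
                      if any(k in words for k in kws)), None)
        if party is None:
            return None
        return party, sum(w in words for w in POSITIVE) - sum(w in words for w in NEGATIVE)

    def aggregate(tweets):
        """Sum sentiment per party across a stream of tweets."""
        totals = defaultdict(int)
        for tweet in tweets:
            scored = score_tweet(tweet)
            if scored:
                party, sentiment = scored
                totals[party] += sentiment
        return dict(totals)

    print(aggregate([
        "I love Morrison, the Coalition will win",
        "Never voting Labor again",
        "Shorten was great in the debate",
    ]))   # {'coalition': 2, 'labor': 0}

Production systems typically replace the word lists with trained sentiment models and weight accounts to reduce duplication and bots, but the basic shape (classify each tweet by party and tone, then aggregate at scale) is the same.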

He said there were a number of factors at play, including sample sizes, the unpredictability of mobile phones, and a fundamental difference in how upfront people choose to be on social media versus in a phone poll.

“Firstly, my samples are much bigger,” he told news.com.au. “I collected about two million relevant tweets from about half a million accounts. It’s a very big sample size from all around Australia.”

He said his research had shown that a 5 per cent sample of social media data equated to 95 per cent accuracy, provided there is enough data.

He said the two million tweets he analysed in just a few days gave a far broader representation of people than polls of 1,000 respondents. He estimated the tweets equated to half a million people.
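Purely in terms of random sampling error, scale does matter, as the back-of-the-envelope calculation below shows; the caveat the comparison glosses over is that Twitter users are not a random sample of voters, so a very large sample can still carry a systematic bias.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Half-width of a 95% confidence interval for a share p from n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    # Conventional phone poll vs roughly half a million Twitter accounts.
    for n in (1_000, 500_000):
        print(f"n = {n:>7}: +/- {margin_of_error(n) * 100:.2f} percentage points")
    # n =    1000: +/- 3.10 percentage points
    # n =  500000: +/- 0.14 percentage points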

He also said the shift in opinion polling from landlines to mobile phones can introduce inaccuracies.

“These polls are actually using phone lines which are not landlines anymore. People on mobile phones move around, so it’s not a clear prediction,” Stantic said.

He also said people are more “honest” on social media or talking to their friends about their sentiments on elections, but might be more “hesitant” when it comes to a phone-based poll.

Published on May 20, 2019 15:34