A series of surprise voting results this year and last has led to much speculation as to the cause of these high-profile prediction failures. They include the election of Donald Trump in the US and of the Conservative Party in Britain, the UK referendum to leave the EU, and the re-election of Israeli Prime Minister Benjamin Netanyahu.
Each has its own particular circumstances, of course, but a common denominator has been finger-pointing at opinion polling, with suggestions that the industry is facing a crisis and potential demise. Such thinking is premature, and is largely based on over-simplification, mischaracterisation, and a degree of scapegoating.
It is nothing new for opinion polls to diverge from actual voting outcomes. However, people forget that such polls are not predictions, but snapshots of sentiment at the time they are undertaken. Prominent polling organisations have a good track record, but they make headlines only when they are wrong, giving the impression that they are wrong more often than they actually are.
They are even criticised when they get the result correct, but do not accurately reflect its extent. For instance, opinion polls correctly suggested that last year's referendum on Scottish independence would result in a vote to remain part of the UK, but they got flak for suggesting the outcome would be tighter than it turned out to be.
Opinion polls considerably underestimated Trump's following, but his rival Hillary Clinton did win the popular vote, albeit by a far smaller margin than they suggested she would. The problem here lies with the US Electoral College, whose outcome does not necessarily align with the popular vote (hence Trump's victory despite getting fewer votes nationally).
He won a few key states by a very slim margin - had they gone Clinton's way, she would be the next president, and pollsters would not be facing anywhere near the level of opprobrium they are facing now.
Opinion polling faces some significant shortcomings that are difficult to overcome, and this should be recognised by those who place great stock in it. Changes in communication technology are making it harder to obtain truly representative samples, and there are suggestions that in the cases of the US election and Brexit, opinion polls underrepresented the white working classes.
It is difficult to predict how people will vote if they tell pollsters they are undecided, or if they do not want to express their political view. "People … can change their minds, they can decide to not share their opinions or they can flat-out lie," wrote Mona Chalabi, data editor at Guardian US.
This has been exacerbated by the vitriolic nature of certain elections and referendums, among them the US election and Brexit. There are suggestions that some people feel ashamed to tell pollsters how they will vote, either out of fear of being judged, or because they are struggling with their decision.
It is also difficult to predict voter turnout, and which candidate or movement will get more people to the polling booths. Opinion polls may even inadvertently influence this. If they show one side in the lead, this may encourage complacency and galvanise the losing side to maximise turnout.
"Polling data led the Clinton campaign to feel quietly confident of a victory in Wisconsin and Michigan, and to therefore air few advertisements in those states," wrote Chalabi. "Both ended up voting for … Trump."
Keeping up to date with changes in technology, communication platforms, demographics and other relevant factors is often necessarily a reactive process for pollsters, rather than a proactive one.
Social media bubbles
Newer methods of prediction based on data from social media have been successful, but they have not been around long enough for their flaws to be highlighted by prediction failures. These methods include measuring which side has more tweets in its favour, and measuring engagement data from sites such as Google, Facebook and Twitter.
However, there are obvious shortcomings to this. Not all tweets can be pigeonholed as for or against a particular candidate or party, and they do not necessarily indicate the tweeter's voting intentions. In addition, reading a tweet or Facebook post does not necessarily mean the reader endorses the view expressed.
Furthermore, thousands of automated Twitter accounts campaigned for Trump and Clinton. "If measuring social media sentiment becomes a more established way of predicting elections, there will be a great incentive for each side to create bots to give the impression they will win," wrote Patrick Evans of the BBC's user-generated content and social news team.
As social media proliferates and becomes a primary news source for a growing number of people, the way it is used is key to explaining the shock over certain voting results. On these platforms, people tend to be connected with like-minded others, and tend to read and share content with which they agree, aided by algorithms that ensure you receive the kind of content you like.
This creates bubbles in which opposing views are minimised, if not entirely absent, giving the impression that those views are less popular. As such, shock results are usually only shocking to those whose bubbles have been burst.
Blame is assigned and accusations made, because that is easier and more comforting than facing the fact that your view - the one echoed for so long in your self-made chamber - is not the prevailing one, or that there is a whole other narrative with which you are unfamiliar.
Sharif Nashashibi is an award-winning journalist and analyst on Arab affairs.
The views expressed in this article are the author's own and do not necessarily reflect Al Jazeera's editorial policy.