Adolf Hitler once said: “If you tell a big enough lie and tell it frequently enough, it will be believed.” In this hyper-political year, a chorus of critics is loudly proclaiming that you can’t trust polls anymore. And many people seem to believe it.
Can you trust surveys to make accurate predictions?
Let’s examine the evidence. The critics of public polls invariably cite declining response rates as a reason not to trust survey research. Yet, the proof is in the pudding, and the pudding is predictive accuracy, not response rates.
According to the American Association for Public Opinion Research and Nate Silver’s FiveThirtyEight, the average error in candidate preference polls conducted by leading survey organizations was less than 3 percentage points in 2014, which is within the margin of sampling error for a sample size of 1,000. Furthermore, the accuracy of election polls has been virtually unchanged over the past 20 years, even as response rates have declined.
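The "margin of sampling error for a sample size of 1,000" can be checked with the standard formula for a simple random sample, MOE = z·√(p(1−p)/n), evaluated at p = 0.5 (the worst case) and z = 1.96 for 95% confidence. This is a minimal sketch of that textbook calculation, not any particular pollster's method:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error, in percentage points,
    for a simple random sample of size n at proportion p."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1000), 1))  # -> 3.1
```

For n = 1,000 the margin works out to about ±3.1 points, which is why an average error under 3 points is consistent with sampling error alone.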
How can polls with low response rates make accurate predictions?
There is a tendency in the media to equate declining response rates with increasing non-response bias and, thus, decreasing accuracy. But that is a simplistic view. Non-response bias is the product of two factors: (1) the non-response rate and (2) the degree to which respondents differ from non-respondents. Because multiplying any number by zero always results in zero, there is no non-response bias if respondents do not differ from non-respondents. Pollsters have developed sophisticated weighting algorithms to compensate for demographic differences in survey response, thus minimizing non-response bias despite declining response rates.
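The two-factor logic above, and the basic idea behind demographic weighting, can be sketched in a few lines. This is an illustrative simplification (real pollsters use far more elaborate adjustments, e.g. raking across many variables); the numbers in the examples are hypothetical:

```python
def nonresponse_bias(nonresponse_rate, respondent_mean, nonrespondent_mean):
    """Bias of the respondent mean = non-response rate x
    (respondent mean - non-respondent mean). If the two groups
    do not differ, the bias is zero regardless of the rate."""
    return nonresponse_rate * (respondent_mean - nonrespondent_mean)

# An 80% non-response rate produces zero bias when the groups agree...
print(nonresponse_bias(0.80, 52.0, 52.0))  # -> 0.0
# ...but a 4-point bias when they differ by 5 points:
print(nonresponse_bias(0.80, 52.0, 47.0))  # -> 4.0

def poststrat_weight(pop_share, sample_share):
    """Simplest post-stratification weight: a group's share of the
    population divided by its share of the sample."""
    return pop_share / sample_share

# If young adults are 20% of the population but only 10% of respondents,
# each young respondent's answers are counted twice:
print(poststrat_weight(0.20, 0.10))  # -> 2.0
```

The weighting step is what lets a low-response-rate poll stay accurate: it pushes the effective difference between respondents and the full population toward zero, which drives the bias product toward zero as well.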
Why do we see so much in the media and on the Internet about the failures of political polls to predict elections?
There are several reasons.
*Pundits and politicians who are not faring well in the polls are often quick to criticize them. Any well-publicized poll that fails to predict an election is likely to be publicly called out while those that succeed are ignored.
*News organizations are not funding as many high-quality polls as they used to. Not only has the cost of conducting a valid and reliable poll increased, but news organizations are also more likely to be strapped for the cash they need to conduct them. As a result, more low-quality polls are being conducted. Nate Silver of FiveThirtyEight has conducted analyses of polling accuracy that clearly show that polls based on sound methodologies predict elections much better than those based on questionable methodologies. For example, some organizations are using “Robopolls,” which use automated scripts, because they are much cheaper than polls using live interviewers. However, they cannot – by law – call mobile phones and, thus, they exclude a very large percentage of the population. The increasing use of Robopolls and other less-than-rigorous methodologies makes it inevitable that some public polls will fail.
*The media and public may have false expectations about polling accuracy. An election may turn on a very small difference that lies well within the margin of sampling error of even the most ambitious polls. Also, it is important to remember that polls are a snapshot of vote intent at a given moment in time, and other factors (e.g., snowstorms, family or peer pressure) can intervene between vote intent and the actual vote. And then, of course, there is the difficult process of predicting voter turnout, upon which many an election has turned.
The Iowa caucuses are a case in point for having unrealistic expectations of the polls’ ability to predict the results. The caucuses are social events where the voting takes place in public. There is plenty of opportunity for bandwagon effects, especially for Democrats who assemble at their favorite candidate’s table, while the undecided voters stand back until they see how things are going. If the candidate you have chosen is not getting enough votes, you can move to the table of another candidate. Social pressure can also change vote intent at the caucuses. Suppose you are married to a Democratic party activist who has been campaigning for Hillary Clinton, but you harbor a desire to vote for Bernie Sanders. You would have to go to a separate table from your spouse to vote for Bernie. What do you do – vote your heart and mind, or support your spouse?
What is the profession doing to improve the quality of public polls?
Years ago, Philip Meyer, who virtually invented journalism based on polling, created a list of criteria – such as the margin of sampling error, the sponsor of the study, the number of interviews – that major newspapers used to provide information to their readers so they could judge the credibility of a poll. Today, polling is much more complicated.
The American Association for Public Opinion Research (AAPOR) has created an election resources page on its website. This page is designed to help journalists, the general public, decision-makers, and – hopefully – purchasers of polling services learn about the contributions polling makes to the election process, the limitations of the polling process, the importance of quality, and how to evaluate it.
This information helps journalists and members of the attentive public evaluate polling information more realistically and should provide important ammunition for those who argue for placing polling quality above cost in news organizations’ decisions about the purchase of public polling.
Another strategy is for news organizations to place less emphasis on prediction of election results and more emphasis on understanding the real concerns of Americans and their positions on issues. The Gallup Organization, which is staying out of the election “horse race” this year and focusing on shedding light on the dynamics of public opinion, has taken an important step in that direction.