The Problem With Polling

I have gotten a lot of questions about political polls lately and I have found myself having the same conversation over and over about the reliability of polling in general. Those conversations have centered around the concept of Total Survey Error (TSE), or all the different ways that a survey can go wrong. So, I thought I would take a break from what I should be doing and write about the five basic forms of TSE.

Tallying the results of the presidential election on Nov. 2, 1948. (Photo: CBS Photo Archive/Getty Images)

Coverage Error – This form of error occurs when your sampling frame (i.e. your list of people to potentially poll) does not accurately represent the population you are measuring. For example, if you conduct a poll by phone and your list only includes landlines, you are leaving out everyone who does not have a landline. The famous "Dewey Defeats Truman" headline is an example that fits this kind of TSE. Pollsters that year (1948) relied on surveying people with telephones, who were typically far wealthier and more likely to vote for Dewey than for Truman, the eventual winner. This coverage issue also contributed to nonresponse bias (see below).
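A quick simulation makes the landline problem concrete. The electorate size, landline share, and support rates below are all invented purely to illustrate how a bad sampling frame biases an estimate:

```python
import random

random.seed(42)

# Hypothetical electorate: 10,000 voters. Landline owners (40% of voters)
# support candidate A at 60%; everyone else at 35%. All numbers invented.
population = []
for _ in range(10_000):
    has_landline = random.random() < 0.4
    support_rate = 0.60 if has_landline else 0.35
    population.append({"landline": has_landline,
                       "supports_a": random.random() < support_rate})

true_share = sum(v["supports_a"] for v in population) / len(population)

# A landline-only frame can reach only 40% of the electorate.
frame = [v for v in population if v["landline"]]
sample = random.sample(frame, 500)
polled_share = sum(v["supports_a"] for v in sample) / len(sample)

# The landline-only poll overstates candidate A's support, no matter
# how carefully the 500 respondents are drawn from the flawed frame.
print(f"true: {true_share:.2f}, polled: {polled_share:.2f}")
```

Note that a bigger sample doesn't help here: the bias comes from who is in the frame, not from sampling variability.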

Specification Error – This error occurs when what is being measured isn’t clear. Typically, this concern is reserved for psychological constructs, which are oftentimes multidimensional. A political example of this would be ideology. We know that most people’s political beliefs lie along a spectrum, and those beliefs may be nuanced and context dependent. The Pew Research Center has an excellent example of measuring ideology as a construct. Fortunately, there is an easy way around this for political polls: ask respondents specifically which candidate(s) they plan to vote for.

Nonresponse Error – This form of bias has to do with who responded to the poll and, relatedly, who didn’t. This can be unit nonresponse (i.e. someone refuses to participate at all) or item nonresponse (i.e. someone declines to answer a specific question). Again using a phone poll example: if you call from a list of all numbers (cell phones and landlines), people with caller ID are less likely to pick up – and almost all cell phones have caller ID built in. This means that people with landlines – typically older people – are more likely to answer; younger people, less so.

Measurement Error – This form of error is probably the most well studied in the world of survey methodology, because it has so many parts to it. The order of the questions, the tone of the interviewer’s voice, the interviewer’s appearance, or the wording of the questions themselves may unintentionally cause someone to answer a certain way. For example, I have seen many projections based solely on party identification, which does not account for people who plan on voting for one party in every race except one (i.e. “ticket splitters“). I imagine there will be a large number of people who cast their votes for all but one member of their preferred party this election. If you want to see an example of how not to predict an outcome, I humbly submit this one as an example of both specification error and measurement error.

Processing Error – Processing error covers all the ways that things can go wrong with the data AFTER it is collected. Some forms of this occur in coding, editing, and weighting. The weighting piece is especially tricky, because it adjusts results based on known population parameters. For example, if 80% of a poll’s respondents were female, we would need to increase the weights of the male respondents, because that population parameter is known to be roughly 50%. Now imagine that we are also accounting for race, income, education level, and age; things get complicated in a hurry. One strategy to handle this is an iterative approach known as “raking.”
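The weighting step can be sketched with a toy version of raking (iterative proportional fitting). The sample below is 80% female, as in the example above; the respondent counts and population targets are invented for illustration:

```python
# Toy raking sketch: adjust weights so the sample margins match
# known population margins. Respondent mix and targets are made up.
respondents = (
    [{"gender": "F", "age": "18-44"}] * 5
    + [{"gender": "F", "age": "45+"}] * 3
    + [{"gender": "M", "age": "18-44"}] * 1
    + [{"gender": "M", "age": "45+"}] * 1
)

targets = {
    "gender": {"F": 0.5, "M": 0.5},   # population is roughly 50/50
    "age": {"18-44": 0.6, "45+": 0.4},
}

weights = [1.0] * len(respondents)

for _ in range(50):  # cycle through the variables until margins converge
    for var, dist in targets.items():
        total = sum(weights)
        for category, share in dist.items():
            idx = [i for i, r in enumerate(respondents) if r[var] == category]
            current = sum(weights[i] for i in idx)
            factor = (share * total) / current  # scale this category's weights
            for i in idx:
                weights[i] *= factor

# After raking, the weighted gender margin is ~50/50 even though
# the raw sample was 80% female.
f_share = sum(w for w, r in zip(weights, respondents)
              if r["gender"] == "F") / sum(weights)
print(round(f_share, 3))
```

Each pass fixes one variable's margin and slightly disturbs the others, which is why raking must iterate; with every cell of the table populated, it converges quickly.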

Supporters of presidential candidate Hillary Clinton watch televised coverage of the U.S. presidential election at Comet Tavern in the Capitol Hill neighborhood of Seattle on Nov. 8. (Photo by Jason Redmond/AFP/Getty Images)

So, what does all this mean? There are lots of ways things can go wrong, and good surveys are incredibly expensive. They take time to construct and a shocking amount of money and manpower to field. Also, many political polls are commissioned to drive media viewership, which means they are often more concerned with expediency than accuracy. That alone should be enough to give you pause.

The 2016 election gave polling – and to a certain extent, statistics – a bad name. However, people don’t realize that the national polls (i.e. the popular vote) were right on the money. The popular vote is one model. The electoral college tally is 51 models (all 50 states plus DC), which may require different strategies for collection and analysis, depending on the state. That leaves lots of room for mistakes. If we want to predict who will likely win the popular vote, the statistical evidence that Biden will win is pretty solid. Does that mean it is a certainty? Objectively, no. Of course, the election is decided by the electoral college, which again is 51 separate models. Some of those states are pretty clear. Others, not so much.

“A margin of error of plus or minus 3 percentage points at the 95 percent confidence level means that if we fielded the same survey 100 times, we would expect the result to be within 3 percentage points of the true population value 95 of those times.”

5 key things to know about the margin of error in election polls
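That familiar ±3-point figure follows directly from the standard formula for a proportion's margin of error, z·√(p(1−p)/n). A sketch, using p = 0.5 as the conservative worst case and a sample of roughly 1,000 respondents (typical for national polls):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a simple random sample proportion.

    z = 1.96 corresponds to the 95% confidence level;
    p = 0.5 maximizes p*(1-p), giving the most conservative bound.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of ~1,067 gives the familiar +/- 3 points at 95% confidence.
print(round(100 * margin_of_error(1067), 1))  # prints 3.0
```

Note the square root: halving the margin of error requires quadrupling the sample size, which is one reason precise polls are so expensive. And this only captures sampling error; none of the other TSE components above are reflected in the reported margin.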

Finally, it appears that we are headed for record levels of turnout due in part to enthusiasm, mail-in voting, COVID-19, etc. The unprecedented nature of these factors makes polling even more fraught with potential error. I would encourage anyone following the polls closely to lower their expectations considerably. That doesn’t mean the polls are wrong, but they should be viewed with a healthy amount of circumspection. With that being said, if you are like me and cannot help yourself, look at Nate Cohn’s and Nate Silver’s work. It is typically the most robust and transparent. Not surprisingly, theirs are often the most accurate predictions.

Tl;dr – Ignore the polls. We won’t really know much of anything until we see actual vote totals being counted. The rest is just theater.

Author: Scott Atchison

I am a Research Project Manager and Data Analyst for the Center for Pedagogy Arts & Design at Penn State. My research interests within my cognate focus on Open Educational Resources (OER), online learning, and instructional design.
