
Inconclusive studies of 2020 pre-election polling problems could be good for the industry: Sabato's Crystal Ball


KEY POINTS OF THIS ARTICLE

– After another presidential election in which pre-election polls often underestimated support for Donald Trump, the polling industry is once again trying to figure out what went wrong.

– A task force from the American Association for Public Opinion Research identified a lack of weighting by education as a key problem in its post-2016 assessment, but fixing that did not prevent problems with the 2020 polls.

– That the AAPOR task force has not identified a specific problem with the 2020 polls may be a good thing for pollsters.

Evaluation of pre-election polls after 2020

At the 2021 virtual conference of the American Association for Public Opinion Research (AAPOR), a task force presented the findings of its official assessment of the 2020 pre-election polls.[1] The findings confirmed what general suspicion and initial analyses had suggested: the 2020 polls collectively overstated support for Democrats across the board and produced the largest polling errors in "at least 20 years."[2] However, the task force could not determine from the available data what caused the error, only that it was "consistent with systematic nonresponse."

The task force's conclusions, or lack thereof, are disappointing on one level. It is somewhat disheartening that a stellar group of hardworking industry researchers could not provide concrete answers about what went wrong. At the same time, though, that outcome could be good for the broader industry in two ways: it could help reset expectations for pre-election polls, because there is no single identifiable "fix" to apply, and it is likely to drive innovation across methodologies as pollsters work to identify and address the underlying problems.

Polling error in 2016 vs. 2020, and how not knowing what went wrong can be good for expectations

After the 2016 pre-election polls underestimated support for Donald Trump, a similar AAPOR task force went to work in early 2017 to investigate why. That task force's findings pointed to two specific sources of error that skewed the polls against Trump. First, the 2016 pre-election polls had unusually high proportions of undecided voters, a majority of whom ended up voting for Trump. Second, the worst-performing polls tended not to weight their samples properly to include enough voters without a four-year college degree, a group that also leaned heavily toward Trump.
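To make that mechanism concrete, here is a minimal sketch in Python, using entirely hypothetical sample shares and support figures (not any pollster's actual data or method), of how under-representing voters without a college degree can depress the estimate for the candidate they favor, and how weighting the sample back to the electorate's education mix shifts it:

    # A minimal sketch with made-up numbers: non-college voters are assumed to be
    # 60% of the electorate but only 45% of the raw sample, and to favor the
    # hypothetical Candidate A while college graduates favor Candidate B.
    groups = {
        "non_college": {"sample_share": 0.45, "population_share": 0.60, "support_a": 0.55},
        "college":     {"sample_share": 0.55, "population_share": 0.40, "support_a": 0.42},
    }

    # Unweighted estimate: average support using each group's share of the sample.
    unweighted = sum(g["sample_share"] * g["support_a"] for g in groups.values())

    # Education-weighted estimate: rescale each group to its share of the electorate.
    weighted = sum(g["population_share"] * g["support_a"] for g in groups.values())

    print(f"Unweighted support for Candidate A:         {unweighted:.1%}")  # roughly 48%
    print(f"Education-weighted support for Candidate A: {weighted:.1%}")    # roughly 50%

In this toy example, the unweighted figure understates the hypothetical Candidate A by about two points, the same direction of error the task force described for 2016 polls that did not weight by education.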

In the run-up to the 2020 election, there were far fewer undecided voters in the polls, leaving education weighting as the main point in discussions of poll accuracy. While pollsters often cautioned that weighting by education did not mean 2020 would be error-free, that caution usually came after a statement about the corrections and adjustments made to address the specific problems identified after the 2016 election. Fairly or not, the perception emerged that by correcting the education weighting deficiency, pollsters had solved the problem (despite some warnings otherwise). The 2020 task force poured cold water on that theory, noting that the problems identified in 2016 had mostly been addressed and could be ruled out as the main drivers of polling error in 2020.

The silver lining of the lack of concrete answers is that a "fix this one thing and the polls will be fine" narrative cannot take hold in the wake of the 2020 polling errors. This time, instead of focusing attention on how to make polls perfectly predict election results, as the education weighting finding inadvertently did, the 2020 task force's report looks set to focus attention on the unknown sources of uncertainty that exist in polls. If that is used to foster better communication about, and understanding of, uncertainty, it will be a positive result.

No more “gold standard” and innovation opportunities

It also follows that, because the AAPOR task force did not identify easily corrected flaws in the pre-election polls, individual pollsters must innovate and solve problems on their own. The findings do, however, point to the areas that need innovation: how we contact people and get them to respond to polls, and how we determine who the "likely voters" are that we want in our polls.

It is becoming increasingly clear that the way a poll contacts people, once a key heuristic for assessing poll quality, no longer tells us what it once did about accuracy. The task force's report on the 2020 primary pre-election polls found that whether a poll was conducted online or by phone had no bearing on accuracy, and the presentation of the task force's new report indicated the same finding. After its own analysis showed the same thing, FiveThirtyEight retired its designation of live-caller surveys of landlines and cell phones as the "gold standard." The field abandoning its attachment to one mode as more accurate than the others will allow other methodologies to become more prominent and encourage further experimentation with new approaches.

The second key place we need to innovate, or at least focus more energy, is determining who is a "likely voter." Based on the limited information it had available, the task force seemed to somewhat dismiss likely voter models as a reason the polls failed in 2020. That came with a big caveat: the task force had no information on the likely voter models behind most polls. That is not surprising; most pollsters treat likely voter selection or modeling as their proprietary "secret sauce" and do not disclose it. Without more information to analyze, there is no way the task force could actually rule out likely voter models as part of the bias. We need to raise awareness that, unless details are provided, anything labeled "likely voters" is essentially a pollster's best guess at what the electorate will look like, nothing more.

An instructive illustration of the importance of likely voter selection comes from a 2016 New York Times article in which Nate Cohn had four different sets of pollsters weight the same raw data and make their own likely voter determinations; the results ranged from Clinton +4 to Trump +1. That exercise demonstrated quite clearly that likely voter models (made by reasonable, intelligent people!) can produce significant differences in survey results. Of course, this has always been true, but likely voter models matter far more when elections are won or lost on very narrow margins in a few states. The best thing AAPOR can do is continue to promote transparency in methods, including likely voter models.
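For illustration only, here is a minimal Python sketch, using a tiny invented respondent list (the screens and numbers are hypothetical and exaggerated by the small sample, not a reconstruction of the Times exercise), of how two defensible likely voter screens applied to the same interviews can produce different margins:

    # Hypothetical respondents: (candidate preference, says "definitely voting",
    # voted in the last election). All values are invented for illustration.
    respondents = [
        ("A", True,  False),
        ("A", True,  False),
        ("A", True,  False),
        ("A", True,  True),
        ("A", False, False),
        ("B", True,  True),
        ("B", True,  True),
        ("B", True,  True),
        ("B", False, True),
    ]

    def margin(sample):
        """Candidate A's margin over B, in percentage points."""
        a = sum(1 for pref, *_ in sample if pref == "A")
        b = sum(1 for pref, *_ in sample if pref == "B")
        return 100 * (a - b) / len(sample)

    # Screen 1: keep anyone who says they will definitely vote.
    self_report = [r for r in respondents if r[1]]
    # Screen 2: also require a record of voting in the last election.
    with_history = [r for r in respondents if r[1] and r[2]]

    print(f"All respondents:         A margin {margin(respondents):+.0f} points")
    print(f"Stated-intent screen:    A margin {margin(self_report):+.0f} points")
    print(f"Intent + history screen: A margin {margin(with_history):+.0f} points")

The point is not the specific numbers but that the definition of a "likely voter" is itself a modeling choice that can move the topline result, which is why transparency about those choices matters.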

Looking towards 2024

There will still be plenty of presidential horse race polls in 2024, and before that, polls of the contests to be held in 2021, 2022, and 2023. The demand for polls in the early 2021 Georgia Senate runoff elections illustrated that polls are still a desirable part of campaign coverage. Polls also remain the best way to find out what the general public thinks.

However, when 2024 rolls around, it seems pollsters will not be able to say, "we fixed X, as the AAPOR report said we should, to account for what happened last time." The most likely scenario, in the absence of any kind of community consensus, is that individual pollsters will modify their processes here and there, and those adjustments will differ from organization to organization. Some will be at the sampling level, working harder to make sure that people who distrust polls are somehow recruited into them. Some will be in other parts of the process, including likely voter models. The AAPOR task force doesn't tell us how to do it, but it leaves the field open for innovation and learning, which makes this a difficult but exciting time to be a pollster.

Natalie Jackson, Ph.D., is Director of Research at the Public Religion Research Institute (PRRI). Previously, she was senior polls editor and led election forecasting efforts at the Huffington Post from 2014 to 2017. The opinions expressed here are her own and do not represent any employer, past or present.

Footnotes

[1] At the time of writing, the task force's written report had not been published. The information in this article about the report is based solely on the presentation at the conference. Any misinterpretation is the sole responsibility of the author.

[2] In the absence of a public report, citations are taken from the conference presentation slides and the recording of the presentation, last accessed June 21, 2021.
