Over the past year, political professionals have been picking over the pre-election polling data to figure out whether the polls failed to predict Donald Trump’s upset victory over Hillary Clinton.
In many people’s minds, the polls were flat wrong in 2016. Actually, it’s more complicated than that.
When the American Association for Public Opinion Research asked a blue-ribbon panel of polling specialists to critique the industry’s performance in 2016, they found some important oversights and misfires. But the panel also emphasized that, contrary to popular belief, the national polls weren’t that far off.
After interviewing almost a dozen pollsters and experts, here are the key points we learned about how the election influenced polling, and what to look for in the future.
National polls vs. state polls
The panel concluded that the national polls, which had Clinton up by about 3 percentage points, were "basically correct," as she won the popular vote by 2 percentage points.
The problem is that the popular vote is not the one that determines the winner -- the Electoral College does. And Trump won the presidency by narrowly carrying the battleground states of Michigan, Pennsylvania and Wisconsin, where the polling was less accurate.
"Polls showed Hillary Clinton leading, if narrowly, in Pennsylvania, Michigan and Wisconsin, which had voted Democratic for president six elections running," the panel’s report said. "Those leads fed predictions that the Democratic ‘blue wall’ would hold. Come Election Day, however, Trump edged out victories in all three."
The association pointed to several issues that led polls to underestimate support for Trump, such as voters who made up their minds after the final surveys were conducted and so were never captured in the polling.
The idea that polls in individual states were more problematic than national polls in 2016 is now widely accepted.
"The most obvious lesson of 2016 is that the national polls did well," said Steven S. Smith, a political scientist and polling specialist at Washington University in St. Louis. "The problem, as usual, was in the smaller electorates -- states and districts -- where there are small samples, infrequent surveys, and variation in turnout from election to election."
Traditional polls vs. nontraditional polls
Earlier this year, Jon A. Krosnick, a professor of communication and political science at Stanford University, analyzed every publicly available poll in the closing days of the campaign for a paper he presented at the American Association for Public Opinion Research national conference.
Krosnick found that polls that used the traditional method -- having live persons call telephone numbers chosen at random, and multiple times if necessary -- worked well in 2016, even for polls on the state level.
This included polls run by the major networks and newspapers, as well as Quinnipiac University, Marist College and several others, Krosnick said. Nontraditional polls, such as those using automated recorded questions or online surveys, did less well.
One polling outlet that uses traditional methods is the Arkansas Poll at the University of Arkansas. Its researchers are happy with using the traditional phone method, even if it is more expensive.
"I’m not doing anything different" as a result of the 2016 election "other than continuing to incrementally boost the proportion of interviews conducted by folks reached by cell phone -- we’re at 40 percent," said Janine A. Parry, the poll’s director. "These are practices that have been widely used and have worked well."
Weighting for education
One of the most troublesome problems in 2016 was the decision by most pollsters not to "weight" their samples by respondents’ educational attainment.
"Weighting" means adjusting the results so that the demographics of the sample approximate the demographics of the state being tested or, for a national survey, the country as a whole. This helps straighten out the results of a poll that happened to have an unrepresentative sample. Weighting is common for some basic factors such as race and ethnicity. But it was not commonly done for education.
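The mechanics of weighting can be sketched in a few lines. The sketch below uses entirely invented numbers (the group shares, support levels and population benchmarks are illustrative, not real survey data), but it shows the basic move: scale each demographic group so its share of the sample matches its share of the population, then recompute the topline.

```python
# A minimal sketch of demographic weighting (post-stratification).
# All figures below are hypothetical, chosen only to illustrate the idea.

# Raw sample: share of respondents in each education group,
# and each group's support for candidate A
sample_share = {"college_grad": 0.50, "non_college": 0.50}
support_a    = {"college_grad": 0.56, "non_college": 0.44}

# Population benchmarks (in practice, from census data)
population_share = {"college_grad": 0.35, "non_college": 0.65}

# Weight each group so its effective sample share matches the population
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Topline support for candidate A, before and after weighting
unweighted = sum(sample_share[g] * support_a[g] for g in sample_share)
weighted   = sum(sample_share[g] * weights[g] * support_a[g] for g in sample_share)

print(round(unweighted, 3))  # 0.5
print(round(weighted, 3))    # 0.482
```

In this toy example, college graduates are overrepresented in the sample, so the unweighted poll overstates support for the candidate they favor; weighting by education pulls the estimate back down. That is, in miniature, the correction the AAPOR report said many state polls failed to make.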
One of the signature aspects of the 2016 presidential race is the degree to which voters with lower educational attainment voted for Trump and voters with higher educational attainment voted for Clinton. But "many polls, especially at the state level, did not adjust their weights to correct for the over-representation of college graduates in their surveys, and the result was over-estimation of support for Clinton," the association’s report found.
A case in point was the University of New Hampshire Survey Center’s 2016 polling, which its director called "the worst we ever had."
"We did not weight by the level of education," said director Andrew E. Smith. "It had never been an issue."
After the fact, Smith applied the proper weighting, and "everything snapped back to being accurate," he said. "So, going forward, we will have to include education in our weighting."
Mark Blumenthal, the head of election polling with SurveyMonkey, agreed that the industry has taken the educational weighting issue seriously. He said that his company is digging even deeper.
"We’ve always weighted by education, but our review of nearly a million interviews we conducted last year demonstrated that we should have been even more granular in our approach," said Blumenthal, who also co-founded the website now known as HuffPost Pollster. "In 2016, there was an abnormal gap between the vote preferences of those with bachelor’s degrees and those with postgraduate degrees. Had we broken out these two different groups of ‘college graduates,’ our estimates would have been even closer."
Words of wisdom
The experts offered some parting advice for reading polls after 2016.
• Don’t cherry-pick the results you prefer. "I’m using the same strategy today that I’ve used since I started in this business," said Amy Walter, national editor at the nonpartisan Cook Political Report. "Take the highest and lowest polls, throw them out, and the result will be somewhere in the middle."
• When you have a series of polls over time, "pay attention to the trend, not the margin," Walter said. In other words, if a candidate is getting stronger or weaker over time in a series of polls, that’s a pattern worth watching.
• "If you see a poll from a pollster you’ve never heard of, be skeptical," Walter said.
• Consider various voter turnout scenarios. This became especially important in the recent Alabama Senate race between Republican Roy Moore and Democrat Doug Jones. With Moore an unusually polarizing figure and accused of sexual misconduct, and with Jones running as a Democrat in a state that hadn’t elected one statewide in years, the dynamics of who might turn up at the polls were unclear right up through Election Day.
"The 2018 midterms are still far away," said Margie Omero, a Democratic pollster who serves as a partner at the firm GBA Strategies. Given that, it’s important to consider "different assumptions about the composition and size of the electorate."
• "Pay attention to undecided voters," Walter said. "Undecideds almost always break toward the challenger. It happened in 2016 to Trump. In midterms they break away from the party holding the White House." In the recent Virginia gubernatorial election, she noted, Republican Ed Gillespie was at 44 percent. He ultimately took 45 percent, with Democrat Ralph Northam taking most of the undecided voters.
• Don’t just look at "horse-race" polls -- the polls showing head-to-head matchups between candidates. "Campaign pollsters have long argued there's far more to assessing a race than just the ballot question," Omero said. "Awareness of the candidates, engagement and enthusiasm, candidate image, and external news events can all fluctuate and change a race." Focus groups, she added, can also be effective in gauging voters' feelings -- especially in less populated areas, where they are not done as frequently.
The professional forecasters said they are keeping their alarm systems on alert in today’s confusing polling universe.
"I’ve always been skeptical of surveys (taken by machines rather than live callers), and I’m not sure that Internet polls at the state and local level have been perfected," said Jennifer Duffy, senior editor at the Cook Political Report. "Some public pollsters have figured out how to do it. Others haven’t. I still have great faith in polls conducted for campaigns, campaign committees and super PACs, but like most of these pollsters will admit, their job gets harder by the cycle."
American Association for Public Opinion Research, "An Evaluation of 2016 Election Polls in the U.S.," 2017
American Association for Public Opinion Research, national conference, May 18-21, 2017
New York Times, "After a Tough 2016, Many Pollsters Haven’t Changed Anything," Nov. 6, 2017
Email interview with Karlyn Bowman, polling analyst at the American Enterprise Institute
Email interview with Kyle Kondik, managing editor of Sabato's Crystal Ball at the University of Virginia Center for Politics
Email interview with Geoffrey Skelley, associate editor of Sabato's Crystal Ball at the University of Virginia Center for Politics
Email interview with Jennifer Duffy, senior editor at the Cook Political Report
Email interview with Amy Walter, national editor at the Cook Political Report
Email interview with Margie Omero, partner at the firm GBA Strategies
Email interview with Mark Blumenthal, the head of election polling with SurveyMonkey and co-founder of Pollster.com
Email interview with Steven S. Smith, political scientist at Washington University in St. Louis
Email interview with Janine A. Parry, Arkansas Poll Director at the University of Arkansas
Interview with Andrew E. Smith, director of the University of New Hampshire Survey Center
Interview with Jon A. Krosnick, professor of communication and political science at Stanford University