In the Republican presidential nomination contest, polls have been the outsiders’ best friend. Billionaire Donald Trump and physician Ben Carson gained early momentum and critical debate exposure thanks almost entirely to their high rankings in public opinion surveys.
What those poll numbers say about Trump and Carson’s chances in the Iowa caucuses and beyond is far murkier. Iowa conservative radio host Steve Deace said the polls are meaningless.
"The record of polling in Iowa is a joke," Deace said on MSNBC’s News Nation on Oct. 8, 2015. "You’ve got the pollsters now coming out in numerous articles in recent days saying ‘don’t actually believe our own data.’ You have Gallup yesterday made the announcement, they’re pulling out of polling in this primary. They may not even poll the general election because they don’t trust their methodology."
A lot of people wonder about the accuracy of the polls, and it would be a big deal if the pollsters themselves were just as skeptical. We decided to drill down on Deace’s points that pollsters are warning people not to trust their data, and that Gallup stopped its primary polling because it doesn’t trust its methodology.
A cluster of articles
Deace told us he relied on several stories that came out recently. Bloomberg wrote about systematic flaws in the data. High on the list: the rise of cell-phone-only households and a growing percentage of people who refuse to answer pollsters’ questions.
The best public opinion operations employ banks of callers who are fed randomly picked phone numbers. Cell phones make it tougher to determine where the interviewee actually lives, which matters a lot for state-level polls. Plus, federal rules about calling cell phones mean you must hand-dial each one, rather than use a faster automated dialer. That’s a significant cost factor.
And then there’s the response rate. When more people hang up, that raises questions about whether the people who do participate are representative of the public at large.
"As response rates have declined, the need to rely on risky mathematical maneuvers has increased," the Bloomberg article said.
There was also a report from Politico that carried a warning from pollsters. They said they shouldn’t be the ones "to winnow the GOP field."
"Don’t trust polls to detect often-tiny grades of opinion in a giant field," the Politico writer summarized.
The point was, news organizations sponsoring debates are counting on polls to decide who to bring on the stage and who to cut. With more than a dozen Republican candidates, many of them clustered in the single digits, the differences between any two are nearly certain to be smaller than the polls’ well-publicized margin of error.
If just 1 percentage point separates Candidate A from Candidate B, and the margin of error is 3.5 percentage points, you have a problem.
But it’s not a problem that stems from the polling itself. As the Politico article noted, it’s a problem with how debate organizers are using the polls.
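The arithmetic behind that problem can be sketched quickly. The following is a minimal illustration using the standard formula for a proportion's 95 percent margin of error; the sample size and candidate percentages are hypothetical, chosen to mirror a crowded single-digit field:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll of 800 likely voters:
# Candidate A at 6 percent, Candidate B at 5 percent.
n = 800
a, b = 0.06, 0.05

moe_a = margin_of_error(a, n)  # roughly 1.6 points
moe_b = margin_of_error(b, n)  # roughly 1.5 points

# The 1-point gap between A and B is smaller than either candidate's
# margin of error, so the poll cannot say who is really ahead.
gap = a - b
print(gap < moe_a and gap < moe_b)
```

A sample of about 784 respondents yields the 3.5-point margin quoted above (at 50 percent support, where the margin is widest), which is why head-to-head differences of a point or two among low-polling candidates fall well inside the noise.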
What’s up with Gallup?
So those two articles are the grounds for the first part of Deace’s statement. Then there was his comment about Gallup. The bombshell in the polling world, as these things go, was Gallup’s decision to pull back from tracking the presidential primaries. This came out in another Politico article headlined, "Gallup gives up the horse race."
"After a bruising 2012 cycle, in which its polls were farther off than most of its competitors, Gallup told Politico it isn't planning any polls for the presidential primary horse race this cycle," Politico wrote. "And, even following an internal probe into what went wrong last time around, Gallup won't commit to tracking the general election next year."
Gallup is one of the granddaddies of polling and had done a lot of polls in past elections. It was no small thing for the primaries to lose such a prominent player.
Frank Newport, Gallup’s editor-in-chief, told Politico that the polling firm thought it could have more impact by asking people what they think about issues, rather than who they want to see in the White House.
While the article drew a connection between Gallup’s move and its polling methods, Newport himself didn’t say that. And afterward, he sent around a statement rebutting the story’s broader implication.
"Unlike some critics, we at Gallup remain very strong in our belief in the accuracy of polling today, even with the new challenges that are in front of the industry," Newport said. "Our post-mortem work in 2012/2013 and our experimentation in the 2014 midterms leave us with little doubt that polling, including our own, can be accurate in 2016."
Taking Newport’s words at face value, worries about Gallup’s methodology did not drive its decision.
Deace told us he had not seen Newport’s statement.
Charles Franklin, co-founder of Pollster.com and director of the Marquette University Law School Poll, told us he thinks this was more a business decision, but the picture is complicated. Gallup does a lot of contract work for businesses, and in that context, political polling comes with a risk.
"You can be judged by how you do on the presidential vote in November," Franklin said. "If you have troubles with that, it creates problems for the company."
Franklin said he thinks Gallup does as good a job as anybody, but he can see why the firm might decide it makes more sense to pull back from election tracking.
Polling in changing times
There’s no question that it has become trickier and more expensive to be accurate. It’s also clear that the better financed pollsters adapt. Statistics researchers Ole Forsberg and Mark Payton at Oklahoma State University focused on the cell phone issue. Forsberg told us the pollsters have come up with a variety of approaches. He said while we won’t know which ones work best until this election is over, he’s optimistic.
"The current polling methods are most likely more accurate than in 2012," Forsberg said. "They may be more accurate than in 2008. They are most likely worse than in 2004."
Polling in 2004 was more reliable because landlines were more common back then, and landlines are inherently easier to work with than cell phones.
As for the hurdle of rising refusal rates, Franklin said the survey firms, including his, spend more time getting full interviews.
"We have to call many more numbers to get a big enough sample size, but there’s plenty of evidence that we’re reaching enough," he said.
Historical trends collected by the National Council on Public Polls, a trade association of pollsters, track the gap between polling and actual voting results in presidential years. In 2012, the average error was 1.46 percentage points. That was slightly higher than in the previous three elections in 2008, 2004 and 2000, but less than the error in 1996 when it was 2.1 percentage points.
Jon Krosnick at Stanford University has assessed polls for many years. Krosnick distinguishes between polling organizations that dot their I’s and cross their T’s, and those that don’t.
"Our evaluations of survey accuracy indicate that surveys done using best practices, which is expensive, continue to be remarkably accurate," Krosnick said. "If you look at the average of the final pre-election polls done by the neutral news organizations that used high quality methods just before the 2008 and 2012 presidential elections, they predicted the national popular vote within one percentage point."
Krosnick said the danger lies in the proliferation of firms that do polling on the cheap. Those results, he said, could give the impression that polling in general is less accurate.
All that said, Cliff Zukin, a Rutgers University political scientist and former president of the American Association for Public Opinion Research, warns that no one should underestimate the challenges polling faces today. Zukin said there’s the huge issue of figuring out who will actually vote, and all of these problems get worse at the state level, where much of the attention is focused during the primaries.
"We are less sure how to conduct good survey research now than we were four years ago, and much less than eight years ago," Zukin wrote in an op-ed.
A particularly subtle form of error seems to have crept into the picture. Nate Silver, a statistician who rose to fame for his accurate election predictions and is now editor-in-chief of FiveThirtyEight, has written about the "herding" phenomenon: polling firms shying away from publishing results that don’t track with other polls.
"Pollsters have admitted to suppressing polls they deem to be outliers but that would have turned out to be just right," Silver wrote in 538. "In the U.S. last year, at least two polling firms declined to publish surveys showing a tight Senate race in Virginia, which in the end was decided by only 18,000 votes in what was almost a historic upset."
Early polls in a class by themselves
From what we found, most research into polling accuracy looks at surveys done in the final weeks of a campaign. Right now, we are nearly four months before the first ballot is cast in Iowa. Deace said he thinks the polls are a joke because he doesn’t think they reflect who will carry the caucuses.
Franklin at Marquette University told us for pollsters, that’s the wrong question.
"This is very much like midway through the second quarter of a football game," he said. "It is not telling you with great certainty who will win the game. But it is telling you who's playing well and who isn't. And we still have the second half to play."
If you don’t expect the score at halftime to tell the whole story, Franklin said, you shouldn’t expect any more from polls at this stage of the primary process.
Deace said pollsters are warning people not to trust their results and that Gallup doesn’t trust its methodology. The articles he cited actually were more specific than he suggested. Pollsters did say that surveys can’t be used with surgical precision to separate the wheat from the chaff in the crowded Republican field. But that’s in response to broadcasters who have counted on polls to decide who should appear in debates.
As for Gallup, a news article implied that concerns over methodology drove its decision to stop tracking the primaries, but the editor-in-chief at Gallup didn’t say that and in fact, he expressed confidence in the firm’s techniques.
It’s costing more and more to do polling well, and state-level polls, which are the ones Deace had in mind, face the biggest challenges. But overall, Deace exaggerated what pollsters have said.
We rate this claim Half True.
Our Sources
MSNBC, News Nation, Oct. 8, 2015
Steve Deace, tweet, Oct. 7, 2015
Politico, Gallup gives up the horse race, Oct. 7, 2015
National Council on Public Polls, Election reports
538.com, Gallup Gave Up. Here’s Why That Sucks, Oct. 7, 2015
Bloomberg, Flaws in Polling Data Exposed as U.S. Campaign Season Heats Up, Sept. 29, 2015
Politico, Pollsters: Don't trust us to winnow GOP field, Oct. 5, 2015
Oxford University Press blog, Improving survey methodology: a Q&A with Lonna Atkeson, Aug. 9, 2014
Fordham University, Which Pre-Election Poll was Most Accurate?, Nov. 6, 2012
American Statistical Association, Analysis of Battleground State Presidential Polling Performances, 2004–2012, June 2015
Columbia Journalism Review, How polling data can be dangerous for political journalists, Nov. 6, 2014
538.com, How FiveThirtyEight Calculates Pollster Ratings, Sept. 25, 2014
New York Times, What’s the Matter With Polling?, June 20, 2015
538.com, Polling Is Getting Harder, But It’s A Vital Check On Power, June 3, 2015
538.com, The Polls Were Skewed Toward Democrats, Nov. 5, 2014
Emailed statement, Frank Newport, editor-in-chief, Gallup, Oct. 9, 2015
Email interview, Ole J. Forsberg, visiting professor of statistics, Oklahoma State University, Oct. 8, 2015
Interview, Charles Franklin, director, Marquette University Law School Poll, Oct. 9, 2015
Email interview, Jon Krosnick, professor in humanities and social science, Stanford University, Oct. 9, 2015