David W. Moore & George F. Bishop: 2010 Top Ten Dubious Polling Awards

Today's post is from David W. Moore and George F. Bishop. Moore is a Senior Fellow with the Carsey Institute at the University of New Hampshire. He is a former Vice President of the Gallup Organization and was a senior editor with the Gallup Poll for thirteen years. He is author of The Opinion Makers: An Insider Exposes the Truth Behind the Polls.  Bishop is Professor of Political Science and Director of the Graduate Certificate Program in Public Opinion & Survey Research at the University of Cincinnati. His most recent book is The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls.

Every year, poll watchers are confronted with poll results and commentary that defy either logic or science, often raising questions about the very utility of polls. Typically, the problems are not with the method of conducting polls, but with the pollsters themselves -- as they focus on what they believe is entertaining and appealing to the audience rather than an accurate reflection of public opinion. In the process, pollsters can manipulate public opinion or write commentary that makes a mockery of what the public is really thinking.

With this article, veteran pollsters, authors and political scientists George F. Bishop and David W. Moore issue their Second Annual Top Ten "Dubious Polling" Awards. These awards are intended to mark for posterity some of the most risible and outrageous pronouncements by polling organizations during the previous year.

Each award is ranked, from a low of one set of crossed fingers to a high of five sets. Pollsters generally know in their hearts when all is not right with their polls, but they (figuratively) cross their fingers and hope that no one notices anything amiss. The five crossed-fingers icon is the ultimate in wishful thinking, perhaps the equivalent of football's "Hail Mary pass" for the truly untrustworthy poll.

The "FUZZY MATH" Award


WINNER: Fox News Network, for its creative presentation of polling numbers, showing what 120 percent of the public was thinking -- and this did not include an additional 15 percent who weren't thinking! Who knew the public could be a third larger than itself?

BACKGROUND: Only two weeks after declaring a "zero tolerance for on-screen errors," Fox News had to scrap this draconian policy and instead argue that one of its charts was actually an accurate representation of a Rasmussen poll.

 Fox News Chart of Rasmussen Poll

The chart showed 59 percent of Americans saying it was "somewhat likely" that some scientists had falsified their research, 35 percent saying "very likely" and 26 percent "not very likely" -- with another 15 percent (not shown in the chart) who had no opinion.

Most third-grade math students would no doubt conclude on the basis of these numbers that 120 percent of the public had an opinion, with the public constituting a grand total of 135 percent of itself. That may sound bizarre to some, but apparently not to Fox.
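The third-grade arithmetic is easy to verify. A quick sketch, using the figures from the Fox graphic plus the 15 percent of "no opinion" responses it left out:

```python
# Percentages as shown in the Fox News graphic of the Rasmussen poll.
chart = {"very likely": 35, "somewhat likely": 59, "not very likely": 26}
no_opinion = 15  # reported by Rasmussen, but left off the chart

shown = sum(chart.values())
print(shown)               # 120 -- the on-screen numbers alone exceed 100 percent
print(shown + no_opinion)  # 135 -- the public, a third larger than itself
```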

After Media Matters brought this error to Fox's attention, the network responded to Politico's Michael Calderone by denying there was a problem. Lauren Petterson, executive producer of Fox & Friends, could see no error in the graphic, claiming "we were just talking about three interesting pieces of information from Rasmussen."

Really? Host Steve Doocy on air noted the two top numbers and mentally added them, saying "you get 90 - you get a lot of people right there thinking it is likely, although 26 percent say 'not very likely'." He stumbled on "90" because even he could see that 90+ percent added to the 26 percent figure exceeded 100 percent. Now, how could that be?

How indeed? Better ask those math wizards at Fox.


WINNER: Frank Newport, Editor-in-Chief of the Gallup Poll, for his persistent and admirable faith in the American public’s attention to policy details, all polling evidence to the contrary notwithstanding.

BACKGROUND: Right before Christmas, President Obama's senior advisor, David Axelrod, appeared on Meet the Press hosted by David Gregory and on This Week with George Stephanopoulos to discuss the pending healthcare legislation, arguing -- among other things -- that many people are not aware of the specifics of the proposals, and that much public opposition measured by polls is thus misleading. Newport took issue with Axelrod’s assertions about public ignorance, by saying: "I'm not aware of current data measuring how well Americans understand what's in the healthcare bill."

Wow! He'd been polling for weeks on what people think about the healthcare proposals, and he couldn't say what they even knew about them? If people didn't know anything, or were confused about the issue, wouldn't that be relevant to understanding public opinion? That, at least, was the argument Axelrod was making.

Pollster Newport, however, argued passionately in favor of assuming that people know the issues rather than asking them (now, is that any way for a pollster to act?). His argument: "Healthcare is one issue which is not highly abstract or abstruse for Americans... It is, instead, an issue that is near and dear to most Americans' daily lives... It would seem that Americans would be able to understand the ramifications of proposed remedies for this particular policy issue if for no other[sic]."

Well, yes, healthcare itself may not be highly abstract or abstruse for Americans, but the proposed healthcare bill is another story, as Axelrod's focus groups had shown. Even one of Gallup's own polls affirmed that point. Newport admitted that last July, a (rogue) Gallup poll (that actually addressed what people knew) showed "about half of Americans said they understood the issues in the healthcare debate." (That's a positive way of reporting that 48 percent said they understood the issues, 51 percent said they did not.)

Other polls also showed large proportions of the public confused or uninformed. In mid-September, CNN reported that while 43 percent of Americans said they knew "a great deal" or "good amount" about Obama's healthcare proposals, 57 percent said they knew "only some" or "not much at all" about them -- exactly Axelrod's point. The same month, CBS reported 59 percent who found the proposed healthcare reforms "confusing."

It's a quaint, romantic notion to believe that Americans are fully informed of pending policy proposals and express their considered views through our eminent pollsters, providing what Newport calls "enlightened input" to our legislators. As Axelrod points out, however, confirmed by CNN and CBS (and even Gallup), the truth, alas, is far different. But kudos to those who deny their own polls and believe otherwise.

The "OOPS!" Award


WINNERS: The Marist, Quinnipiac, and SurveyUSA polling organizations for their relentless portrayal of an easy re-election campaign for New York City's Mayor Michael R. Bloomberg, and their final predictions of a double-digit victory for the mayor, who actually won by just 4.4 percentage points. Whew! Shades of Dewey/Truman!

BACKGROUND: For weeks leading up to the election, the conventional wisdom was that Bloomberg would win handily, reinforced by polls from these three organizations showing the incumbent with large, double-digit leads. When he actually won with 50.7 percent of the vote to William C. Thompson, Jr.’s 46.3 percent, the city was stunned. So many what ifs...the biggest one being, what if the polls had given an accurate picture of the electorate?

Ben Smith at Politico noted that Bloomberg's close victory "left Democrats pondering what might have been if New York's Democratic donors hadn’t turned their back on Thompson, if its politicians had worked for him, and most of all if President Barack Obama had offered anything more than the lamest words of praise." Thompson was outspent by a 14-1 margin, the city's top politicians kept their distance, and Obama barely acknowledged Thompson's candidacy until very late in the campaign.

But why waste the money, the effort, and the President's political resources when the polls say the race was essentially over before it began?

OK! So, the pollsters didn't predict the wrong winner. But they predicted the wrong margin, and apparently mischaracterized the electorate throughout the campaign.

Hey, guys! How about a few mea culpas, along with those heavy sighs of relief?

And, by the way, what the @#$% happened?



WINNER: Fox News' favorite pollster Scott Rasmussen and his eponymous polling organization, for his consistently more negative evaluations of President Obama compared with other polls, all under the guise of being an "independent" pollster. Truth is, Rasmussen is a partisan pollster (yikes!). Guess which party?

BACKGROUND: A recent comparison by the Atlantic's Andrew Sullivan showed that Rasmussen's reports on Obama's approval ratings form quite a different pattern from what all other polls (collected by Pollster.com) reveal (see the charts below -- the blue line is approval, the red line is disapproval):

President Obama's Job Approval Ratings: Rasmussen vs. Other Polls


Rasmussen had a net negative rating for Obama way back in August 2009, compared with a substantially positive rating by the rest of the polling organizations.

At the end of 2009, the average of all polls on pollster.com, except Rasmussen, showed Obama with an approval to disapproval rating of 51.3% to 42.2% -- a 9.1-point positive margin. By contrast, Rasmussen at the same time had approval at 45.2% to 54.3% disapproval -- a 9.1-point negative margin, for a swing of more than 18 percentage points. You can see why Fox finds Rasmussen so pleasing to interview.
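Net-margin arithmetic is worth double-checking (as the Fox award above demonstrates). Recomputing directly from the approval and disapproval figures:

```python
def net_margin(approve, disapprove):
    """Net approval: approval minus disapproval, in percentage points."""
    return approve - disapprove

others = net_margin(51.3, 42.2)     # +9.1 for the non-Rasmussen average
rasmussen = net_margin(45.2, 54.3)  # -9.1 for Rasmussen
print(round(others - rasmussen, 1))  # 18.2 percentage points between them
```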

Some people, of course, have gone after the messenger for his out-in-right-field message, but the pollster argues on his website that "Scott Rasmussen, president of Rasmussen Reports, has been an independent pollster for more than a decade. Like the company he started, Scott maintains his independence and has never been a campaign pollster or consultant for candidates seeking office."

Never?! Well, tell that to the Washington, D.C.-based Center for Public Integrity, which shows "Scott Rasmussen Inc" as a "consultant" for the Republican National Committee, which paid him $95,500 in 2003-2004; and for George W. Bush, who paid him $45,500 in the same year. (Double Yikes!)

Scott, you sure pulled the fur over our eyes.



WINNERS: Peter D. Hart Research Associates and John McLaughlin & Associates for their wildly contradictory polls on the "Card Check" bill (the Employee Free Choice Act, or EFCA), polls which just happened to agree with the positions of the groups that paid for them. Hmmm...how about that for a coincidence?

BACKGROUND: Some critics argue that polls do not reflect so much what the public is thinking, but what pollsters want the public to think. Well, the Hart and McLaughlin polling organizations are here to prove the critics right.

Polling on the question of the "card check" bill, Hart found about three-quarters of American adults in support of allowing employees to have a union once a majority of workers signed authorization cards. By contrast, McLaughlin found three-quarters of American voters opposed. The slight difference in samples -- "adults" vs. "voters" -- is clearly not enough to account for this titanic difference in opinion measurements.

The important factor to understand is that Hart was polling for the AFL-CIO, which supports the card check bill, while McLaughlin was polling for the anti-card check organization, Coalition for a Democratic Workplace (CDW). No doubt the clients were satisfied. They got what they paid for -- the illusion of a public overwhelmingly supportive of their positions.

And what was the public really thinking? Don't ask. (Hint: Most people haven't the foggiest idea what this bill entailed -- which is why pollsters can make Americans appear to say whatever the clients want.)



WINNER: The Gallup Organization for its admission that its Six-Decade-Old Annual "Most Admired" Lists of Living Men and Women do not include the "most admired" people in the world after all. (Say it ain't so, ghost of George! Have you been lying to us all these years?!)

BACKGROUND: The latest Gallup list of the "Most Admired" men and women of the past year showed Barack Obama the runaway winner among men, with Hillary Clinton edging out Sarah Palin among the women. The controversial result, which sent the polling world into an uproar, was conservative talk show host Glenn Beck's fourth-place finish among the men, ahead of Pope Benedict in fifth.

Perhaps no one in the world, except possibly Beck himself, while sobbing in the privacy of his mirror-lined bedroom and lamenting that he alone in the world will "fear for my country," finds this ranking plausible. But Dana Milbank of the Washington Post appeared to take Gallup at its word, launching a scathing attack on the Fox News host by claiming, "It's official: Americans admire Glenn Beck more than they admire the pope."

Pollsters immediately found fault with Gallup's methodology, claiming that any question that directly asked a representative sample of Americans who they admired more -- the pope or Glenn Beck -- could not possibly find Beck the winner. Even Beck and his colleagues on Fox News seemed to agree (at least publicly) that there was something fundamentally wrong with the question.

Shockingly, Frank Newport of the Gallup Organization concurred, essentially admitting (though in a rambling, obfuscatory way) that while "there are a number of ways to measure admiration...blah...blah...blah," the way Gallup did it would not -- despite the ranking on the Gallup website -- "allow one to conclude that Americans admire Glenn Beck more than they admire the pope."

Well, what did the poll allow one to conclude? According to Newport, "The question basically measures 'top-of-mind' brand awareness." Ah! It measures salience -- whoever has most recently been in the news; it doesn't measure admiration after all.

So, does that mean that other Gallup findings can't be trusted either? Sarah Palin may not be twice as admired as First Lady Michelle Obama? Tiger Woods's beleaguered wife, Elin, may not be tied with Germany's Angela Merkel as the ninth most admired woman in the world?

And what will Gallup do now with more than sixty years of its admittedly bogus data? Our guess: Next year, Gallup will once again announce its list of "most admired" men and women, pretending that this year's admission never happened. After all, as Newport argued, the question "is based on a historic Gallup precedent." (Aren't some precedents just plain bad?)

(Note: Gallup's founder, Dr. George Gallup, died more than a quarter of a century ago, so he is not forced to witness his old polling firm 'fessing up to its sixty plus years of playful mendacity.)



WINNER: Strategic Vision, LLC, a national, "Atlanta-based" polling firm whose numbers are suspect, whose location is not in Atlanta, and whose actual polling process (the interviewing) may be bogus. O Strategic Vision, Where Art Thou?

BACKGROUND: Last September, the American Association for Public Opinion Research (AAPOR) censured Strategic Vision for not releasing relevant information about polls it says it conducted in several primary states back in 2008.

The polling firm was co-founded in 2002 by David E. Johnson, described in a 2008 press release as a "veteran Republican pollster and strategist" who "worked on Bob Dole's 1988 presidential campaign and has overseen numerous campaigns across the nation."

The New York Times reported that the firm was initially intended to be a public relations company, though two years later it branched out into election polling. Its "polls have been cited by numerous news organizations, including The Associated Press, The Washington Post, MSNBC, Fox News and, on at least three occasions, The New York Times, even though the company has repeatedly failed to provide supporting data and the methodology for its surveys." (emphasis added)

Whoa! Are the media falling down on the job, or what?

After the AAPOR censure, Nate Silver of FiveThirtyEight.com noticed something suspicious about the polling firm's numbers -- they disproportionately ended in "8" rather than "1" (such as 58 vs. 51 percent). Silver initially estimated such a pattern could occur by chance only one time in 86 million, though a guest statistician subsequently calculated better odds -- still a daunting one chance in 5,000. Silver concluded there was an overwhelming probability that Strategic Vision had simply made up the numbers without doing the polling. (Mamma Mia!)
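Silver's published analysis was more elaborate than this, but the basic idea -- testing whether the trailing digits of a firm's reported percentages depart too far from a roughly uniform distribution -- can be sketched with a simple chi-square statistic. The digits below are invented for illustration; only the general approach, not the data or the exact test, comes from Silver's work.

```python
from collections import Counter

def chi_square_uniform(digits):
    """Chi-square statistic for trailing digits against a uniform 0-9 null.

    Caveat: trailing digits of real poll results are only approximately
    uniform, so this illustrates the approach, not Silver's actual analysis.
    """
    counts = Counter(digits)
    expected = len(digits) / 10
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# Hypothetical trailing digits from a batch of reported percentages,
# skewed toward 8 and away from 1, as Silver observed.
digits = [8] * 30 + [1] * 5 + [d for d in range(10) if d not in (1, 8)] * 8
stat = chi_square_uniform(digits)
print(stat > 16.92)  # True: exceeds the 5% critical value for 9 degrees of freedom
```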

Then the situation got mysteriouser and mysteriouser. Ben Smith of Politico reported that all of the offices that Strategic Vision listed on its website -- its main office in Atlanta, and several other offices in Madison, Seattle, and Tallahassee -- "match the location of UPS stores, rather than actual offices." (Double whoa!)

Eventually, Nate Silver, with the help of a blogger on pollster.com, tracked down the location of one office, in the small town of Blairsville (pop. 650), Georgia, about 110 miles (a two-hour drive) north of Atlanta, which Silver noted "appears to match the listing for the Seasons Inn Motel & Plaza."

Was this the "Atlanta-based" firm's actual location? Apparently, yes. Strategic Vision’s CEO Johnson said the difference between Atlanta and Blairsville was "semantic." (Really? Tell that to the Blairsville Falcons.)

Another question: Who did all of the polling work and where did they do it? Johnson claimed that his firm either employed or contracted with a central call center in Florida, but declined to provide any additional identifying information. (Why all the secrecy? Did this guy used to work for the CIA?)

To allay charges that his firm had simply made up the numbers (perhaps by issuing results that were similar to polls that had already been conducted), Johnson promised to produce crosstabs of his polling data. These are tables that show how respondents' answers to questions compare among demographic groups, such as men and women, racial groups, people with different levels of income, and so on. While it may be easy to fake the overall answers to questions, it is difficult to fake such responses among all the demographic subgroups and not get caught.
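For readers unfamiliar with the term, a crosstab is easy to picture with toy data. Everything below is invented; the point is only the structure -- topline percentages are easy to fake, but internally consistent subgroup breakdowns are much harder to counterfeit:

```python
from collections import Counter

# Hypothetical respondents: (group, answer) pairs from a made-up poll.
respondents = [
    ("men", "approve"), ("men", "disapprove"), ("men", "approve"),
    ("women", "approve"), ("women", "approve"), ("women", "disapprove"),
    ("women", "approve"), ("men", "disapprove"),
]

# Build the crosstab: answer counts broken out by demographic group.
crosstab = Counter(respondents)
for group in ("men", "women"):
    row = {ans: crosstab[(group, ans)] for ans in ("approve", "disapprove")}
    print(group, row)
# men {'approve': 2, 'disapprove': 2}
# women {'approve': 3, 'disapprove': 1}
```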

But so far, no crosstabs for those errant polls.

So, what will happen now? Will Strategic Vision sue AAPOR and Nate Silver as Johnson intimated? Will Strategic Vision actually produce its crosstabs? How will it ever explain its one-in-5,000 numerical pattern? Will we ever discover the location of its many offices? Does it have real interviewers? Is this a phantom or a real polling firm?

Stay tuned. The soap opera isn't over until the thin man sings!



WINNERS: The CBS News, ABC/Washington Post, CNN, and USA Today/Gallup polls -- for their seemingly random findings on the public's reaction to President Obama's health care speech in September. Like a bobblehead doll, the public appeared to nod first in one direction, then another, then back again -- at least according to these esteemed media organizations. (Can we really believe these guys?!)

BACKGROUND: On Wednesday, September 9, 2009, President Obama gave a nationally televised speech before the Congress on the subject of healthcare, outlining in some detail the kinds of provisions he wanted in a healthcare reform bill. Two days later, a CBS poll declared the public had rallied in favor of the president with an astounding 21-point swing in approval rating on the issue. (But wait!)

The following Monday, an ABC/Washington Post poll came to the opposite conclusion: The "bottom line views on health reform," according to that analysis, "failed to improve since President Obama addressed the nation." (Wait wait! There's more!)

At 3:00 PM that afternoon, CNN reported a 13-point favorable swing in Obama's approval rating since the president's speech. (Don't go away!)

The very next day, a USA Today/Gallup poll bobbled in the opposite direction -- Obama's speech "didn't change minds" after all and "for the first time" a majority of Americans actually disapproved of the way the president was handling health care policy.

What's a bloke to think? Is the public a bobblehead? Or are those pollsters manipulating their respondents? (You decide - we've already made up our minds!)



WINNERS: The Johns Hopkins Bloomberg School of Public Health and one of its professors, Dr. Gilbert Burnham, for stonewalling in the face of serious questions about a flawed survey project, which reported more than 600,000 Iraqi deaths from 2003 to 2006. The head researcher was formally censured by the American Association for Public Opinion Research (AAPOR) for covering up his data collection efforts, but the Bloomberg School refuses to investigate the methodology. (Ah, the wisdom of the three monkeys: "See no evil, hear no evil, speak no evil!")

BACKGROUND: In 2006, the British medical journal The Lancet published the results of a survey designed and supervised by Dr. Gilbert Burnham of the Johns Hopkins Bloomberg School of Public Health and his colleagues.* The survey purported to show that about 600,000 Iraqi deaths had occurred by July 2006 as a consequence of the invasion of Iraq.

A lot of people were against the war, but jacking up the body count with bad studies is not a good tactic for anyone. According to economics professor Michael Spagat of Royal Holloway College, these results were anywhere from seven to fourteen times as high as other credible estimates, including those made by the non-partisan Iraq Body Count, a consortium of U.S. and U.K. researchers also concerned about the human toll of the war.

Such large differences in estimates led other researchers to question the methodology of the study. But contrary to scientific norms, Burnham refused to provide details about how the survey was conducted. When a complaint was lodged with AAPOR, its standards committee also tried to obtain such details, but was rebuffed. That led to the censure.

What exactly were the Johns Hopkins Bloomberg School and Burnham et al. hiding? AAPOR asked for the kind of information that any scientist doing this type of work should release: a copy of the questionnaire, the consent statement that interviewees had to see, a full description of the selection process, a summary of the disposition of all sample cases, and how the mortality rate was calculated.

The Johns Hopkins Bloomberg School initially stood behind the study, but eventually concluded that Burnham had made some unauthorized changes in his methodology, and thus "the School has suspended Dr. Burnham's privileges to serve as a principal investigator on projects involving human subjects research."

But the Bloomberg School has not come clean about the problems of the research project. Its press release admitted that its internal review "did not evaluate aspects of the sampling methodology or statistical approach of the study." Instead, the school asserts, "It is expected that the scientific community will continue to debate the best methods for estimating excess mortality in conflict situations in appropriate academic forums."

Let's see: The Bloomberg School will not attempt to evaluate what experts believe is almost certainly a faulty methodology, saying the scientific community should make the evaluation. But then the school advises Burnham not to release details about his methods, so the scientific community can't have the information it needs for a definitive assessment.

Sounds like a cop-out and a Catch-22, all rolled into one!

And we thought Richard Nixon was tricky.

* Burnham G, Lafta R, Doocy S, Roberts L. 2006. "Mortality after the 2003 invasion of Iraq: a cross-sectional cluster sample survey." The Lancet 368:1421-1428. It can be accessed online at http://brusselstribunal.org/pdf/lancet111006.pdf.



WINNERS: Zogby, Ipsos/McClatchy, ABC/Washington Post, Associated Press/Stanford University, Pew Research Center and CNN/Opinion Research Corporation polls, for their valiant efforts to manufacture a "public opinion" on "cap and trade" legislation, though most people haven't a clue.

BACKGROUND: Recent polls suggest that so few people are aware of what "cap and trade" programs are that it is pure fantasy to speak of a "public opinion" on the issue. Yet, somehow, many pollsters have been able to fabricate the illusion of a public so highly engaged and informed that the vast majority of people have a position on the issue.

But let people guess what "cap and trade" refers to among three possible answers -- healthcare, banking reform, or energy and the environment -- and fewer than one in four can accurately point to the last. Or ask them how much they've heard about "a policy being considered by the president and Congress called 'cap and trade' that would set limits on carbon dioxide emissions," and only one in seven can say "a lot," while more than half say "nothing at all."

So, how do pollsters create the illusion of an engaged public? Simple: They feed their respondents information, and then immediately ask what respondents think, using "forced-choice" questions to get an answer.

The key point here is that none of these polls can any longer represent the American people! Once pollsters give information to their respondents, their samples no longer represent the larger population, which has not been fed that same information.

Of course, different polling organizations feed different information to their respondents, which is how the polls run the gamut of opinion. At one end of the spectrum is a Zogby poll (conducted for a global warming skeptic), showing almost a two-to-one opposition to cap and trade legislation, while at the other end of the spectrum is a CNN poll finding about a two-to-one level of support. Other polls fall somewhere in-between. So, fellow pollsters, fabricate away! Just stop pretending you report what the American public is thinking. You know better. We (should) know better, too.

This post originally appeared at StinkyJournalism.org.