Tomgram: Schwartz on polling as a political narcotic

One August night when my son was not yet four, we returned home late. Because his bedtime was around 8 p.m., and because eleven months a year he was a city boy, he did not normally see the stars. This night, as we trooped from the car to the door, with the Milky Way overhead, and the stars shimmering high in the darkness, he stopped before a small pine tree. He seemed to study it for a moment and then a single star that glimmered just above it. Finally, he looked up at me. “Dad, that’s a very, very big tree, isn’t it?”

I was charmed, as one invariably is by such comments from small children, no less one’s own small child. I smiled in the darkness and agreed. “Yes, that’s a big tree.” For him, it was true.

He paused another moment, considering the matter or perhaps following some untranslatable stream of thought who knew where, and then added, “And that’s a very, very small star?”

Noticing the star for the first time, I agreed.

“It’s very, very small and very, very far away, right Dad?” Each sentence was both a statement of fact and, in the upward lilt of its ending, a questioning of fact, a search for a confirming assurance from an adult that what seemed to be, was.

“Very far away, indeed,” I assured him.

“That tree,” he said, “is much bigger than that tiny, tiny star.” And in this statement for the first time there was almost no question at all.

I restrained my urge to laugh at the way the logic of what seemed visibly self-evident led so directly to an absurd conclusion. In this lay the very charm of childhood. And yet his statement called out for response — agreement, assurance or, of course, correction. And correct I did. I began, as parents will, to discourse on the vastness of stars and the smallness of trees, but somewhere along the way, I caught the tone of my voice, which had fallen into that slightly syrupy timbre in which adults regularly patronize children for their amusing “wisdom,” magical “innocence,” and lack of grounding in scientific reality.

What took me aback, however, was the realization that my own barely suppressed laughter was actually meant to reassure myself at my son’s expense; or rather, I was struck by how slender was the thread of knowledge that allowed me to laugh at him. I had, after all, never taken a course in astronomy, or talked to an astronomer, or even as an adult read a book on the stars. I knew what every child of a certain age knew, that stars were vast beyond imagining, and I knew it the way most people, not just children, in our world know most things scientific or technological — ninth hand.

In my personal experience of the stars, my mindset was little short of Medieval. Those glittering dots might as well have been distant crystalline spheres in fixed orbits around our planet or, for that matter, holes poked in the scrim of the heavens, revealing points of light from another world. Only my unspoken faith in a knowledge I but faintly possessed told me otherwise — that and my son’s ignorance. If I was patronizing my son, it only followed that I was by extension — like some relative “twice removed” — patronizing myself. As long as I could laugh at my son’s attempts to grasp our world, that world in some small measure appeared a sturdier place and I, a more controlling presence.

This incident came to mind yesterday while I was reading former pollster Michael Schwartz’s discussion below of our societal poll addiction. I’ve certainly become one of those addicts. Remembering my own shaky grasp of the most basic astronomical facts, I had to suppress the urge to laugh at all of us “political junkies” for our increasing devotion to an election process underpinned by mathematical methodologies so abstruse (not to say questionable) that few of us would likely grasp them. This is but another version of faith-based politics, as Schwartz makes clear. The very idea that, in a term Jonathan Schell first used sometime in the 1990s, we would conduct — via our media (which loves the continual “horserace” of politics) — an endless “serial election” based on the mathematical manipulation by various private polling companies and media outfits of the opinions of relatively small numbers of possible or potential or registered voters seems, on further thought, absurd and undemocratic, not to say, as Schwartz indicates, dangerous.

For the anti-Bush camp in particular, a poll-based politics (despite the present “bounce” for Kerry) offers special dangers this year. After all, a whole series of possibly unprecedented voter mobilizations seem to be taking place in swing states across the country. There are evidently significant upsurges in minority registration and new voter registration generally, especially in urban Democratic areas. I’ve personally never known so many — perhaps any — friends and acquaintances who headed for swing states to register voters (sometimes just on a weekend off) or are planning to head for swing states (New York, where I live, being more or less a presidential non-event) to help ensure Election Day turn-out elsewhere.

Readers write me regularly at the Tomdispatch website about such efforts, as one California resident did recently, telling me that she and her friends were hoping to contact 1,500 voters in Nevada by cell phone over several days. By the way, youthful cell-phone parties, utilizing all those free late-night and weekend hours and organized around calls into swing states, have hardly made it into the political news and yet they too may be significant — just as the cell phones of the young haven’t yet made it into political phone polling, another way in which what’s measured by our pollsters may be less than real. Polls can’t measure all sorts of things and yet trusting the polls is almost an item of faith by now. What the constant polling does, though, is to continue the transformation of our political system, once so sturdy, into a strange house of cards.

For those of you who are already poll-addicted, however, let me at least offer you some of the better tools in the on-line polling trade: You can start by going bananas with anxiety checking out the Rasmussen Reports presidential tracking poll posted every day (along with various state polls posted less regularly). If you want to see the almost bizarre range of the latest presidential polls, the best site to go to is PollingReport.com, where they’re simply piled one atop the other like a skyscraper of impossible-to-sort-out information. I find the Zogby polling site an interesting one to poke around in — with its news on polls, Zogby’s own polling (only some of which is available to non-subscribers), and John Zogby’s interpretative pieces, the latest of which explains why this presidential election is still John Kerry’s to lose.

For swing state polls, check out the rolling polling map at the Los Angeles Times. If you want to be overwhelmed, visit the Presidential Election News and Election Polls page at the Better World Links website, scroll down to the polling section and go berserk. If you prefer to see what polls can do best (as described by Schwartz below), check out the Bush approval ratings chart from 2001 to the present at Professor Pollkatz’s Pool of Polls site. Finally, Ruy Teixeira’s Emerging Democratic Majority website offers perhaps the most sophisticated polling analysis around on a day-by-day basis. Now see if you can kick the habit. Tom

The Opiate of the Electorate
By Michael Schwartz

If your anti-Bush sentiments have turned into electoral passion, then you probably restrained your exhilaration after last Thursday’s debate until you got a sense of how it played to the American electorate, which means how it played in the polls that began to pour out only moments after the event ended. The first “instant” polls seemed to indicate a Kerry victory, and by Sunday the Newsweek poll (considered notoriously unreliable by the pros) had appeared with the news that Kerry had pulled even or might be ahead in the presidential sweepstakes. If it was then that the real rush of excitement hit you, face it: like a host of other Americans, you’re a poll addict.

Opinion polls are the narcotic of choice for the politically active part of the American electorate. Like all narcotics, polls have their uses: they sometimes allow us to function better as political practitioners or even as dreamers, and don’t forget that fabulous rush of exhilaration when our candidate shows dramatic gains. But polls are also an addiction, one that distorts our political feelings and actions, trivializes political campaigns, and allows our political and media suppliers to manipulate us ruthlessly. The negatives, as pollsters might say, outweigh the positives.

But let’s start with the good things, the stuff that makes people monitor polls in the first place, relying on them to determine their moods, their attitudes, and their activities. The centerpiece of all that’s good in the polls lies in the volatility of public opinion, a trait that polls certainly discovered. The scientific consensus before World War II had it that political attitudes were bedrock, unchanging values.

Take, for example, Bush’s “job rating,” as measured by that tried-and-true polling question: “How would you rate the overall job President George W. Bush is doing as president?” The Zogby Poll’s results are typical; until September 11, 2001, the President had low ratings — about 50% of Americans rated him “excellent” or “good.” Then his approval ratings surged to a stratospheric 82%. This makes sense; people rally around a president during a time of crisis.

What happened next is harder to explain. Despite the fact that wartime presidents almost always have huge support for the duration of the conflict, Bush’s approval rating began a sustained decline, losing 20 points in the next 12 months (leading up to the first anniversary of 9/11) and another 12 points the following year. By September 2003, his approval rating had hit the 50% level again.

Virtually every group of political activists quickly grasped the significance of this decline: Something surprising was happening to our “war president.” In this case, the polls helped to inspire peace activists to rebuild a quiescent anti-war (or at least anti-Bush) movement, because they knew (from the polls) that the decline in his approval rating was largely due to the war. The same figures convinced a whole host of important Democratic politicians to declare for the presidency, bringing well-heeled financial backers with them. And they triggered a campaign by Karl Rove and his posse of Bush partisans to discredit Bush’s attackers.

Poll results can be a boon to informed and effective politics; they alert activists and others to the receptiveness of the public on important issues. But the key fact that makes polls valuable — that public opinion is a volatile thing — also turns polls into an addictive drug that distorts and misleads. Once the addiction forms, we all want to know (immediately, if not sooner) the “impact” of every event, large or small, on the public’s attitudes, so that we can frame our further actions in light of this evidence. And this responsiveness means that instead of sustained organizing around important issues that can have long-lasting impact on political discourse, we increasingly go for the “quick fix,” especially attention-getting gimmicks that can create short-term shifts in the public-opinion polls which then, of course, feed more of the same.

Blunt Instruments

The use of polls to determine the immediate impact of less-than-monumental events is a fruitless — and often dangerous — enterprise. There are two interconnected reasons why this is true. First, polls are at best blunt instruments. They can measure huge changes over time, like the enduring shifts of 30%, 20% and 12% in Bush’s ratings, but they are no good at measuring more subtle changes of opinion in, say, the 3-5% range. As the famous (and much ignored) “margin of error” warning that accompanies all polls indicates, this incapacity is built into the technology of polling and cannot be eliminated by any means currently available. One sign of it is the often-used phrase in news reports that a 3% difference between candidates is a “statistical tie” (which everyone promptly ignores and which in any case might actually indicate a 6% difference between the candidates). And that 3% “margin of error” is only one of five or six possible sources of inaccuracy. The sad fact is that even a 15% difference between two candidates might not exist, unless it is replicated over time and/or across several different polls.
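For readers who want to see where that familiar “plus or minus 3%” comes from, here is a minimal back-of-the-envelope sketch in Python. It assumes a simple random sample at 95% confidence, and the sample sizes are purely illustrative; real polls carry extra error from weighting, question wording, and likely-voter screens on top of this, which is exactly the point.

```python
# A minimal sketch of the textbook margin-of-error calculation behind the
# familiar "plus or minus 3 points." It assumes a simple random sample at 95%
# confidence; the sample sizes below are illustrative, and real polls add
# further error (weighting, question wording, likely-voter screens) on top.

import math

def margin_of_error(sample_size: int, proportion: float = 0.5, z: float = 1.96) -> float:
    """Return the 95% margin of error, in percentage points, for one candidate's share."""
    return 100 * z * math.sqrt(proportion * (1 - proportion) / sample_size)

for n in (600, 1000, 1500):
    print(f"n = {n:4d}: +/- {margin_of_error(n):.1f} points")

# With roughly 1,000 respondents the margin is about +/- 3 points on EACH
# candidate's share, so a reported 3-point lead is consistent with anything
# from a dead heat to roughly a 6-point gap -- the "statistical tie" above.
```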

Let’s take an example that, for most people, no longer carries the emotional weight it once did — the 2000 election. If you had consulted the Gallup poll on most days late in that campaign, you would not have known that the vote would prove to be a virtual dead heat. On October 21, with a little more than two weeks to go, Gallup did show Gore ahead by 1%. Three days later, Bush had surged in the same poll and was ahead by a staggering-looking 13%. The election appeared to be over.

We now know that this surge was a blunder by Gallup. For one thing, other polls simply did not record it. But more important, we know that, as volatile as public opinion can indeed be, it is not nearly this volatile, except under the stimulus of events like 9/11. This “surge,” like virtually all such surges, actually reflected the fundamental inability of polls to measure day-to-day changes in attitudes — especially voting intention. This is so because of all sorts of arcane polling problems that would take a semester of graduate school to fully review. But let’s look at just two examples.

Consider, for instance, the fact that many young adults party on Thursday, Friday, and Saturday. Since the trends recently have been for young singles to be Democratic, you can expect fewer Democrats and more Republicans to be home during polling hours on those days. And that’s but a single example of changes in polling audiences. Daily polls, in other words, often record large fluctuations in attitudes because questions are being asked of very different audiences. Even time of day can make a big difference. (Think of who is at home on Sunday afternoons during football season.) This, in turn, forces pollsters to make all sorts of adjustments (with fancy scientific names like “stratified sampling” and “weighted analysis”). And these adjustments are problematic; in the context of daily electoral polls they often add to that margin of error instead of reducing it.
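To make the “different audiences” problem concrete, here is a small hypothetical illustration; the group shares and support levels below are invented for the example, not taken from any poll. The electorate’s views stay fixed, but the mix of people who happen to answer the phone shifts with the calendar, and the raw number shifts with it.

```python
# A hypothetical illustration of the "different audiences" problem. The true
# support numbers and reachable-audience shares are invented for the example;
# the point is only that the raw reading moves when the audience mix moves,
# even though nobody has changed their mind.

true_support = {"young_singles": 0.60, "everyone_else": 0.47}  # share backing Candidate A

# Assumed share of completed interviews coming from young singles,
# depending on when the calls go out:
reachable_mix = {"weeknight": 0.18, "weekend_night": 0.08}

for night, young_share in reachable_mix.items():
    raw = (young_share * true_support["young_singles"]
           + (1 - young_share) * true_support["everyone_else"])
    print(f"{night:13s}: Candidate A polls at {100 * raw:.1f}%")

# The two raw readings differ by more than a point with identical underlying
# opinion -- which is why pollsters then reweight ("weighted analysis"),
# a cure that introduces judgment calls of its own.
```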

No One Knows Who is Going to Vote

There are lots of other problems, but the big kahuna, when it comes to an election, is that we only want to interview people who are actually going to vote (a little over 50% of all eligible voters in a typical presidential election — and possibly closer to 60% in this atypical year). One way to eliminate the non-voters is by looking only at registered voters, but that is just a partial solution, since in most elections fewer than 80% of registered voters actually vote. What pollsters need to find out is which of those registered voters will actually vote. This is particularly crucial because, while there are a great many more registered Democrats than Republicans, the Republicans usually narrow that gap by being more diligent about getting to the polls.

But there is no way to figure out accurately who is going to vote. Going to the polls on Election Day is a very complicated phenomenon, made even more so this year by the huge number of new registrations in swing states. It is almost impossible for pollsters to know who among these new voters will actually vote. While many potential voters have a consistent track record — always voting or rarely voting — many others are capricious. For these “episodic voters,” factors like weather conditions and distance to the polls mix with levels of enthusiasm for a favorite candidate in an unstable brew that will determine whether or not they get to the polling station. In fact, who is “likely to vote” actually varies from day to day and week to week, and there’s just about no way of measuring (ahead of time) what will happen on the only day of the only week that matters, November 2.

Pollsters, in fact, are really in a pickle. If they rely on previous voting behavior (as many polls do), they’re likely to exclude virtually all first-time voters. Since the preponderance of newly registered voters are young singles (who, we remember, tend to be Democrats), such polls will be underestimating the Democratic turnout. So many polls (including Gallup) ask episodic and first-time voters about their enthusiasm for their candidate and their commitment to voting, in order to weed out those who have little real interest and very little energy for dragging themselves to the polls.

But this creates new distortions. For example, a big news story, including a polling-influenced one like the recent Bush “surge,” can suddenly (but usually briefly) energize potential new Bush voters, turning them into “likely voters”; at the same time, it may demoralize Kerry backers, removing some of them from the ranks of “likely voters.” Two days or two weeks later another event (the first Presidential debate, any sort of October surprise, or you name it) may create an entirely different mixture. And come election time, none of this may be relevant. On that day the weather may intervene, or any of a multitude of other emotions may arise. So “likely voter” polls are always extremely volatile, even though the underlying proportion of people who support each candidate may change very little.
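Here is a rough sketch of the kind of “likely voter” screen just described. The questions and the cutoff are my assumptions for illustration, not Gallup’s or anyone else’s actual procedure; the point is that a passing news cycle which nudges a new voter’s enthusiasm can move that voter into or out of the counted sample without changing a single vote intention.

```python
# A rough sketch of a "likely voter" screen of the kind described above.
# The questions and the cutoff are assumptions for illustration, not any
# polling firm's actual screen.

from dataclasses import dataclass

@dataclass
class Respondent:
    voted_before: bool     # has a past voting record
    certain_to_vote: bool  # says they will definitely vote
    enthusiasm: int        # self-reported, 1 (low) to 10 (high)

def is_likely_voter(r: Respondent, cutoff: int = 7) -> bool:
    """Keep past voters; screen new and episodic voters on certainty plus enthusiasm."""
    if r.voted_before:
        return True
    return r.certain_to_vote and r.enthusiasm >= cutoff

new_voter = Respondent(voted_before=False, certain_to_vote=True, enthusiasm=7)
print(is_likely_voter(new_voter))  # True: counted this week

# After a discouraging news cycle the same person reports enthusiasm 6 and
# silently drops out of the "likely voter" pool -- no vote intention changed.
new_voter.enthusiasm = 6
print(is_likely_voter(new_voter))  # False: not counted next week
```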

What this means is that a large proportion of all dramatic polling fluctuations, this year and every year, are simply not real in any meaningful sense. But this does not stop election campaign managers and local activists from developing or altering their activities based on them, which only contributes to a failure to mount sustained campaigns around important issues and a focus on superficial attention-getting devices.

You Can’t Tell Which Poll is Right

This leads us to the second huge problem with polls: Different polls taken at the same time often produce remarkably different results. Fifteen percent discrepancies between polls are not all that rare. If a group of polls use just slightly different samples (all of them reasonably accurate), slightly different questions (all reasonable in themselves), and slightly different analytic procedures (all also reasonable), the range of results can be substantial indeed. If, in addition, they call at different times of the day or on different days of the week, the differences can grow even larger. And if they use different definitions of “likely voters,” as they almost surely will, the discrepancies can be enormous.

To see how such a cascade of decisions really screws up our ability to rely on polls, consider the now famous “bounce” that Bush got from the Republican Convention. The media, using selected opinion polls, conveyed the impression that Bush surged from a “statistical tie” to a double-digit lead. Many of my friends — Kerry supporters all — felt the election was lost. (Some of them would certainly have fallen from the ranks of Gallup’s “likely voters”). Things got so bad that Michael Moore sent a letter to all the Kerry supporters he could reach, telling them to stop being crybabies and get back to work.

This is a prime example of the polls having a profoundly detrimental effect on public behavior, because the bounce for Bush was moderate at best. In fact, the most reasonable interpretation of the polls as a group suggests that there may have been a shift in public opinion from slightly pro-Kerry (he may have had as much as a 3% advantage) to slightly pro-Bush (perhaps as much as 4%). A plausible alternative view, supported by a minority of the reliable polls, would be that the race was a “statistical dead heat” before the convention and remained so afterward, interrupted only by an inconsequential temporary bounce.

To see why a moderate interpretation is a reasonable one, you need to consider all the polls, not just the ones that grabbed the headlines. I looked at the first 20 national polls (Sept 1 to Sept 22) after the end of the Republican convention, as recorded by PollingReport.com, the best source for up-to-date polling data. Only three gave Bush a double-digit lead. Two others gave him a lead above 5%, and the remaining 15 showed his lead to be 4% or less — including two that scored the race a dead heat. In other words, taking all the polls together, Bush, who was probably slightly behind before the convention, was probably slightly ahead afterward. Certainly the media are to blame for our misimpression, but before we get to the media, let’s consider how various polls could disagree so drastically.

Fortunately, there are some energetic experts, especially Steve Soto and Ruy Teixeira, who have sorted this discrepancy out. The bottom line is simple: the double-digit polls far overestimated the relative number of Republican voters. Gallup, the poll that has been most closely analyzed, had 40% Republicans in their sample of likely voters, and only 33% Democrats along with 27% Independents. This might seem okay to the naked eye, but it turns out that in the last two elections, about 4% more Democrats than Republicans trooped into the voting booths; and this, logically enough, was the proportion that the other polls used. Since 90% of Republicans right now claim they will choose Bush and 85% of Democrats say they will choose Kerry, this explains the gross difference between Gallup and most other polls; Gallup, that is, would have given Bush a 4% lead if it had used the same party proportions as the other polls.
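For those who want to see the arithmetic, here is a rough reconstruction in Python. The 40/33/27 party mix, the alternative 4-point Democratic edge, and the 90% and 85% loyalty figures come from the paragraph above; how Independents split, and what the remaining partisans do, are my assumptions, so the exact margins are only illustrative of how much the assumed party mix alone can move the headline number.

```python
# A rough reconstruction of the party-weighting arithmetic described above.
# The 40/33/27 mix, the D+4 alternative, and the 90%/85% loyalty figures come
# from the text; the Independent split and the behavior of the remaining
# partisans are assumptions, so the exact margins are only illustrative.

loyalty = {
    "Rep": {"Bush": 0.90, "Kerry": 0.10},   # 90% of Republicans for Bush (from the text)
    "Dem": {"Bush": 0.15, "Kerry": 0.85},   # 85% of Democrats for Kerry (from the text)
    "Ind": {"Bush": 0.50, "Kerry": 0.50},   # assumed even split
}

gallup_mix   = {"Rep": 0.40, "Dem": 0.33, "Ind": 0.27}  # Gallup's likely-voter sample
d_plus_4_mix = {"Rep": 0.35, "Dem": 0.39, "Ind": 0.26}  # assumed electorate with a 4-point Democratic edge

def bush_margin(mix):
    bush = sum(mix[p] * loyalty[p]["Bush"] for p in mix)
    kerry = sum(mix[p] * loyalty[p]["Kerry"] for p in mix)
    return 100 * (bush - kerry)

print(f"Gallup party mix: Bush by {bush_margin(gallup_mix):.1f} points")
print(f"D+4 party mix:    Bush by {bush_margin(d_plus_4_mix):.1f} points")
# Same respondents, different assumed electorate: several points of "lead"
# appear or vanish from the party mix alone.
```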

How, then, could Gallup do such a thing? Though Gallup’s explanation is complicated, it relies on the fact that, until Election Day, nobody can actually know how many Republicans and Democrats are going to show up at the polls. All polling agencies are actually predicting (or less politely, guessing) how many Democrats and Republicans will vote. Scientific and journalistic ethics might seem to dictate basing your present guesses closely on past elections, but Gallup can always simply claim that their information suggests a shift toward Republican affiliation and/or a much higher Republican turnout. In this case, the lack of any substantiating evidence for such a claim has led to accusations that Gallup’s decision was politically motivated.

But in some ways, those exaggerated Gallup results are only a side issue when it comes to polls and this election. Don’t lose track of the fact that even the “good” polls show a startling range of results that renders them almost useless in accurately determining the relative position of the candidates. Remember that the post-convention non-corrupt polls still ranged from zero to 8% in favor of Bush. That spread may sound modest, but in real-world terms its extremes represent the difference between a dead heat and a landslide. And there is really no way to tell who is right. In addition, because the media are under no obligation to report all of them, they can select the poll or polls that come closest to their predilections (or that simply offer the most shock or drama) and present them as the definitive results, ignoring or suppressing those that offer a contrasting portrait of the situation.

To see how pervasive this problem is, consider this sobering fact: The media have been reporting that the first debate pulled Kerry back into a “statistical dead heat.” This is a source of exhilaration in the Kerry camp and (if we can believe media reports) significant re-evaluation in the Bush camp. It has certainly affected the moods of their supporters. But there is a good chance that this Kerry bounce was inconsequential. According to Zogby and Rasmussen — two of the most reliable and respected polling agencies — the Bush lead had already devolved into a “statistical dead heat” and the debate had no significant impact on the overall race.

Granted, these two polls are a minority, but in polling, unfortunately, the minority is often right. For a vivid example, consider the polls taken the last weekend before the 2000 presidential election. Since the election itself was a virtual dead heat, well-conducted polls should have called it within that 3% margin of error — with some going for Gore and some going for Bush. But that is not what happened. PollingReport.com lists the scientifically valid polls taken in the last weekend before the 2000 presidential election: fully 17 gave Bush a lead, ranging from 1% to 9%, while only two predicted that Gore would win (by 2% and 1%); one called it a tie. Even if you remove the absurd 9% Bush advantage, the average of the polls would have had Bush winning by 3% — which in our Electoral College system would have translated into something like a 100-vote electoral majority. In other words, even in a collection of the best polls doing their very best to predict an election, the majority was wrong and only a small minority was right.

Consider, then, that there are three extant interpretations of what has happened since just before the Republican Convention. In one rendering, promulgated almost unanimously by the media, Bush experienced a double-digit convention surge and held onto most of this lead until Kerry brought the race back to even with his sterling debate performance. This widely held interpretation is almost certainly wrong, but two plausible interpretations remain. The first, supported by the preponderance of polls, tracks a modest post-convention bounce for Bush and an offsetting modest bounce for Kerry after the initial debate. The second, supported by at least two respected polling agencies, finds no real bounce after either media event. We don’t know which of these is correct, but it would certainly be refreshing if the American electorate were making up its mind on the basis of real issues and not staged media circuses that center on essentially unreadable polling results.

Kicking the habit

Three things are worth remembering, if you can’t kick the poll-watching habit:

(1) Any individual poll can be off by 15%.

(2) Any collection of honestly conducted polls, looked at together, will show a very wide range of results and you won’t be able to tell which of them is right.

(3) Even the collective results of a large number of polls probably will not give you an accurate read on a close election.

From these three points comes the most important conclusion of all — don’t let the polls determine what you think or what you do.

Watch out for the pushers

Finally, let’s look briefly at the way the mass media — the pushers of this statistical drug — use the polls to build their ratings or sales and advance their political agendas.

The Gallup double-digit lead after the Republican convention was certainly an attention-getter: Bush supporters couldn’t hear enough about their winner and Kerry supporters compulsively began to view their campaign as a train wreck. After the first shock, everyone — addicts all — came back for more just as the media might have desired. Bush supporters were ready to hear more good news and Kerry supporters were waiting for better news.

So why not the same in reverse? Based on subsequent polls, the media could easily have claimed that Kerry was on his way to a remarkable comeback — a number of polls seemed to indicate this within days — which would have triggered the same pattern in reverse. They didn’t do it, however, and as a result created an ongoing pattern of demoralization among Kerry supporters and confident enthusiasm among Bush supporters for the better part of a month.

This political favoritism was, in fact, part of a larger pattern in which even the “liberal media” give the administration a “pass” on certain issues. (The New York Times and the Washington Post have even admitted that they did this in the run-up to the war.) Such favoritism is by no means inevitable, as the exposure stories on Abu Ghraib demonstrate and as the present post-first-debate Kerry “bounce” makes clear enough. Driven by poll-addicted reporters, that “bounce,” based on no less reliable polling procedures than the original “Bush Convention Bounce,” is getting a full measure of media attention, belatedly but effectively reversing the exhilaration-demoralization equation.

The emotional roller coaster that results from misleading fluctuations in poll results, managed by manipulative media outlets, is the most dramatic symptom of the larger problem. The polls and the media keep us riveted on the minutiae of the debates (in this case, “presentation and demeanor” are the major foci of the analyses of why Kerry won), while distracting the electorate from the underlying issues that animated people’s discontent with the Bush administration in the first place. Lost in the excitement over the Kerry first-debate victory are his promises of more troops and a more aggressive foreign policy. The rise in the polls makes this belligerent posture acceptable, and even dedicated anti-war activists end up suspending their politics in the excitement over the return of the Presidential race to a “statistical dead heat.”

Our reliance on polls for political validation combines with unscrupulous press coverage of these polls to create a lethal threat to our political sanity and our political effectiveness. Our addiction to polls has done more than enhance the already unacceptable power of the media; it has also redirected our attention and efforts away from policy and toward trivial personality contests at a time when much is at stake.

Isn’t it about time we began to think about how to kick the habit?

Michael Schwartz, Professor of Sociology at the State University of New York at Stony Brook, has worked for 30 years measuring and analyzing public opinion. Once upon a time, he was also a founding partner of MarketCast, where he pioneered the use of multivariate analysis in measuring attitudes toward movies while designing and executing over 1000 attitude surveys for major movie studios. He writes regularly for Tomdispatch.com. His email address is [email protected].

Copyright © 2004 Michael Schwartz