Mark Baldassare, preeminent pulse-taker of public opinion in the state, has sampled the views of Californians on every conceivable issue, including how they feel about polls. “When we do polls on polls,” he said, “people tell us that polls really matter, that they value them, and they like to read about them.”
Baldassare is president and CEO of the Public Policy Institute of California (PPIC), an endowment-rich, San Francisco-based nonprofit think tank that produces gold-standard research about what the people of California think and believe. The institute’s main mission is providing “independent, objective, nonpartisan” polling data to decision makers. Baldassare regularly briefs Sacramento and Washington officials on the surveys, which also get heavy media coverage.
Polls have the power to move political markets: A few weeks ago, a survey on the environment detected an unexpected shift in statewide views about offshore oil drilling; within a few days, Republican congressional members wielded it during a noisy House floor protest on behalf of offshore drilling, as Democratic presidential contender Barack Obama backed off his previous categorical opposition to drilling. “It surprised me,” Baldassare said of the poll. “But events moved quickly, and political events adjusted to public opinion.”
In an election season when polls themselves make news and when print, web, and broadcast outlets trumpet new surveys nonstop, I asked Baldassare how voters could best sift through the polls streaming through the media. He highlighted some key factors for voters to assess:
Sponsor: Who paid for the poll, and who stands to benefit from it? Independent organizations like PPIC, the Field Poll, Gallup, and the Pew Research Center can generally be trusted far more than a survey released, or leaked, by a campaign or a political consulting firm. For example, a recent flurry of stories about Dianne Feinstein’s strength in a future race for governor was set off by a poll done by a Sacramento consultant and leaked to a political gossip column, which presented the findings as fact with no indication of who the consultant’s client was or who financed the research.
Wording: The precise language of a poll question is crucial to weighing the worth of its answers; a stray adjective or overemphasized phrase can produce a biased result. Nonpartisan researchers work to frame questions in the most neutral words possible, and often test them before taking them into the field. Hypothetical example: “To reduce America’s dependence on foreign oil and also fight Islamic terrorism, would you support offshore oil drilling?” will elicit a very different set of responses than “To reduce America’s dependence on foreign oil and also protect pristine beaches, would you support offshore drilling?”
Sample Size: Political polls are done through phone interviews with a random sample of people, selected and weighted to reflect the age, gender, ethnicity, and other demographic characteristics of the overall population. Because a poll is a statistical extrapolation in which attitudes expressed by a group stand in for the whole, it is crucial to know how many people were interviewed: reliability drops as the number of respondents shrinks. PPIC polls typically have large samples of 2,000 respondents; a poll with fewer than 400 people surveyed is suspect.
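To see why the 2,000-versus-400 comparison matters, here is a minimal Python sketch, not drawn from the article, of the standard 95-percent-confidence margin-of-error formula for a simple random sample; real pollsters also adjust for weighting and survey design effects.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) margin of error at 95% confidence
    for a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (2000, 400):
    print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f} points")
# n = 2000 gives roughly +/- 2.2 points; n = 400 roughly +/- 4.9
```

Because the margin shrinks with the square root of the sample size, quadrupling the sample only halves the error, which is why reputable statewide polls invest in samples of a thousand respondents or more.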
Margin of Error: A poll’s margin of error indicates the range within which the true result is likely to fall. If Candidate X leads Y by 52-48 percent in a poll with a margin of plus or minus four points, X could be ahead by as much as 56-44 at one extreme, or trailing 48-52 at the other.
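The arithmetic behind that 52-48 example can be made explicit. This small Python sketch, illustrative only and not any pollster's method, shows why a four-point lead inside a four-point margin of error is statistically inconclusive:

```python
x, y, moe = 52, 48, 4  # reported support for X and Y, and the margin of error, in points

# Each candidate's true support could sit anywhere in these ranges:
x_range = (x - moe, x + moe)  # (48, 56)
y_range = (y - moe, y + moe)  # (44, 52)

# Because the two ranges overlap, the poll cannot rule out a Y lead:
ranges_overlap = x_range[0] <= y_range[1] and y_range[0] <= x_range[1]
print(x_range, y_range, ranges_overlap)  # (48, 56) (44, 52) True
```

This is why careful reporting calls such a race a "statistical tie" rather than a lead for X.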
Timing: It’s important to know when a poll was taken, to judge whether some news or other event exerted a short-term influence on the results. For example, Obama lagged behind Hillary Clinton in the Democratic primary only when there was heavy coverage of his controversial former pastor, the Rev. Jeremiah Wright.
Languages: In California particularly, it’s important to know whether a poll was conducted in English only or also included responses from Spanish-only speakers, since a slice of the population does not speak English. Similarly, a poll that contacts only people with landline phones, not those who have only cell phones, samples a skewed universe of respondents.
A final note: Under the Electoral College system, presidential races are really a collection of 50 state ballots, not a national election, so ubiquitous national presidential polls have limited value. “National polls tell us about the national mood, but don’t tell us how close the election is,” said Baldassare. “It’s important for people who closely follow the election to go to sources of reliable information about separate state elections, particularly the battleground states.”