Monday, December 10, 2018

Let's talk about polling, again.  

Let’s huddle up for a second and have a little chat about polling.

In 2018, I spoke to 25-30 groups, and without fail, two questions always came up:

  1. How is Blake Bortles still in the NFL?
  2. Some variation of "Why is the polling always wrong?"

Now that the Jaguars have benched Bortles, we can dispense with the first question and focus on the second one.

There are basically two kinds of polls: the ones you don’t see, and the ones you do.  Candidates who are spending millions and millions on television need good data, and that is the polling you don’t see, or at best rarely see.  To the public, these are the dreaded “internal polls,” the ones that, when they see the light of day, must be wrong because they were released with an agenda.  In reality, internal polling is typically pretty spot on, for two very connected reasons: political pollsters stake their reputations on good numbers, because candidates must have the best information to make decisions, and candidates spend a lot of money for those polls to be accurate.  For example, during the 2018 Democratic primary for Governor, while the public polling missed it, our internal Graham polling was pretty clear that Jeff Greene's negative attacks on Phil Levine and Gwen Graham created significant downward movement for the frontrunners and opened space for Andrew Gillum to rise.  In the same way, the DeSantis private polling, which was released, showed his surge over Putnam to be earlier, more significant, and more sustained than the public polling reported.

Then there are the public polls.

First, longtime readers of this blog will know my issues with public polling did not start in 2016 or 2018.  Longtime political analyst Charlie Cook once called public polling “dime store junk,” a phrase that, at times, is charitable.  I’ve been particularly harsh on Quinnipiac (I believe I've called their polling a "dumpster fire" and once suggested they couldn't count the toppings on a pizza), not because I have any particular dislike for the school, or their beloved mascot Boomer the Bobcat, but because their polling is often cited as a benchmark.  When organizations like Quinnipiac publish polls, given the brand they have created, people take them at face value, despite the fact that their polling in Florida has often been a disaster, like the Jaguars' football season.

Let me give you an example.  The 2012 Romney/Obama Florida race was one of the most stable races I’ve ever been around.  Both candidates started with a pretty high floor, and while there was movement, there were never any big shifts, and the race never moved far from even.  Yet, over a four-month period, Quinnipiac had the race go from Romney +6 to Obama +9, then, within a month, back to Romney +1.  Over that same period, the RCP polling average moved a couple of tenths of a point.  Another university pollster called the race for Romney just weeks after showing Obama +3.  In reality, the race was always very close.

Public polling in the Governor’s race here in Florida in 2018 was, to quote noted linguist Deion Sanders, a total “shibacle.”  Back to Quinnipiac: I felt like I spent most of October dealing with texts/emails/tweets from activists/donors/supporters wondering why I kept saying the Senate and Governor’s races were close, when the Q poll kept saying they weren’t.  And they weren’t alone; the bulk of public polling lived in a reality that was separate from the real one.

So where is the disconnect?  Let’s explore a few things:

  1. First, most public polling is done at a fraction of the cost, and that alone will diminish the quality.  In a lot of university polling, the live callers are students, not trained call centers.  Robo polls are cheap and can’t be used to reach cell phone users (though some pollsters supplement with internet panels).  It isn’t a hard and fast rule about everything, but generally in life, if you spend $1,500-2,000 on something versus $40,000-60,000, the former product will be inferior.
  2. Florida’s voter file is public, but many pollsters still use random digit dialing off phone lists, which will always produce a sample that is broader than the actual electorate.
  3. Florida’s electorate is exceptionally stable, yet many public pollsters don’t weight their samples to it.  For example, some just let party ID “float,” meaning they peg the turnout model to wherever the random sample lands, which makes a race seem fluid when, in fact, the only thing really moving is the pollster’s model.
  4. Florida is a hard state to poll, particularly with ethnic minorities.  Our state’s Hispanic and Black populations are both exceptionally diverse, and missing the mark here can really mess up a survey.  For example, take a survey of 800 Florida voters, and you will probably get about 120 Hispanic respondents.  If that sample is too Puerto Rican, the whole thing will be too Democratic, and if it is too Cuban, it could look too Republican (see the sketch after this list).

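To put some rough numbers on points 3 and 4, here is a small sketch of how much a topline can move based purely on the mix inside a 120-person Hispanic subsample.  The vote shares below are invented for illustration only, not taken from any real survey:

```python
# Toy illustration of point 4: all vote shares are made up for demonstration.

def topline_dem(hispanic_mix, n_total=800, n_hispanic=120):
    """Blend a hypothetical non-Hispanic number with a Hispanic subsample
    whose Democratic support depends on its Puerto Rican/Cuban mix."""
    non_hispanic_dem = 0.48  # assumed D share among the other 680 respondents
    dem_support = {"puerto_rican": 0.65, "cuban": 0.35}  # assumed D share by group

    hispanic_dem = sum(share * dem_support[group]
                       for group, share in hispanic_mix.items())
    n_other = n_total - n_hispanic
    return (n_other * non_hispanic_dem + n_hispanic * hispanic_dem) / n_total

# Same "race," two different draws of the 120 Hispanic respondents:
print(topline_dem({"puerto_rican": 0.7, "cuban": 0.3}))  # ~49.2% D: sample skews Puerto Rican
print(topline_dem({"puerto_rican": 0.3, "cuban": 0.7}))  # ~47.4% D: sample skews Cuban
```

With these made-up numbers, nothing about the race changes, yet the topline swings nearly two points based solely on which 120 Hispanic voters happened to pick up the phone.  Weighting to the known composition of the electorate is how good pollsters control for exactly this.
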
And keep in mind, beyond all of this, even polling that is done well typically operates at a 95% confidence level, meaning that roughly one in twenty polls will fall outside its margin of error by chance alone.
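
For a sense of scale, here is a quick back-of-the-envelope calculation (my own, not tied to any specific poll) of what the 95% margin of error looks like for a simple random sample:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample,
    ignoring design effects from weighting (which usually widen it)."""
    return z * sqrt(p * (1 - p) / n)

print(round(margin_of_error(800) * 100, 1))  # ~3.5 points for a full 800-person sample
print(round(margin_of_error(120) * 100, 1))  # ~8.9 points for a 120-person subsample
```

That is roughly plus-or-minus 3.5 points on the full sample, and nearly 9 points on a subsample the size of those 120 Hispanic respondents, before any of the other problems even come into play.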

The problem is that these issues create a lot of “noise” in polling, as in the 2012 example of Quinnipiac showing a 15-point swing in a race that, over those four months, maybe moved 2 points.

And here’s where it starts to go south. The media reports public polling as fact, typically with very little context, and often with no regard for a pollster’s record, particularly as news coverage looks more and more like sports coverage, with the focus on who is winning or losing.  Put numbers next to a name, add a little paragraph about the poll, and someone will at least tweet it.  And supporters of different campaigns latch on to one poll or another to bolster their own arguments.

I do think there is a place for public polling, and a lot of groups do a ton of fascinating work.  For example, you should read everything the Pew Trusts puts out – not because of their great horserace numbers, but because they engage in fascinating surveys about the political and cultural fabric of America.

But since I don’t think public horserace polling is going away, here are a few ideas on how we should consume it going forward:

  1. Every poll that is reported should come with a detailed methodology statement that breaks down the sample in concrete terms.  All good science, and polling is a science, should be replicable.  It shouldn’t take high-level calculus to re-engineer a poll.  Who did they call?  What was the party split, ethnic split, gender split, and regional breakdown?  And how was the data collected?  None of this is too much to ask.  If it isn’t available, don’t report the poll.
  2. Individual polls should be reported next to polling averages.  The averages themselves aren’t perfect, but they at least provide some context for when a poll is outside of reality, so when CNN releases a poll showing the Governor’s race in Florida at a 12-point margin, the consumer can see it is an outlier.
  3. To this last point, journalists need to do a better job of filtering this stuff.  Journalists don’t have to report every poll that comes across their email.  My friend Tom Eldon takes it a step further and suggests that, just like the College Football Playoff committee only updates its rankings once a week, journalists should aggregate polling and release it once a week.
  4. Finally, it is not news that Florida is a very close state – top level races are going to be inside the margin of error – so when someone releases numbers that show someone winning by a margin that is outside the norms of political reality, even if they come from an organization with “university” in the title, there is no requirement to publish them.

There are some journalists who are wise in how they report polling, and not every public poll is a mess, but overall, the incentive is for groups to create public polls, because public polls create news, and news creates interest in the organization doing the polling.  And while I don’t believe organizations intentionally publish suspect data, there is little incentive for them to tighten up their internal controls and get things closer to right.  Until the news media collectively decides to be more careful in reporting this data, it is up to the rest of us to be skeptical.
