Friday, October 11, 2024

What's the Matter with Michigan?

The majority of election-hawks seem to have settled on election forecast models as the ideal approach to understanding elections and what the polls are saying. These models undoubtedly have several characteristics that commend them: weighting polls based on factors such as historical accuracy, introducing error to make the predictions probabilistic (a 75% chance of winning for candidate X instead of an outright call), and using non-poll data to fill gaps in the polling, such as a convention bounce or undecided voters.

What people should be wary of, though, is over-confidence in these models, attributing to them an objectivity or accuracy that doesn't exist. Partisanship has fed that over-confidence. It's worth bearing in mind how the modelers' choices can affect the ultimate output, and how accurate the models actually turn out to be at the end of the day.

A good way to evaluate a model is to compare it to a simple average. RCP has fallen far out of favor with most election-hawks, particularly those who lean left, but its average is much simpler and can serve as a useful measuring stick even if it's less data-driven. It also performed better in 2016 and 2020 than 538 did.

Why is Michigan so Different?

Right now, Nate Silver has Harris up by 1.1% in Michigan (48.4 to 47.3). RCP, on the other hand, has Trump up 0.9 (48.5 to 47.6). How can they be getting such different results? I presume most readers will reflexively think "Because RCP sucks," or the more diplomatic "Because RCP is run by conservatives who bias the results." Actually, that's not the case with Michigan.

Difference in Timing

The first difference is that RCP has a more recent cut-off: it considers only the 10 most recent, non-repeated polls. For Michigan, as of today, that effectively goes back to 9/19, or 3.5 weeks. Silver's model considers polls going back to 8/30--6 weeks, double RCP's window. The effect is to add weight to older polls, and if the race has moved since then, the result will be biased. That has definitely been the case here: taking all of the polls in Silver's model and estimating a 5-poll moving average shows recent movement toward Trump.
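
For concreteness, here is a minimal sketch of that 5-poll moving average; the dates and margins below are placeholders, not the actual Michigan polls:

```python
# A 5-poll moving average of Trump-minus-Harris margins, oldest first.
# The data here is invented for illustration.
import pandas as pd

polls = pd.DataFrame({
    "end_date": pd.to_datetime([
        "2024-08-30", "2024-09-05", "2024-09-12", "2024-09-19",
        "2024-09-26", "2024-10-03", "2024-10-10",
    ]),
    "trump_margin": [-1.0, -0.5, 0.0, 0.3, 0.5, 0.8, 0.9],
})

# Each point averages the 5 most recent polls up to that date
polls["ma5"] = polls["trump_margin"].rolling(window=5).mean()
print(polls)
```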

If Silver cut off the polls at the same date as RCP, it would move his average about 0.2% toward Trump.

Differences in Poll Choice and Weighting

One of the smart decisions Silver made was to weight polls according to their historical accuracy, along with other factors. RCP, by contrast, has a simpler and more subjective method: its editors choose which polls to include, usually the most high-profile ones, and then weight all of them equally. Silver, on the other hand, includes all polls and weights them using historical accuracy, a measure of transparency, and sample size.
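
The contrast is easy to state in code. A minimal sketch, with invented pollsters, margins, and quality weights (Silver's actual weighting formula is proprietary):

```python
# (pollster, Trump-minus-Harris margin, hypothetical quality weight)
polls = [
    ("Pollster A", 1.0, 1.0),
    ("Pollster B", -2.0, 0.6),
    ("Pollster C", 0.5, 0.9),
]

# RCP-style: every included poll counts equally
simple_avg = sum(m for _, m, _ in polls) / len(polls)

# Silver-style: weight by accuracy, transparency, sample size, etc.
weighted_avg = (sum(m * w for _, m, w in polls)
                / sum(w for _, _, w in polls))

print(f"simple: {simple_avg:+.2f}, weighted: {weighted_avg:+.2f}")
```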

For the period RCP is using, it includes 10 polls, while Silver includes 16. Of the 10 that appear in both, 6 have Trump winning; of the 16 Silver includes, Trump is winning in 7. That means RCP is disproportionately leaving out polls that have Harris ahead, and the polls only in Silver's average are disproportionately the less famous ones.

The table below shows which polls made it into the averages, along with Silver's letter grade for each pollster. For the most part, RCP's cut-off corresponds to a letter grade of B-. The only exceptions are Mitchell Research (Trump +0.5), Morning Consult (Harris +3.5), and some combinations of polling firms.

Blue denotes polls that were included both in Silver's average and RCP's average. Taken 10/11/2024.

If all firms were included in RCP's average, weighting them all equally, Trump would be up by 0.9 instead of 1.0; their poll choice doesn't have much effect on the outcome.

What Else?

Beyond these explanations, admittedly, I can't figure out what's driving the difference. I don't know exactly how Silver calculates his average; I tried accounting for house effects and weights and still can't reproduce it. I don't have the ability to plug in the non-poll fundamentals, so it must be some combination of those and his secret sauce on the polling calculations that I just can't crack.

Using the weights that he provides, before accounting for house effects and the like, I calculate from his Michigan data a Trump margin of -0.2%. Adding in the house effects, I still get -0.2. (It's not that they didn't change anything; they certainly did. It's just that they cancelled each other out.)
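
For reference, this is the kind of calculation I mean. The pollster names, house effects, and weights below are invented for illustration, not Silver's published values:

```python
# Weighted average with a house-effect adjustment: subtract each
# pollster's estimated partisan lean before averaging. All numbers
# here are invented.
house_effects = {"Pollster A": 1.5, "Pollster B": -2.0}  # + = leans Trump

# (pollster, raw Trump-minus-Harris margin, weight)
polls = [("Pollster A", 2.0, 1.0), ("Pollster B", -1.0, 0.8)]

adjusted = [(name, margin - house_effects.get(name, 0.0), w)
            for name, margin, w in polls]
avg = (sum(m * w for _, m, w in adjusted)
       / sum(w for _, _, w in adjusted))
print(f"house-effect-adjusted weighted margin: {avg:+.2f}")
```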

At the moment, I cannot explain how Silver's result is so pro-Harris. It's hard to believe that the fundamentals give her a 1.2% bump, but that's the only thing I can think of.

Thursday, June 27, 2024

Less is More Even in Election Polling

Key Takeaways

  • Election forecast models provide less value than they seem.
  • They include unnecessary and often counterproductive elements.
  • Simple averages have been more accurate in recent elections.

Overview

538 pioneered the election forecast model on the premise that polls, particularly early in an election cycle, could be augmented with historical voting data and economic factors. This additional information could plug the holes in the still-sparse polling that prevails months before the election. It could also make projections for voters who were still undecided or who might change their minds in the ensuing months, as the campaigns ramped up and drew more of their attention.

Now, there are multiple election forecasting models. The Economist notably offers one, and Decision Desk HQ and The NYTimes Upshot each operate one as well. Several political bloggers also run their own. These outfits pride themselves on basing all their analysis on data: historical voting patterns by geography, plus economic factors. They are not to be confused with outlets like The Cook Political Report or CNN, which use experts' opinions to guide their predictions.

What seems to get lost in the political news frenzy is whether these forecasts actually add value to the analysis. Do they in fact do a better job of predicting the outcomes of elections than simply relying on polls? That is the claim on which they should be tested. The question is not "did they predict the winner nationally?" but "did they predict the winner better than polls would have?"

Each of these models starts with the polls and favors some over others depending on their quality. It then applies a special sauce that accounts for the demographics of the state or district, the economy, and how voters have responded to these factors historically. Lastly, it adjusts the poll average to predict an actual vote outcome. If the projections differ from the polls, it's because of what goes into that secret sauce.
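
As a rough schematic, the pipeline amounts to a weighted poll average blended with a "fundamentals" prior. This is not any model's actual formula; the blend factor and all numbers below are invented:

```python
# Schematic forecast: weight polls by quality, then nudge the average
# toward a fundamentals-based prior. Everything here is illustrative.
def forecast_margin(polls, fundamentals_margin, poll_weight=0.8):
    """polls: list of (margin, quality_weight) pairs."""
    poll_avg = sum(m * w for m, w in polls) / sum(w for _, w in polls)
    return poll_weight * poll_avg + (1 - poll_weight) * fundamentals_margin

# Two polls of differing quality, with a fundamentals prior of -1.5
print(forecast_margin([(1.0, 1.0), (-2.0, 0.5)], fundamentals_margin=-1.5))
```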

After 2016, the world was in an uproar because the polls and the models all predicted Trump would lose. 538 argues that its forecast was closer than the other models' and that it gave Trump some probability of winning, but that should not persuade anyone. What did persuade everyone was 2020. All the models were right: they predicted a Biden win, and that's what they got. Look more closely at the predictions compared to the results, however, and it seems likely that 538 got lucky, because, as in 2016, it consistently overestimated Biden's vote share.

Now, in 2024, I've become very skeptical that these models provide any value. The issue might be particular to Trump, but in both 2016 and 2020 the models did not provide any information beyond the polls themselves. In fact, they performed worse than the polls. And now, for the 2024 election, 538 has the race at almost 50/50, despite Trump being ahead in the polls, which he never was in 2016 or 2020. Historically, not only have polls underestimated his vote share, but the last two elections have shown that the Democratic candidate needs a margin of 4-5 points to beat Trump in the electoral college, because Democratic voters are concentrated in fewer states.

538, like many other organizations, has lost sight of its ultimate goal: to provide better predictive power for an election. It has become a full-fledged politics blog, different from most in that it is more focused on data analysis, but as Slate said in 2020, "The polling gurus portray themselves as objective number-crunchers, unswayed by human bias or emotion. But in truth, they are in the reassurance business....[T]hey're hawking a false sense of certainty—and, presumably, racking up earth-shattering levels of web traffic in the process."

2020 Election

In 2020, on election day, 538 projected an 89% chance that Biden would win the election, and hey, they were correct; Biden did win, so they must have done well, right?

The thing is, if you look more closely at the state-by-state projections, there was a consistent over-projection for Biden. The "Path to Victory" graphic is a good jumping-off point. In this graphic, states are sorted from most pro-Trump to most pro-Biden, according to the model, and a line is drawn through the state that would put either candidate over the 270 electoral vote threshold. In this case, that state was Pennsylvania, which was colored blue, meaning Biden was likely to win it.

Further, Arizona, Florida, North Carolina, and Georgia were all colored blue. Ohio was the "bluest" state expected to go for Trump, or alternatively the least-red state expected to go for Trump. 538 projected that Ohio would end up 49.8% Trump and 49.2% Biden, a margin of 0.6% for Trump. The election, though, ended up 53.4% Trump and 45.2% Biden, a margin of 8.2% for Trump. 538 gave Biden an extra 7.6% boost.

Of the 13 purplish states, every one had a pro-Biden bias in 538's model. Ohio tied Wisconsin for the largest, at 7.6%; the smallest was Georgia, at 0.7%. On average, 538 gave Biden a 4.3-percentage-point boost.

But perhaps 538's poor results were a consequence of the polls themselves being biased. Perhaps 538 took biased polls and actually de-biased them back toward the actual results. In fact, the opposite happened: 538's adjustments pushed its predictions farther from the actual results.

Compounding that, 538, like RealClearPolitics, takes the raw polls and averages them. Unlike RealClearPolitics, though, 538 averages the polls in a way that favors "high quality" pollsters, using a proprietary weighting approach. In theory, this should produce poll results more illustrative of reality than a simple poll average like RealClearPolitics's.

Again, this is not so. The RealClearPolitics simple average performed substantially better than 538's more sophisticated one. RCP's average favored Biden (compared to the actual election results) by 2.3% on average across these states, versus 4.3% for 538. RCP's average got Pennsylvania and Nevada exactly right, and actually gave Trump a bump in Georgia and Minnesota.
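
For clarity on how such bias figures are computed: for each state, take the projected Biden margin minus the actual Biden margin, then average across states. In the sketch below, Ohio's figures come from this post; the remaining entries are placeholders, not the real 538 or RCP numbers:

```python
# bias = projected Biden margin - actual Biden margin (pct points);
# a positive bias means the projection favored Biden.
actual = {"OH": -8.2, "GA": 0.2, "PA": 1.2}

projections = {
    "538": {"OH": -0.6, "GA": 0.9, "PA": 4.0},   # OH from the post
    "RCP": {"OH": -4.0, "GA": -0.8, "PA": 1.2},  # placeholders
}

for model, proj in projections.items():
    biases = [proj[s] - actual[s] for s in actual]
    print(f"{model}: mean pro-Biden bias = {sum(biases)/len(biases):+.1f}")
```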

2020 RCP and 538 Poll Results vs. Actual Results

Given state-by-state projections that were much more favorable to Biden than the election results, which he won relatively narrowly, it's hard not to conclude that 538 was more lucky than accurate, and that its more sophisticated approach did not make the model any better at predicting the outcome than a simple average would have been. Adding more variables to a model might make it more sophisticated, but it doesn't make it better at doing what it is meant to do.

2016 tells much the same story. On average, RCP's simple average was biased toward Clinton by 2.1%, and 538's weighted average of polls was biased toward Clinton by 3.8%. Unlike in 2020, however, the "fundamentals" (which 538 uses to adjust the polls) slightly favored Trump, so 538's average bias dropped from 3.8% pro-Clinton to 3.3% pro-Clinton, still more biased than RCP.

2016 RCP and 538 Poll Results vs. Actual Results

So what do these models provide? For one, in the 2024 web economy, content is king. A model with a million levers pulled in different directions every day provides more topics and discussion for our 24/7 news cycle's professional pundits and for the universe of amateur pundits across the social media landscape. Each day, these models provide a week's worth of material to dissect and comment on.

It also gives more people the ability to find positive news amid the bad. Your candidate can be down in the polls, but if the models say he's going to win, you need not despair. And on top of that, it's science: based on regressions and data, which you may not fully understand, but which is surely more reliable than opinions and polls.
