Voters are not fools
Issue positioning isn’t everything, but it’s not nothing either
G. Elliott Morris has a piece up criticizing my friend Simon’s Deciding to Win report. Here’s the crux of his critique:
“The report suffers from something I’m going to call the “Strategist’s Fallacy” in politics — the tendency for campaign consultants and political strategists, especially on the Democratic side (where quantitative analysts are overwhelmingly focused on policy positions and ideological point-positions), to map their mental model of how they make political decisions onto voters. They implicitly assume all voters make choices and select candidates the same way elites do.”
While strategists are “constantly consuming news and sorting politicians into buckets based on their positions and comments,” most voters are “unaware of the minutiae of everyday political news,” and “many don’t know basic facts.” Instead, voters use a “variety of inputs” when deciding how to vote, which may not be limited to candidate or party policy platforms.
His argument is essentially that these other inputs play such a big role in determining vote choice that issue positions don’t really matter. He’s wrong. Here’s why.
A Deep Dive Into The Social Science
The next few paragraphs get pretty in-depth, but if you want a TL;DR, it’s that a large number of studies using a wide variety of methods show that politicians’ stances on issues do influence voter behavior. When those positions change, voting changes. To take one example, when the Democratic Party shifted to the left on race, it won Northern Black voters and lost Southern whites (that’s from the Fowler paper). But it didn’t happen overnight — it took decades for Republicans to start reliably winning Southern voters.
In particular, Morris references Philip Converse’s 1964 paper “The nature of belief systems in mass publics,” which argues that only about 10% of voters have coherent mental models of where the parties stand on issues. In Converse’s analysis, the remainder of the electorate thinks about politics in either “group interest” or “nature of the times” terms; that is, based on their membership in or sympathy towards particular social groups or based on how the economy is doing.
Morris writes that “much political science finds that when it comes to their attitudes on the issues, voters are very fickle.” Policy preferences often vary “seemingly at random across surveys” and “moderates in the electorate have a mix of liberal and conservative views (sometimes very extreme ones) across issue domains, not moderate views across all of them.”
The finding that policy preferences seem to move randomly across surveys is from the Converse paper, and the bit about people who appear as moderates when you aggregate responses over multiple issues having a mix of liberal and conservative views is from David Broockman’s 2016 paper “Approaches to Studying Policy Representation.”
The problem is that Morris is only citing some of the literature. Here’s what he leaves out.
In their 2008 paper “The Strength of Issues: Using Multiple Measures to Gauge Preference Stability, Ideological Constraint, and Issue Voting,” Stephen Ansolabehere, Jonathan Rodden, and James M. Snyder Jr. argue that Converse’s conclusions are “driven largely by measurement error associated with the analysis of individual survey items,” and that once you aggregate multiple survey items, “approximately half of the variance in responses across individuals within the typical issue item” (that is, the change in a respondent’s answers to the same question across survey waves) “is attributable to measurement error.”
Here’s a sketch of their model. When you ask someone a question on a survey (for example, about their support for a single-payer healthcare system), their response consists of their true opinion plus some measurement error. The error might come from the respondent not properly understanding the question itself or from clicking the wrong button on the survey or something else. As you increase the number of questions that are asking about the same issue, the average of your error terms shrinks—you get a closer and closer estimate of the respondent’s true belief.
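This logic can be illustrated with a toy simulation (my own sketch, not code from the paper): each survey response is the respondent’s true opinion plus random noise, and averaging more items on the same issue pulls the measured value toward the true belief.

```python
import random

random.seed(0)

def measured_opinion(true_opinion, n_items, noise_sd=1.0):
    """Average n_items noisy readings of the same underlying opinion.

    Each item response = true opinion + Gaussian measurement error.
    """
    responses = [true_opinion + random.gauss(0, noise_sd) for _ in range(n_items)]
    return sum(responses) / n_items

# Averaging more items shrinks the error, roughly like noise_sd / sqrt(n_items)
true_opinion = 0.5
for n in (1, 4, 16):
    trials = [measured_opinion(true_opinion, n) for _ in range(10_000)]
    mean_abs_error = sum(abs(t - true_opinion) for t in trials) / len(trials)
    print(f"{n:>2} items: mean absolute error {mean_abs_error:.3f}")
```

The error on a single item stays large no matter how many respondents you survey; it’s averaging multiple items per respondent that recovers stable individual-level opinions, which is the crux of the Ansolabehere–Rodden–Snyder critique of Converse.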
In a 2018 paper, “The Importance of Knowing ‘What Goes with What’: Reinterpreting the Evidence on Policy Attitude Stability,” Sean Freeder, Gabriel L. Lenz, and Shad Turney look at the same question—how many citizens hold meaningful views about policy?—from a different angle. They replicate the finding that adding more survey items on the same topic increases the stability of respondent views in surveys. But they also find that voters only have stable opinions on issues when their views align with those of their party, lowering their “estimated share of the public with stable opinions on a given issue in the United States to…20%-40%.”
In a 2020 paper titled “Partisan Intoxication or Policy Voting,” Anthony Fowler argues that it is often very difficult to distinguish purely partisan voting from issue voting in many empirical tests. “If a voter forms her attachment to a party based on her policy preferences and then that [partisan] attachment influences her voting behavior,” Fowler asks, “is she not a policy voter?” Towards the end of the paper, Fowler discusses a survey experiment where respondents are presented with “two hypothetical candidates and randomly vary both the characteristics of the candidates”—such as party, candidate education level, age, income, gender and issue positions on abortion, gay marriage, or healthcare—“and also the number of those characteristics that we reveal to respondents.” The results?
“And across more than 22,000 hypothetical votes cast, 74 percent were cast in line with the respondent’s party. In other words, respondents are more likely than not to vote in line with their party, but more than one-quarter of the time they are willing to deviate from their partisanship based on other information. Fewer than 1 in 5 respondents cast a partisan vote in all hypothetical contests they considered, and fewer than 1 in 3 cast a partisan vote more than 90 percent of the time. The presence of additional, randomly assigned information about candidates is enough to induce most respondents to deviate from their party some of the time.”
In particular, Fowler’s experiment finds that candidates are rewarded for breaking with their party on big issues: “when a voter learns that the Republican candidate is more liberal than the Democrat on abortion, health care, and gay marriage, they are more likely than not to vote against their party.”
Of course, this experiment won’t map perfectly to the real world. A pro-choice registered Democrat might say they’d rather vote for a pro-choice Republican over a pro-life Democrat in a survey, but in a real election, they might find other reasons (taxes, candidate biography) to vote for the Democrat anyways. Still, Fowler argues that the experiment shows that “29 percent is an upper bound on the share of intoxicated partisans in our sample, and 31 percent is a lower bound on the share of policy voters.”
What Does All This Political Science Mean?
The implication of these studies is that politicians need to be careful about staking out policy positions far from the median voter; they can’t just rely on the strength of party identity to define them. Politicians like Susan Collins and Joe Manchin, who break with their party’s stances, can often win over voters who tend to vote for the opposing party.
Morris writes that “the list of Democratic policies” polled by Deciding to Win “is full of examples of things no national Democratic candidate supported in 2024,” like “abolishing the police, abolishing prisons, expanding the Supreme Court to 13 members, and banning fracking.” According to Morris this is “at best, poorly explained by WelcomePAC, or at worst, an example of how the organization is basing its analysis of the Democrats’ left flank on the type of biased arguments popular with Republicans and in echo-chambers online, particularly on Elon Musk’s X. Frankly, if I didn’t know anything else about this group, I’d think this research were coming from a pro-Trump Super PAC.”
Sorry, but that’s bullshit. Here’s a bill from 2023, sponsored by Ed Markey, Tina Smith, Elizabeth Warren, Jerry Nadler, Hank Johnson, and Adam Schiff, that expands the Supreme Court to 13 members. Here’s a 2020 bill from Bernie Sanders called the “Fracking Ban Act.” Here is a blog post from Planned Parenthood explaining why the organization supports defunding the police.
Morris’s cop-out is that these policies are things that “at best, a handful of Democratic elected officials voiced support for in the years before 2024.” His own logic undermines this point. Morris argues that voters are more ambiently aware of parties’ policy goals than they are explicitly knowledgeable about entire policy platforms. To the extent that’s true, it’s fair game to poll things that are associated with the Democratic Party—proposals from prominent elected officials and positions taken by influential advocacy groups—but not written out in the DNC platform.
Summing It All Up
Nobody is saying that issue positions are everything, or that charisma or media coverage don’t matter. Clearly, they do. It’s why it’s important for people like Jared Golden to not just take moderate votes, but to have a working-class feel. But the argument that issue positioning doesn’t matter at all, that it’s all just vibes or having a favorable media environment, just isn’t backed up by the data or by the academic literature. There are enough issue voters and these days elections are close enough that issue positioning is important, even if it’s not the whole ballgame.
That can be unsatisfying sometimes. It’s frustrating when policies you believe would do a lot of good for the country aren’t popular. But it’s important to acknowledge the tradeoffs instead of pretending they’re not there.
Nate Silver caught some backlash for focusing on Disney having to defend the 2024 presidential model. That’s not our sweet spot. But we do spend a lot of time looking at GOP-held House seats, and the WAR model GEM created has some odd results. The model rates Mariannette Miller-Meeks (a Republican who won a Trump+8 district by fewer than 800 votes) as an overperformer and Sanford Bishop (a Democrat who won a Harris+8 seat by 13 points) as an underperformer. I’d take the results of that model with a grain of salt. Or several grains.