AI Governance: What Americans Really Want
Results from Fathom’s latest polling on the public’s vision for a future with AI, and what they are willing to trade for it
Two years ago, Fathom began asking the American public how they felt about artificial intelligence. We asked baseline questions: were people using AI, how were they using it, and how did they feel about it? The headline findings were stark: in each iteration of our polling, we found that people were using AI more, but remained highly ambivalent about the technology. On governance, our polling showed that the public wanted guardrails (overwhelmingly, across party lines), but had little confidence in the government or industry to deliver them.
This spring, we asked harder questions: Americans want governance, but what does their preferred AI future actually look like? And what are they willing to give up to get there?
Standard opinion polling tends to surface areas of easy agreement. Ask whether child safety matters and you get near-universal support. Ask whether accountability is important and the numbers look similarly strong. But broad agreement on abstract “good AI future” principles does little to inform actual policymaking. To surface genuine preferences, polling has to introduce the kind of friction that real-world policy choices entail.
Our latest national survey was designed to introduce exactly that. We explored the public’s vision for the AI future they want with three different sets of questions. First, we had respondents rate twenty principles for a good future with AI, from children’s safety to the augmentation of human reasoning to long-term risks, based on how important they thought each one was. We then reframed these principles in terms of costs: accountability “even if this creates liability risks for companies,” verification “even if it slows innovation.” Finally, we asked respondents to make either/or choices between principles that might come at the expense of one another.
The result is a nuanced portrait of the AI future Americans want and of the tradeoffs that policymakers, candidates, and civil society will need to navigate as AI governance begins to appear on the ballot.
The full report can be accessed here.
Americans want trust and accountability, even if it comes at the cost of speed or innovation
Across every question format, every demographic group, and every level of AI familiarity, the public prioritizes trust and accountability in AI. That’s true even when the costs are made explicit.
When asked to rate twenty principles for a good future with AI, the top priorities are unambiguous: child safety at 90% “very important,” followed by accountability for AI-caused harms, verifiable standards, and maintaining human reasoning rather than replacing it.
These aren’t isolated preferences. The same people who prioritize child safety also prioritize trust, accountability, and guardrails, forming the dominant cluster in AI governance opinion. For policymakers, this means trust and accountability are the organizing principles of public opinion, while access and public investment remain clearly secondary.
But the real test comes when costs are made explicit. When people were presented with the same priorities but with tradeoffs attached—accountability even if it creates liability risks, verification even if it slows innovation—support barely moved. Trust and verification held at 86% total importance. Accountability held at 87%. The margins are wide, and they hold consistently across partisan lines.
When American leadership is on the line, consensus thins
Not all governance priorities share that level of strong support. When trust collides with American competitiveness, agreement begins to fracture. The tightest splits in the entire survey all involve U.S. leadership, national control, or commercial competition.
International AGI coordination versus the U.S. keeping full control of its AI trajectory: 49–41. Public AI infrastructure versus private companies: 50–37. Shared decision-making versus leaving it to companies: 53–35. Americans want governance and American dominance, and when the two conflict, governance loses ground.
The sharpest example is frontier AI development. Presented without tradeoffs, a strong majority (83%) supports slowing AGI development until risks are better understood. But when asked whether they support international coordination to manage that slowdown, support splits nearly down the middle. And when the question makes explicit that coordination means the U.S. giving up some control over its own AI decisions, support drops further. People want AI they can trust, but not at the cost of American leadership.
An emerging partisan divide reinforces the pattern. Republicans are the only group that favors national control over international AGI coordination. These patterns track closely with the usage divide: super-users, who skew slightly more Republican, younger, and male, are consistently the most willing to trade governance for speed. The bipartisan agreement on trust holds for now, but the competitiveness argument—that governance slows America down—could begin to erode it.
The window for workforce intervention is open
Americans want workforce protection against AI-driven disruption, and they are open to a wide range of policy interventions. Every single policy we tested received majority support, from familiar measures—retraining programs at 84%—down to lesser-known ones, like a sovereign wealth fund that shares AI-generated wealth with the public, at 71%. The signal is not a preference for any specific mechanism, but rather that the public is worried about AI-driven disruption and demands intervention in some form.
This aligns with what we found in policy preferences: when forced to choose, Americans overwhelmingly prefer ensuring work retains meaning and economic security over letting market forces decide, 66 to 23. They prefer active worker support over market adaptation, 59 to 32. The public decisively rejects a “let the market sort it out” approach to the AI transition.
Democratic, equal access to AI: agreed in principle, deprioritized in practice
Americans broadly agree that AI’s benefits should be shared, but it’s not a high priority. Principles around equitable access and public investment sit at the bottom of every ranking in this survey, and the gap between these items and the strong support that trust receives is one of the most consistent patterns in the data.
Government building public AI infrastructure: 36% “very important.” AI at reduced cost for schools and nonprofits: 40%. Community input in AI decisions: 43%. These items still poll well in total importance, but few respondents treat them as top priorities. That intensity gap makes “AI for good” policies a harder sell for policymakers.
The super-user data sharpens the picture further. On whether AI companies should be required to contribute to public benefit, super-users split at just 52–46. The people most familiar with the technology are the least convinced it should be directed toward public purposes.
The public wants new voices at the table
We now have a better understanding of what Americans’ preferred AI future looks like. But who do Americans trust to lead this transition and to govern AI?
Independent experts lead at 71%, ahead of tech companies at 61%, federal agencies at 51%, and elected officials at 37%. This hierarchy has held across all three waves of Fathom polling, and it has only grown more pronounced.
Takeaways for policymakers
This survey reveals an American public with a nuanced vision of what a good AI future looks like. They want AI systems they can trust, clear pathways for accountability, protection for their children, a well-managed workforce transition, and caution on AGI development, with America leading the way.
But how can policymakers deliver on this vision?
The durable consensus on trust and accountability is the place to start. Verifiable standards, clear paths for accountability when AI causes harm, human oversight of high-stakes decisions: any governance framework built on this foundation will have broad, bipartisan support. But early signs of partisan divergence on competitiveness mean the window for building on shared ground is now, not later.
The competitiveness tension, meanwhile, points to a framing challenge as much as a policy one. The public wants both guardrails and American leadership. Frameworks that position trust and accountability as a competitive advantage, rather than a constraint on competitiveness, are the ones that will hold.
On workforce policy, the opportunity is uniquely open. The public believes that intervention is necessary and is willing to consider a wide range of mechanisms. That flexibility will not last indefinitely. As displacement becomes more visible, preferences will harden. Policymakers who act now will define the terms.
And on the question of who should lead, the public has given a clear answer: new voices. Independent experts and civil society are the institutions Americans trust most, and the ones with the least institutional power. Closing that gap, and giving third-party experts a meaningful seat in AI oversight, is where the legitimacy lies.
The mandate is there. And as AI governance moves closer to the ballot box, the window is open. But it will not stay open indefinitely.
The full report can be found here. To schedule a briefing with our team, please contact us here.