Building for the Thinking Machine Age
Fathom’s Vision for the AI Century
At the turn of the new year, many of us – as AI practitioners, parents, citizens – find ourselves asking the same questions: What kind of future are we building? What type of world will our children inherit? Are we doing enough? Can we even do anything at all?
At Fathom, we’ve been thinking intensely about what the age of AI means for our economies and societies, and what role we can play in shaping this trajectory. In this essay, we look to historical precedent, economic data, and social research to understand where we are, where we’re going, and what it will take to shape the path between the two.
A Walk through Revolutions Past
Past industrial revolutions have followed a general trend: an intensive period of technological progress precipitates widespread economic dislocation that spurs societal unrest, until the social contract is renegotiated and an equilibrium is reestablished.
The first industrial revolution saw the externalization of power from human and animal muscle to machines. This enabled the mechanization of textile production, steam as a power source, iron production at scale, and early factory organization. It also led to serious displacement of labor: skilled artisan laborers saw their wages collapse over a generation, prompting the Luddite uprisings against the destruction of workers’ economic security and place in society. At the same time, cities ballooned massively, without the infrastructure to support such growth; life expectancy in many cities did not top 30.
This intense social turmoil forced the birth of a new social contract: England passed reforms such as the Factory Acts and Public Health Acts and expanded suffrage, and workers demanded and won political representation. Political empowerment and working-class self-organization led to a period of stabilization: real wages began to rise, mortality rates in cities fell, and the working class developed its own institutions and a (modest) political voice.
The second industrial revolution, a generation later, saw the systematization of science and innovation itself: electricity as a distributable power source, steel manufactured at scale, the emergence of the chemical industry, and the internal combustion engine. Again, labor unrest, deflation, wage pressure, financial panics, and other economic disturbances followed.
And yet again, new social movements emerged. Organized labor rose to its full power, with workers exercising their power through historic strikes. Populism and social democracy grew in influence. Progressive Era reforms, followed by the New Deal, gave rise to a broader social democratic consensus that created, for the first time, a genuine safety net for ordinary people, ushering in the Golden Age of Capitalism in the post-war period. Growth was broadly shared, living standards rose, the middle class expanded, and society was relatively stable.
The third industrial revolution, beginning in the 1950s, saw information and computing become the central organizing technologies of our time. Digital computing, semiconductors, manufacturing automation, the Internet, and the emergence of biotechnology transformed how we innovate and live. The consequent economic distress – the hollowing out of the Rust Belt, the stagflation of the 70s, for example – prompted social discontent. And again, state intervention brought something of a decades-long reprieve – for most.
The Fourth Revolution
Even as the effects of the third revolution continue to run their course, a fourth has already arrived with the emergence of AI. So far, it has followed a similar pattern – technological leaps lead to economic restructuring which precipitates social discontent – but it has moved faster and with far more intensity. It remains to be seen whether restabilization and a new socio-political bargain will follow.
Some, of course, are unconvinced that AI represents a real industrial revolution that will result in the same scale of disruption as in the past. These critics argue that previous revolutions had far greater impacts – on daily life, and in terms of GDP – than AI will. They point to the slowing of productivity growth that has accompanied recent developments in AI, and argue that the low-hanging fruit has already been picked. Therefore, while we may see impressive innovation, economic restructuring and social dislocation may not follow.
We are deeply skeptical of this argument for several reasons. AI is unlike any other technology that came before. Indoor plumbing and mechanization fundamentally changed daily life (and economic productivity), but their transformational capacities had a ceiling. Machine learning is different: it keeps improving, and it is beginning to show that it can improve itself recursively, with no obvious limit. AI systems can be used to improve AI research, which leads to better AI; the tool and the product are the same. This recursive quality is different from, say, using better steel to make better steel mills. Just as human intelligence has bootstrapped civilization over generations, AI can build on its own progress – but on a dramatically compressed timeline.
The empirical record is striking. Over the past decade, AI performance has improved predictably with increases in compute, data, and model size. Capabilities that seemed impossible at smaller scales (in-context learning, chain-of-thought reasoning, code generation) have emerged at larger ones, in turn enabling greater scaling. No obvious ceiling has emerged. This doesn’t prove that scaling will continue indefinitely. Some researchers argue we’re approaching data constraints, diminishing returns, or architectural limits. They could be right. But we do not know where the ceiling is, or whether there even is one. And the asymmetric nature of the risks and rewards means we cannot afford to plan only for the world where AI capabilities plateau.
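For readers who want the quantitative shorthand behind this claim: the scaling results referenced above are often summarized in the research literature as an empirical power law, in which a model’s prediction error falls smoothly as training compute grows. A stylized form – illustrative only, with constants that vary from study to study and are not our own estimates – is:

\[ L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}, \qquad \alpha > 0 \]

where C is training compute and C_0 and α are empirically fitted constants. The substance of the observation is simply that, within the ranges measured so far, each additional order of magnitude of compute has bought a roughly predictable improvement, with no sharp cutoff yet observed.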
But boundless capability is not, by itself, a problem. Indeed, it offers the possibility of immense upside for society. The challenge is what that capability means for the structure of governance, work, and the distribution of its rewards.
The Great Hollowing Out
We must plan for a profound technological leap. Will economic dislocation follow?
Historical precedent suggests yes. In previous industrial revolutions, displaced workers could (eventually) transition to new, higher-order jobs. Artisans moved to factories, factory workers moved to services: a process that was disruptive, but ultimately largely beneficial for society. But artificial intelligence targets cognitive tasks, leaving workers with few places to go where they can maintain their income and status and be safe from future automation.
And unlike technologies of the past, AI is general purpose. A loom cannot do accounting. But an AI system can draft legal briefs, do your taxes, analyze medical scans, write code, and plan your day – meaning displacement may happen across the economy simultaneously. Once an AI is trained, the marginal cost of its “cognitive work” becomes increasingly trivial. The incentive to substitute low-cost AI for humans, who are expensive and have irreducible costs, will be enormous. AI does not need to be better than humans to be preferred – it just needs to be good enough at a lower cost. AI products that perform just as well as, or slightly better than, human workers can still drive massive job loss.
The traditional economist’s response to automation concerns – productivity gains create wealth, wealth creates demand, and demand creates jobs – may not hold if gains are not widely distributed. Fewer workers may produce the same output, productivity gains flow to capital owners, corporate profits rise but wages stagnate. We may end up with a productive economy with persistent unemployment, not because we cannot produce, but because we’ve severed the link between production and broad-based income.
We do not know whose vision for the economic future is correct, nor do we know the timeline for this potential economic dislocation. But it’s reasonable to expect that this transition will move faster than past transitions. AI scales extremely quickly; once a task is automated, it’s automated – everywhere. The question is whether our institutions and social contract can keep pace.
Societal Disruption, Supercharged
This upheaval will come at an extremely precarious time. The social fabric is already fraying, and AI-driven shocks will tear it further.
Institutional trust has collapsed precipitously, not just in government and media but in science and business, too. The rapid rise of AI-generated misinformation is poised to push trust even lower, threatening to paralyze the basic machinery of collective response.
Political polarization has shifted from “we have different values” to “we have different realities.” Social media algorithms optimize for engagement and create echo chambers. Political identity correlates with geography, education, and religion more sharply than ever. And the emotional intensity of partisan identity has risen dramatically: political violence is increasingly tolerated, especially by young adults.
Optimism about the future is at record lows. Majorities now believe their children will be worse off than they are. The material basis for optimism has weakened: Millennials and Gen Z face higher housing costs relative to income, heavier student debt, less job security, and lower wealth accumulation than previous generations had at the same age.
A crisis of purpose pervades our society, as the status and identity traditionally conferred by one’s work have eroded. Many people now feel their jobs are meaningless. The gig economy has turned workers into task-performers. The structure and social integration once maintained by stable employment are washing away. The technological ground is shifting, the half-life of skills grows shorter, and the credentialing effect of a college education matters less each day.
Finally, social connection has declined, creating a global epidemic of loneliness. Americans report fewer close friends and more isolation than in previous decades. More time on screens, longer working hours, and the decline of “third spaces” have made us less happy, healthy, and resilient.
These crises compound. Low trust makes it hard to solve problems collectively. Polarization blocks policy responses. Despair erodes trust. Social fragmentation reduces collective capacity.
This is the context into which AI-driven economic change will arrive: not a healthy society with high trust, strong institutions, and shared purpose, but a fragmented, angry society already in a downward spiral of declining collaboration and resilience.
We Have Never Needed Our “A” Game More
No matter where you may stand on the economic and societal impacts of AI, almost everybody acknowledges that this transition will be technically and bureaucratically difficult to manage. And this assumes that the AI century will be one of disruption, rather than catastrophic or existential risk caused by misalignment.
We simply do not know enough about these systems, and how they work, to know whether full alignment with human-centered goals is possible. We must therefore prepare for the possibility of misalignment by beginning to build consensus around a real framework for maintaining human control in areas where intelligence, generality, and autonomy intersect.
Moving into a world with AI is complicated and potentially dangerous. And dealing with those dangers requires meaningful and thoughtful dialogue, as well as an acknowledgment that trade-offs will be necessary. Do we prioritize addressing present-day harms, or focus on guardrails against future AI risks? Do we slow down in the race to reach AGI, but risk another country getting there first? Should we invest in public AI literacy, or in a safety net for workers whose jobs might be displaced by AI?
These are real – and difficult – questions. And the worry is that our societies will enter the AI future too angry, too destabilized, and too uncertain to understand and grapple with these tradeoffs.
Sleepwalking into the Future
At the same time, the actors currently expected to steward the development of AI are not on a path to doing so. Democratic governments move deliberatively by design and are not hubs of technical expertise. The leading frontier AI labs spend only a small fraction of what they invest in compute, research, and market position on tackling these societal-level questions. And there is little meaningful coordination across labs, across borders, or between the public and private sectors.
Most conspicuously absent from the conversations that do happen are the people whose lives will be most disrupted: the workers and communities who will bear the costs but have no voice in the decisions. The default path leads somewhere predictable: a chaotic patchwork of inconsistent regulations, inadequate protections, and people left to navigate a transformed world with no meaningful say in how it was shaped.
Fathom’s Role
At Fathom, it’s our mission to prevent this future from unfolding.
For those of you who don’t yet know us: Fathom is a new type of organization that exists to build the global architecture for the AI century: to help societies in America and around the world navigate the transition to a world where AI is fully integrated into how we live, work, and govern ourselves.
We do this by developing and scaling governance, technical, and policy solutions that increase AI trust, accountability, and beneficial adoption. And we work to expand the table where decisions about AI are made: ensuring that those most affected by this technology have a voice in shaping it. While we conduct and support research, our focus is on action.
The new bargains we help forge between governments and citizens, between markets and workers, between innovation and security, will be how we get to a future that benefits everyone, not just those building or deploying the technology. That means developing the policies, institutions, and frameworks that can rewrite the social contract for the AI age, just as previous generations built labor protections, social insurance, and public education to stabilize earlier technological transitions. That, above all else, is what Fathom is for.
Our Work
Fathom’s first major initiative targets AI governance – the most urgent and underdeveloped piece of the puzzle.
AI has broken the traditional approach to technology regulation. The pace of development far outpaces the speed of oversight, and the complexity of the technology exceeds the technical capacity of most government regulators. And the private sector is not sufficiently incentivized to align its interests with those of the public. In short, neither government nor industry, acting alone or together, is equipped to meet this moment.
In our first year, Fathom has begun building and scaling a system of Independent Verification Organizations, or IVOs: a practical, adaptable model of AI governance designed to make AI products safer and more trustworthy. Under this framework, governments set outcome-based goals around safety, privacy, security, accuracy, and other public-interest objectives. A marketplace of independent, accredited evaluators then verifies whether AI products meet those goals. Products that pass earn a competitive market advantage and legal certainty: a rebuttable presumption of compliance or evidentiary support in litigation.
This approach can address a wide range of harms: self-harm and suicide risk, data privacy and security, content safety, accessibility, regulatory compliance, and more. It can also help prevent the tort nightmare that is building as AI products cause harm and courts struggle to assign liability. But beyond mitigating specific risks, independent verification creates something more fundamental: a foundation of safety that enables trust – and trust enables adoption.
Fathom’s proposed model is gaining traction. Legislation has been introduced in California, Ohio, and Virginia, with additional states, red and blue, preparing to follow. We are working with partners in the UK, EU, and other jurisdictions to scale the model internationally. And while legislation is necessary to fully catalyze this market, we are simultaneously working with evaluation providers and demand-side organizations to build the infrastructure and capacity this system requires.
Where We Go From Here
Governance is foundational – but it is only the beginning.
As Fathom scales our governance work, we are also studying longer-horizon challenges. What does the advent of highly capable AI actually mean for labor markets, for education, and for democratic participation? Where are the gaps in current thinking? What interventions could make a difference, and at what scale? What are the principles that should guide us as we pursue policy and technical interventions? What trade-offs are we, as a society, comfortable making?
In the coming weeks, we will be publishing more about our principles for a good transition and the domains and workstreams where they need to be explored. We will also be publishing polling that sheds light on the priorities and concerns of Americans as we strive to make a successful transition to the AI century.
Join Us
We are not naive about the difficulty of what lies ahead. The forces driving AI development are powerful, the interests are entrenched, and the timeline is unforgiving. But we also believe this is a profound inflection point: one of those rare moments in history where the choices we make will shape the trajectory of generations to come. And if we get this right, the upsides will be tremendous: for the prosperity of our countries, the health of our people, and the well-being of our societies.
If you are working in this space – whether in government, civil society, industry, or academia – and want to partner with us, please reach out. And if you want to dedicate yourself to the mission of genuinely bending the arc of this transition, consider joining us. We are building an organization that will need the most talented and committed people we can find.
The window for action is open for now, but not forever. We intend to seize it.
-Fathom

