Fathom Turns One
A review of our first year, and a look-ahead at year two
One year ago, a small group of policy and technical experts came together in recognition of a shared and urgent concern: the world is not prepared for the AI transition.
AI is the most transformative technology of our lifetime, but we lack the policy solutions necessary to steer society gracefully and successfully through this transition. And equally concerning, our policymaking apparatus is not set up to create those solutions. AI has broken the way that we regulate technology, with government unable to keep up with the rapid pace of AI development and industry not incentivized to consistently prioritize safety over speed.
At the same time, the AI policy debates in Washington have been dominated by brawls over federal versus state leadership that miss the point: neither, alone, is equipped to effectively shape the future of AI.
Our origin story
It was from the belief that a better approach was not only necessary, but possible, that Fathom was born. We founded our organization to promote innovative approaches to AI governance that are rooted in the history of technology governance and – most crucially – explicitly designed to promote both safety and innovation. Fundamentally, our vision is to push the AI policy conversation beyond tired debates and toward solutions that meet the needs of American consumers, industry, and society more broadly.
Such a vastly different approach requires us to be different from other AI policy organizations. We are neither a think tank nor an advocacy organization. We are not ideological, and we don’t start with solutions in mind. We do not hail from a single political tradition. We don’t represent industry – in fact, we conduct extensive due diligence to maintain our rigorous independence from corporations, including frontier labs and the FAANG companies.
So what is Fathom? We see ourselves as solutions architects, who marry ambitious and creative policy ideation with a keen understanding of the art of the possible. In practice, that means we convene, listen, ideate, and engage, but also that we design, build, fund, and scale. We work just as comfortably with academics and researchers as with civil society and safety activists, and with technical experts and industry leaders. We don’t fit neatly into a box – and that’s the way we like it.
Our solution
We operate with a clear theory of change, and when we generate a solution that we and our diverse partners believe will work, we swing big. That approach led us to our first major policy effort: the Independent Oversight Marketplace for AI.
This policy framework – in which a state government authorizes a marketplace of independent expert-led verification organizations, which lead a voluntary process of certifying AI companies that meet heightened standards of care – was born out of extensive polling, focus groups, and conversations with hundreds of leaders from government, academia, industry, and civil society. Those conversations culminated in our inaugural convening event, The Ashby Workshops, which featured thought leaders such as Professor Gillian Hadfield, a pioneer in the field of regulatory markets for AI, and emerging technology governance expert Dean Ball.
We followed that process of convening, listening, and learning with extensive coalition building and socializing, designed to refine our concept and build the broadest possible tent of support for it, pulling from communities representing nearly every facet of the AI policy ecosystem. Technical researchers, AI skeptics, so-called AI accelerationists – to build effective solutions, we knew we needed to talk to, and benefit from the insight of, them all.
The concept of an Independent Oversight Marketplace for AI is now being explored in legislation in California, and we’re teaming up with legislators in other states on legislation that could extend this model across the country. We recognize that this policy solution is bold and that we’re moving fast. But we believe that such urgency is what this moment – and the public – are calling for.
Our future
We are extremely proud of our work in this inaugural year. And buoyed by this success, we are laying the groundwork for an even more ambitious second year.
Thanks to the generosity of our donors, we are entering our second year focused on advancing each piece of our work – convening, building, and scaling – across the policy and technical spaces. In the coming months, we will look to bring our first policy wins across the finish line and launch complementary efforts that scale our mission: securing creative solutions that make AI safer, more trusted, and more widely adopted.
Essential to this work are our “listening mode” efforts, and so we will continue to convene voices from across the AI ecosystem, conduct rigorous independent polling to better understand the concerns of the public, partner with legislators and policymakers from across the political spectrum, and facilitate tabletop exercises and workshops that help us refine and strengthen our governance concepts. So many share the core principle that animates us – that a path toward a better AI future can and must be co-created – and we’re committed to harnessing the creativity, expertise, and insight of this broad community for good.
Join us!
Over the next year, we intend to reshape the AI policy space for the better. To do that, we need to surround ourselves with the best minds and talent. If you represent a business, industry organization, think tank, government agency, or university and are interested in learning more about our work and partnering, we’d love to hear from you. And if you want to work in-house with an unsurpassed team of policy and technical experts to bring about a better AI future: please consider joining us.
Onward,
Andrew, Bri, and Blake (Fathom Co-Founders)