In Whose Image
April 6, 2026

Beyond ego: what should intelligence do, who should control it, and whose values should sit inside the machine?
OpenAI launched in 2015 with Sam Altman and Elon Musk as co-chairs. Musk left the board in 2018. Dario Amodei was not an original OpenAI founder; he later left OpenAI and co-founded Anthropic in 2021. Demis Hassabis took a different path: DeepMind started in 2010, Google bought it in 2014, and Google folded DeepMind together with Google Brain in 2023 under Hassabis’ leadership.
AI keeps reorganizing into rival camps because its leaders hold rival theories of power, risk, and control. That divergence becomes a problem as AI turns into social infrastructure. AI companies are racing to build systems of extraordinary power while the public gets too little transparency, too little duty of care, and too little say over how that power is used. Social media shaped attention. AI reaches one layer deeper: it shapes judgment, language, perception, and agency.
Sam Altman: The Distributor
Altman’s public record points in one direction: get the machine into the world, then learn at speed. OpenAI says its nonprofit will retain control as the company evolves its structure, and Altman framed that future in striking terms: build “a brain for the world,” make it easy for people to use, and keep restrictions relatively narrow. The 2023 board revolt showed something else. He was pushed out, then brought back within days, which revealed how much of OpenAI’s political, employee, and investor coalition had come to run through him. (OpenAI)
Publicly, Altman reads as a coalition builder with a very high tolerance for institutional ambiguity. Nonprofit mission, public-benefit language, giant capital demands, consumer product instincts, and a rhetoric of broad access all sit comfortably in the same frame for him. His core impulse seems less priestly than infrastructural. He does not want AI kept in a lab. He wants it diffused through society fast enough that usage itself becomes legitimacy. That makes him the most fluent politician of the four. It also creates the clearest risk: scale can start masquerading as consent. (OpenAI)
Dario Amodei: The Steward
Amodei built the clearest counter-model. Anthropic describes itself as an AI safety and research company working on reliable, interpretable, and steerable systems. It is a public benefit corporation, with a Long-Term Benefit Trust meant to represent the public interest alongside shareholder interests. In February 2026, Anthropic released version 3.0 of its Responsible Scaling Policy, a framework for catastrophic-risk thresholds and deployment decisions. In his own essay, Machines of Loving Grace, Amodei argued that AI’s risks are the main barrier standing between humanity and what he sees as a radically positive future. (Anthropic)
Publicly, Amodei reads as the steward. He is not anti-progress. His writing makes that plain. He believes AI could drive enormous gains in science and human welfare. But he appears far less willing than Altman to trust raw momentum. His instinct is to build the guardrails first, or at least alongside the capability. Make the model powerful, yes, but make it legible, governable, and harder to misuse. That mindset has a clear strength: it takes downside seriously before the downside scales. It has a clear weakness too: stewardship can become paternalism fast, especially when a small group starts deciding what level of risk the rest of society is allowed to live with. (Anthropic)
Elon Musk: The Sovereign
Musk’s AI story is the least subtle. He co-founded OpenAI with Altman in 2015, left its board in 2018, launched xAI in 2023, said its purpose was to “understand the universe,” and from the start tied it to his broader industrial stack alongside X and Tesla. xAI’s own site now says the company exists to advance scientific discovery and human comprehension. In February 2026, SpaceX acquired xAI, formally unifying Musk’s AI and space ambitions. (xAI)
Publicly, Musk reads as the sovereign. He does not seem interested in institutional separation. He does not want AI sitting alone in a carefully bounded lab. He wants it stitched into distribution, satellites, vehicles, compute, media, and space. His language often invokes truth and first principles, but the organizational pattern is vertical control. Own the stack. Collapse the distance between model, infrastructure, and reach. That model has brute-force advantages. It can move quickly and coordinate across layers. It also has the most obvious failure mode: when one person’s appetite for command stretches across media, mobility, communications, defense-adjacent infrastructure, and AI, the natural circuit breakers start disappearing.
Demis Hassabis: The Scientist-Architect
Hassabis represents a different founder type altogether. DeepMind started in 2010 with an interdisciplinary mission to build general AI systems. Google bought it in 2014. In 2023, Google merged DeepMind with Google Brain into Google DeepMind, with Hassabis leading the combined lab. In 2024, Hassabis and John Jumper shared the Nobel Prize in Chemistry for AlphaFold-related work on protein structure prediction. Inside Alphabet, Hassabis often favored “the profound over profits.” (Google DeepMind)
Publicly, Hassabis reads as the scientist-architect. He wants power, but he seems to seek it through research depth, scientific prestige, and long-horizon institutional autonomy rather than public combat. Compared with Altman, he looks less interested in mass-market improvisation. Compared with Amodei, less anchored in formal safety machinery as an identity. Compared with Musk, far less interested in rolling everything into one founder-centered command system. His core impulse seems to be: solve intelligence properly, prove it through science, then let the products follow. The risk here is quieter. Scientific idealism can still end up serving commercial and geopolitical imperatives once it lives inside one of the world’s largest platforms. (Google DeepMind)
The Problem
Put the four side by side and the field looks less like one race than like four constitutions fighting for adoption. Altman wants intelligence distributed. Amodei wants intelligence constrained. Musk wants intelligence integrated under sovereign control. Hassabis wants intelligence understood and advanced through science. Each view contains a real insight. Each also carries a predictable blind spot. Diffusion can outrun governance. Stewardship can centralize moral authority. Sovereignty can collapse accountability into founder will. Scientific idealism can get absorbed by the larger machine around it. (OpenAI)
This is Tristan Harris’ point, pushed to its logical end. Social media already taught us that when private companies optimize systems at public scale, product choices turn into social conditions. Harris’ argument was never only about apps making people scroll too much. It was about a handful of design decisions quietly taking hold of how people think, focus, and make sense of the world. CHT’s AI roadmap extends that warning: the public needs safety, transparency, rights, duty of care, and balanced power before these systems become too embedded to challenge. (Center for Humane Technology)
That’s why the founder schisms matter. They are not gossip. They are previews. A founder’s temperament becomes a lab’s structure. A lab’s structure becomes product defaults. Product defaults become social reality. By the time the public notices, a theory of truth, freedom, risk, or obedience has already shipped.
The first alignment problem may not be the model. It may be the small circle of people trying to decide, in private, what alignment should mean for everyone else.