In January 2026, William MacAskill, a former Oxford philosopher and co-founder of the effective altruism movement, and Guive Assadi, currently Chief of Staff at Mechanize, challenged one of the most persistent ideas in longtermism: the so-called Maxipok principle. For years, Maxipok – the idea that reducing existential risk dominates all other moral considerations – has functioned as a trump card in AI governance debates. It can be used to justify extreme measures and centralised control in order to prevent runaway AI.
MacAskill and Assadi now argue that this is a mistake. They suggest we must look beyond the long-run survival of the human species to the broader category of “lock-in”, defined as moments where temporary choices crystallise into permanent power structures. This re-orientation of longtermist thinking is highly welcome. But it should be extended to draw on the one discipline that has studied lock-in for a century: competition law.
Longtermist thinking about AI risk and traditional antitrust regulation currently operate in parallel silos. This separation, I argue below, is dangerous. If we accept that preventing the permanent entrenchment of bad values or power asymmetries is a priority, then the consolidation of the AI industry becomes not only an economic concern, but also a safety concern. Decentralized market structures, enforced by competition law, offer a more reliable defence against catastrophic lock-in than benevolent centralized institutions.
The End of the Binary
Nick Bostrom, a philosopher who founded the Future of Humanity Institute at Oxford, developed Maxipok as a moral heuristic: reducing existential risk should dominate all other considerations. However, this principle essentially rests on a binary view of the future. It assumes outcomes cluster into two groups: existential catastrophe, with a value of zero, or astronomical flourishing. If the future is truly all-or-nothing, then the only rational strategy is to maximize the probability of the “OK outcome”. Nothing else matters.
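To see why the binary assumption does all the work, here is a minimal formalisation of the reasoning in my own notation (neither Bostrom’s nor the paper’s):

```latex
% Minimal sketch of the binary (Maxipok) reasoning; notation is my own.
% p = probability of avoiding existential catastrophe (the "OK outcome")
% V = value of an astronomically flourishing future; catastrophe is worth 0
\[
\mathbb{E}[\text{future value}] = p \cdot V + (1 - p) \cdot 0 = p \cdot V
\]
% With V fixed and enormous, every intervention is ranked solely by its effect
% on p, so maximising the probability of the OK outcome dominates everything
% else. Once persistent intermediate outcomes with values strictly between
% 0 and V are admitted, this conclusion no longer follows automatically.
```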
In their new paper, MacAskill and Assadi criticise this assumption. They emphasise that the space between extinction and utopia is vast. We face a spectrum of persistent, sub-optimal futures. For instance, a totalitarian regime could seize control of artificial general intelligence (AGI) and enforce a stable but oppressive order for millennia. Since AGI systems would be capable of performing any intellectual task a human can, they could also enforce institutional structures – such as property rights and legal frameworks – that become impossible to change once embedded in automated systems. They also note, rather speculatively, that space settlements enjoying decisive defensive advantages could lock in the values of their founders for billions of years.
MacAskill and Assadi propose refocusing longtermist priorities on “grand challenges” to account for these “lock-in” scenarios. By this they mean problems that shape civilization’s trajectory persistently – including power distribution, values, and governance models – but do not necessarily threaten complete extinction. These include how we structure AI governance institutions, which values get encoded into automated systems, and how we distribute decision-making power over technologies that affect billions.
This conceptual shift is welcome, since it promises to connect longtermist reasoning more closely with empirical reality. In particular, it helps to address a persistent criticism of longtermism, most recently articulated in “More Everything Forever” by Adam Becker: that this school of thought typically multiplies vanishingly small probabilities by astronomically large numbers to justify focusing on distant futures, even though both the probabilities and the projected impacts rest on thin assumptions. By focusing instead on lock-in mechanisms and persistent institutional effects that are observable in the present day, longtermist thinking can be grounded in phenomena we can study empirically. Put simply: we do not need to imagine doomsday scenarios to see lock-in risks; we can observe them in the industrial organisation of the AI sector right now.
The Hardware of Control
Consider the physical reality of AI development in 2026. Until recently, the main bottleneck was finding engineers who could design transformer architectures or optimise training algorithms. Today, Anthropic’s own internal research shows its engineers now use Claude in 60% of their work, achieving 50% productivity gains. As Ethan Mollick notes, these tools represent “a sudden capability leap” in AI coding. Some predictions suggest 90% of code will be AI-generated by 2026. Meanwhile, Google DeepMind’s AlphaEvolve system, introduced in May 2025, points to AI systems that autonomously refine their own training procedures.
This shift from software scarcity to abundance changes where AI development will actually be bottlenecked. Today’s industrial era of AI is symbolised by Stargate and xAI data centres, gas turbines and grid investments. In other words, the constraints that now matter are physical – and these are what MacAskill and Assadi’s lock-in argument should focus on. High-end GPUs, land for data centres and, above all, energy exhibit genuine scarcity. As the Center for Strategic and International Studies recently documented, electricity supply is the most acutely binding constraint on expanded compute. The chip supply chain shows similar concentration. NVIDIA controls 92% of the discrete GPU market and over 80% of AI accelerator hardware, while TSMC produces 64% of advanced logic chips globally.
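To put these figures in the language competition authorities actually use, the sketch below computes the Herfindahl-Hirschman Index (HHI), the standard concentration measure obtained by summing squared market shares. The residual share splits are purely illustrative assumptions, not data.

```python
# Illustrative only: Herfindahl-Hirschman Index (HHI) for two layers of the AI
# hardware stack, using the shares cited above and assuming (hypothetically)
# that the remainder is split among a few smaller rivals.

def hhi(shares_percent):
    """Sum of squared market shares, in percentage points.
    Markets above roughly 2,500 are conventionally classed as highly concentrated."""
    return sum(s ** 2 for s in shares_percent)

# AI accelerators: NVIDIA > 80%; the residual split is an assumption.
accelerators = [80, 10, 5, 5]
# Advanced logic fabrication: TSMC 64%; the residual split is an assumption.
foundries = [64, 18, 12, 6]

for name, shares in [("AI accelerators", accelerators), ("Advanced foundries", foundries)]:
    print(f"{name}: HHI = {hhi(shares):,.0f}")
# Both values land far above the conventional "highly concentrated" threshold,
# which is the structural fact the lock-in argument builds on.
```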
These are high barriers to entry. Only actors with sovereign-scale resources – tech giants or states – can compete at this level. In economic terms, these constraints might create “increasing returns to scale”: the leading actor gets the best chips, the most energy, and the handful of superstar software engineers, which widens the gap with competitors and compounds its lead. If a single corporate or state actor achieves decisive dominance in AGI development on this basis, it acquires the capacity to set standards and values for the entire trajectory of the technology. This is the lock-in MacAskill and Assadi warn against. Yet their paper does not mention competition as a potential remedy.
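The compounding dynamic described above can be made concrete with a deliberately crude toy model. The growth rates and the “leader bonus” below are assumptions chosen for illustration, not estimates of the real AI race.

```python
# Toy model of a compounding-advantage race. All parameters are illustrative
# assumptions, not estimates of the actual AI market.

def simulate(periods: int = 10, leader_bonus: float = 1.15) -> float:
    """Two actors start almost even; each period capability grows, and the
    current leader's privileged access to scarce inputs (chips, energy,
    talent) adds a small multiplier on top."""
    a, b = 1.05, 1.00  # initial capabilities: a 5% head start for actor A
    for _ in range(periods):
        bonus_a = leader_bonus if a >= b else 1.0
        bonus_b = leader_bonus if b > a else 1.0
        a *= 1.5 * bonus_a  # base growth times leader bonus
        b *= 1.5 * bonus_b
    return a / b  # capability ratio after the race

print(f"Leader's advantage after 10 periods: {simulate():.1f}x")
# A 5% head start compounds into a roughly 4x capability gap, illustrating how
# small initial asymmetries in access to compute and energy can harden into
# the decisive dominance the text describes.
```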
The Trap of Benevolent Centralization
This omission matters because longtermism has historically tended to drift toward centralization. OpenAI’s founding charter committed it to ensuring AGI “benefits all of humanity” through broadly distributed AI. Yet the company has since pursued partnerships that concentrate rather than distribute power over frontier AI development. Some scholars have proposed “singleton” scenarios, that is, a single global project to develop AGI safely. Others advocate an “AI Manhattan Project” or an international monopoly modelled on nuclear governance.
The standard safety argument, based on Bostrom’s “first mover thesis”, often runs like this: AGI is dangerous because the first superintelligence could obtain a decisive strategic advantage over all other intelligences by virtue of being first; therefore, we should centralize AGI development in a single, safety-conscious project. From this perspective, a “CERN for AI” or a dominant, responsible lab seems like the safest path. If your only goal is preventing humanity’s extinction in the long run, this logic probably holds. A single red button is indeed easier to guard than fifty.
But if you worry about lock-in, the calculus inverts. Concentrating the power to shape the future in a single institution, no matter how high-minded its current leaders, creates a single point of failure for human values. We cannot know which moral frameworks will stand the test of time. We cannot predict how a “benevolent” monopoly will act once it is secure from challenge. From this perspective, centralization itself becomes the hazard.
Here, competition law offers a different mechanism for safety: decentralised markets with true, fair competition. By enforcing market structures that prevent any single entity from becoming too powerful, competition authorities essentially preserve optionality. A decentralized system with multiple independent power centres is messy, but it is resilient. It prevents any single actor’s error or malice from defining the future for everyone. This mirrors the logic of “small is beautiful”, the argument that decentralized, human-scale institutions often prove more adaptable and accountable than monolithic structures. In AI governance, as in economics, size and concentration create fragility disguised as strength.
Antitrust as Risk Mitigation
Importantly, both longtermists and antitrust thinkers would benefit from closer dialogue. Competition regulators face a problem that mirrors the longtermist dilemma: they must act before harm occurs, because once a market has tipped into monopoly, the damage is often impossible to undo.
Historically, regulators have often failed to predict which companies or products will matter in the digital age. In 2012, Facebook (now Meta) acquired Instagram for $1 billion, and in 2014 it purchased WhatsApp for $19 billion. The U.S. Federal Trade Commission (FTC) filed suit in 2020 seeking to unwind both acquisitions, alleging they were anticompetitive. That case continues through 2026, with regulators treating these social media acquisitions as the defining competition issue. But the landscape has shifted beneath their feet. Because each generation of teenagers tends to migrate to platforms their parents do not use, TikTok supplanted Instagram among younger users regardless of Meta’s market power.
Similarly, Amazon’s announced acquisition of iRobot for $1.7 billion in August 2022 drew intense regulatory scrutiny from both the FTC and the European Commission. The deal was ultimately abandoned in January 2024 amid regulatory pressure. But this consumer product market likely matters less for long-term power consolidation than dominant positions in the cloud computing infrastructure used for AI training. In general, the consolidation of foundational infrastructure, such as chips, cloud services, or energy contracts, has received far less regulatory attention. Yet these layers now determine who can build frontier AI.
Here, longtermist scenario modelling could help sharpen legal enforcement. Instead of looking at consumer welfare, traditionally measured through prices, regulators should ask: which decisions taken now will have persistent effects on the distribution of power across multiple possible futures? In my opinion, this is longtermism’s distinctive contribution. It could reorient competition policy from reactive firefighting to anticipatory positioning.
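What might such scenario-informed review look like in practice? The sketch below is a deliberately simplified illustration of the idea; the scenarios, probabilities, and entrenchment scores are invented for the example and do not come from MacAskill and Assadi or any regulator.

```python
# Toy sketch of scenario-weighted merger review. Scenario names, probabilities
# and entrenchment scores are invented purely for illustration.

SCENARIOS = {
    # scenario: (assumed probability, how durably the deal entrenches power, 0-1)
    "compute remains the bottleneck":       (0.5, 0.9),
    "compute becomes abundant and cheap":   (0.3, 0.2),
    "a breakthrough shifts the bottleneck": (0.2, 0.4),
}

def expected_entrenchment(scenarios):
    """Probability-weighted persistence of the power asymmetry a deal creates."""
    return sum(p * score for p, score in scenarios.values())

print(f"Expected entrenchment: {expected_entrenchment(SCENARIOS):.2f}")
# 0.5*0.9 + 0.3*0.2 + 0.2*0.4 = 0.59 -- a deal that looks benign on today's
# prices can still score high on persistent power concentration across futures.
```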
Conversely, competition policy brings concrete tools to the sometimes-vague governance debates in AI safety circles. For instance, the “essential facilities doctrine”, a legal principle requiring dominant infrastructure providers to grant competitors access, has been successfully applied to previous waves of general-purpose technologies such as railroads and telecommunications. The same principle could mandate that dominant AI infrastructure providers sell compute access to competitors on fair, reasonable, and non-discriminatory (FRAND) terms. Stricter merger controls could have blocked Microsoft’s $13 billion partnership with OpenAI, which vertically integrated cloud infrastructure with frontier models. If they are adequately resourced, competition authorities can use their existing tools to prevent the type of lock-in feared by MacAskill and Assadi.
Does competition not create dangerous race-to-the-bottom dynamics? This objection conflates two types of competition. Product competition, i.e. racing to release consumer chatbots, does create such pressures. But structural competition, i.e. maintaining multiple independent power centres, does not. We can have a competitive market structure with strict safety regulation enforced across all actors. The U.S. Food and Drug Administration, for instance, permits competition among pharmaceutical companies while enforcing safety standards. The same logic applies here. A decentralized industry with strong safety regulation is safer than a centralized monopoly with weak oversight or value drift over time.
Preserving Human Agency
Unchecked races to the bottom are a real phenomenon. However, the solution is not to declare a winner and hope that it remains benevolent. To align AI governance with the reality of lock-in risks, three changes are required.
Firstly, we must treat the concentration of computing power as both a market dominance issue and a safety risk. Although compute thresholds alone are insufficient for estimating AI risks, compute caps could still help to ensure that no single company dominates. More generally, diversity in infrastructure ownership prevents a single choke point for global intelligence. Secondly, AI safety researchers should talk more about competition. A market with multiple players often reveals risks faster than a secretive monopoly. Thirdly, antitrust regulators must scrutinise the physical layer, ranging from energy contracts and chip fabrication capacity to data centre construction, rather than focusing solely on the models.
MacAskill and Assadi are right to move us beyond the Maxipok fixation. However, identifying the lock-in problem is only the first step. If we want to ensure the future remains open, we must ensure the market for building it remains open too. This means antitrust enforcement against AI infrastructure consolidation must begin now, before lock-in becomes irreversible.

Anselm Küsters is a digital policy expert at the Centre for European Policy (cep) in Berlin and Interim Professor for Digital Humanities at the University of Stuttgart. He regularly writes about “small is beautiful” and digital competition policy at anselmkuesters.substack.com.