From the early warnings of Bletchley to the grand stage of Paris: As the third AI Action Summit begins in France, observers are wondering whether a continent dominated by regulation can finally help forge a global standard for AI governance. But amid domestic squabbles, gaps between ambition and funding, and geopolitical pressures, will agreeing on a set of lofty ideals even matter for the EU’s vision of an AI continent that actually develops and deploys the technology at scale?

The story of the annual AI Action Summits begins in 2023 at Bletchley Park in the UK, where 28 influential AI countries and the EU first sounded the alarm about the threat of “frontier AI”. At the time, their greatest fear was the spectre of AI gone rogue, i.e. systems that twist human ingenuity into bioweapons or slip beyond our control. In a sense, this also reflected the location of the summit, as UK-based scientists, such as Oxford philosopher Nick Bostrom, have long argued for taking seriously the existential risks that might be posed by AI superintelligence. In the end, that first summit produced the Bletchley Declaration, a set of high-minded pledges focused squarely on safety.

A year later, in Seoul, the summit expanded in scope, in line with the growing hype around large language models being developed by frontier labs such as OpenAI. Accordingly, the conversation broadened to include not only safety, but also innovation and inclusion. With voluntary safety commitments from major technology companies, including OpenAI, and a pledge to create a global network of AI safety institutes, there was a sense of cautious optimism. But real-world headlines at the time, about chatbots spewing misinformation or private labs developing large AI models with minimal oversight, also signalled fault lines between the glossy summit rhetoric and the messy, profit-driven reality of technological progress.

Paris embraces implementation

By the time the Paris edition of the AI Action Summit convenes on 10-11 February 2025, recurring economic problems in the Western world, the emergence of Chinese frontier models such as DeepSeek’s R1, and a general European push for greater competitiveness mean that pontificating about theoretical AI threats is no longer enough. Accordingly, the new buzzword is “action”, and the meeting aims to drive real commitments: a €2.5 billion fund to accelerate open-source AI tools in developing countries, a set of 35 “convergence challenges” tied directly to the UN’s Sustainable Development Goals, and a pledge to incorporate AI considerations into existing environmental agreements. Almost 100 nations, supported by over a thousand stakeholders, have come together to finalise a truly global framework on AI.

But for all the talk of unity, the summit’s ambitious scope is fraught with tension. Budget woes in European countries such as France mean that domestic AI funds often have to be scaled back, fuelling broader scepticism about the plausibility of Europe’s massive AI pledges – just as US President Trump was proudly announcing the $500 billion Stargate project. The shaky position of the French government, mired in ongoing domestic political turmoil, is unlikely to help either.

Europe’s search for an AI strategy

It helps to see how the Paris summit fits into Europe’s broader push for competitiveness. The so-called Draghi report and the new Commission’s competitiveness agenda have been beating the drum for AI sovereignty, digital infrastructure upgrades, and a massive boost to AI talent. In this context, the fund announced in Paris and the push for AI gigafactories dovetail nicely with the EU’s planned Digital Networks Act and the envisioned expansions under EuroHPC aimed at creating a robust, European-based supercomputing ecosystem. On paper, it looks like a perfect marriage of French leadership and the EU’s broader goals.

On closer inspection, however, tensions are already emerging. Some fear that Paris’s “innovation first” mindset could undercut the EU’s risk-based AI Act, which is currently being prepared for implementation through guidelines, consultations, and the drafting of a Code of Practice for general-purpose AI providers. Conversely, others worry that Brussels’ regulatory caution might smother the boldness needed to make Europe a true AI powerhouse.

Innovation meets regulation and political realities

This tension is on full display in the forthcoming implementation of the AI Act’s rules on “high-risk” systems, i.e. AI systems that, if misused or poorly designed, could upend lives or even threaten national security. The EU wants to impose robust safeguards and extensive testing protocols at the very moment the French government is pushing for a faster track to commercial deployment. The real question, then, is whether the EU’s lofty – but nevertheless important – ideals can survive the day-to-day realities of political brinkmanship, especially when the next round of AI breakthroughs emerges from labs in the US or China.

Macron, for his part, has staked his reputation on making France the AI hub of Europe. His pronouncements often extol the AI start-ups and factories that will transform the French economy and make Paris a continental centre for research and development. But comparing the computing power available to US AI labs with European investment in compute, and factoring in the gaps in human capital, venture capital, and risk appetite, suggests that the EU may be falling short of its vision and goals.

Culture, trust and the wider social fabric

A distinctive feature of the Paris Summit is its “Innovation & Culture” track, reflecting Europe’s desire to embed AI within its social fabric. Under this track, a wide range of events will be organised across Paris during the “Action Week for Artificial Intelligence”. For example, events at the Bibliothèque nationale de France will explore the relationship between culture and AI. One might criticise this as a distraction from more pressing issues such as cybersecurity and innovation. But the summit organisers seem to have grasped an important but often forgotten point: AI applications ultimately require social acceptance and a long-term cultural shift to become truly effective and productive additions to our lives and economies.

It is a gamble, but perhaps a necessary one. After all, European citizens are no strangers to scepticism about new technologies, and worries about job displacement run deep. By making AI a tangible part of public discourse, from panels on the future of work to live demonstrations, Paris might increase societal acceptance and perhaps even foster some sense of collective ownership. Similarly, the proposed training of 100,000 AI specialists a year is a Herculean but crucial task that will require joint action from universities, private labs, and governments. Finally, the push for open-source AI development requires not only money but also a collaborative, transnational spirit.

A continent at a crossroads

At a macro level, the Paris summit is a reminder of Europe’s difficult position in the global AI race. Neither the top-down approach of China nor the market-driven fervour of Silicon Valley suits Europe’s decentralised nature and cautious but values-driven regulatory DNA. As soon as the summit’s final press conference is over, the world will turn its attention back to the hard facts: Who is writing the cheques? Which labs are leading the development of AI reasoning models? How will any new AI safety standards be enforced?

However, it would be a mistake to see this as just an internal EU problem. Some developing countries have pinned their hopes on the promised fund for open-source AI tools, eager to leapfrog their infrastructure challenges and avoid dependence on US or Chinese oligopolists. At the same time, these US and Chinese tech giants are watching from the sidelines, sceptical of Europe’s lofty rhetoric. In many ways, the mission of the Paris summit is both contradictory and inspiring: Europe is simultaneously proclaiming an “innovation-first” ethos and a deep reverence for regulation. France is promoting itself as the epicentre of a new AI era, even as it struggles with fiscal austerity. EU leaders are calling for unity in forging an AI future, while still lacking the necessary AI capabilities and resources at home.

Conclusion

As the 2025 Paris Summit gets underway, there is an urgent call to action for European policymakers, especially the new Commission, and European companies. They must go beyond grand declarations to a structural transformation of how AI is researched and actually deployed in Europe. At the same time, while sceptics may scoff at the EU’s bureaucratic inertia, the desire for a human-centred AI future resonates in civil society beyond Europe’s borders. When the dust settles, the question will be whether Europe’s sweeping ambitions will help the world harness the potential of AI without losing sight of human values and agency.

Anselm Küsters is Head of Digitalisation and New Technologies at the Centrum für Europäische Politik (cep), Berlin.

As a post-doctoral researcher at the Humboldt University in Berlin and as an associate researcher at the Max Planck Institute for Legal History and Legal Theory in Frankfurt am Main, he conducts research in the field of Digital Humanities.
