As the world stands on the brink of an Artificial Intelligence (AI) revolution, global policymakers find themselves grappling with a consequential question: should the future of AI be an open-source playground or a controlled realm dominated by licensed models? This regulatory debate, arguably the most significant in current digital policy, carries profound implications whichever path is chosen.

The Weight of Choice: Open Source vs. Licensed AI Development

In the scientific debate about open versus closed AI development, good arguments can be found on both sides. Open-source models, whose code and parameters are publicly available, offer greater transparency and accessibility than closed-source models. This transparency allows researchers to understand how a model works, assess its reliability, and even contribute by submitting fixes. However, the decentralised nature of open-source development, which involves contributors from varied backgrounds, makes it difficult to ensure consistent governance and quality standards across all applications.

The very nature of AI development – its potential to redefine economies, industries, and societal structures – underscores the gravity of this discussion. Opting for a closed, licensed ecosystem could pave the way for an unparalleled concentration not only of technological but also of economic and political power. Imagine an era in which a handful of companies like Google or OpenAI monopolise a technology as transformative as generative AI, wielding influence over global supply chains, military technology, and, ultimately, societal discourse and humanity’s access to knowledge.

The rapid and free exchange of ideas, scientific publications, open source code, and trained models is the reason that AI has progressed so fast in the last decade.

Yann LeCun, Chief AI Scientist at Meta

Conversely, championing open models presents its own challenges. While open access to AI models might democratise AI development, it could also create easier opportunities for misuse. Still, it is essential to clarify that open access does not equate to an unrestricted free-for-all. It can, and should, be balanced with responsible development and release procedures. Yann LeCun, Chief AI Scientist at Meta and a prominent proponent of the open-source governance model, has stressed in his US Senate testimony that ‘the rapid and free exchange of ideas, scientific publications, open source code, and trained models is the reason that AI has progressed so fast in the last decade’. Indeed, empirical results suggest that participation in open-source software projects is linked to a rise in entrepreneurship, potentially enhancing a country’s economic innovation and strength.

Influences and Fears: Silicon Valley’s Narrative and Societal Concerns

Recent discussions among American and European policymakers highlight the complexity of this debate. As Mark Scott has reported for Politico’s Digital Bridge, prominent tech leaders, including Meta’s Mark Zuckerberg and SpaceX’s Elon Musk, have in their frequent meetings over the past months rallied behind concerns over ‘bioterrorism’ – potential biohazards stemming from generative AI tools. These concerns emphasise the potential dangers of unchecked access to AI technology. The narrative is further cemented by OpenAI’s concept of ‘frontier models’ that go beyond the current capabilities of ChatGPT – a still vague, yet foreboding, projection of AI’s possible future.

Such a narrative in favour of strict licensing is seductive, especially when presented by Silicon Valley’s most prominent entrepreneurs, whose outsized influence has been repeatedly criticised by civil society actors. In addition, policymakers must consider that the future of AI is no longer merely an industry concern; it is a societal one, too. Several recent polls across Western societies indicate growing apprehension about AI, with many voters fearing not only new forms of technological unemployment but also potentially existential risks for the world.

Historically, industrial revolutions have been double-edged swords, bringing prosperity while reshaping societal values and redistributing economic and political power. The AI revolution promises to be no different.

The Future is Open: Advantages of Collaborative AI Development

However, deferring to the tech giants’ dominant narrative and establishing a closed licensing model for AI development would have a grave pitfall: it would side-line budding competitors, including those in Europe. The ‘closed shop’ model currently under discussion could lead to new forms of ‘toxic competition’ and the next iteration of ‘chokepoint capitalism’, whereby a small group of AI leaders holds the licences to conduct state-of-the-art AI training while the rest of the world must watch. Research from the US suggests that licensing seldom improves quality, while often dampening competition and increasing costs. Such restrictions could therefore hinder AI diffusion and innovation, potentially allowing autocratic competitors like China to take the lead.

In contrast to this dystopian scenario, truly ‘open’ AI platforms like Hugging Face, where a thriving community of over 15,000 developers collaborates, stand as testaments to the potential of open-source AI. Similarly, the research on entrepreneurship mentioned earlier found that an increase in GitHub participation in a given country generates an increase in the number of new technology ventures within that country in the subsequent year. This welfare-enhancing form of global AI research could be jeopardised by overreaching regulations that stifle the very innovation and competition essential to AI’s evolution. Instead of the blanket licensure currently discussed in the US, pinpointed regulations targeting specific, potentially harmful applications of AI – like some of the risk-based rules in the European Union’s forthcoming AI Act – would be more effective.

As global policymakers steer the course of AI’s future in various fora, ranging from the G7 and the OECD to the UK’s AI summit in November, it is paramount to strike a balance. An open-source future, tempered by responsible development and sensible rules focused on short-term risks such as biases in training datasets, could be the key to unlocking the vast potential of AI without compromising on competition or safety. The stakes could not be higher.


Anselm Küsters is Head of the Department of Digitalisation/New Technologies at the Centre for European Policy (cep) in Berlin. As a postdoctoral researcher at the Humboldt University of Berlin and an associated researcher at the Max Planck Institute for Legal History and Theory in Frankfurt am Main, he conducts research in the field of Digital Humanities.

Küsters holds a Master’s degree (MPhil) in Economic History from the University of Oxford and a PhD from Johann Wolfgang Goethe University in Frankfurt am Main.

Copyright header picture: Shutterstock / Adeel Ahmed photos