The capabilities of AI-driven systems are undoubtedly impressive, as highlighted recently by the hype surrounding the AI chatbot from the company OpenAI. Its ability to understand dialogue and generate answers seems on a par with that of a real person. So it is not surprising that, in a time of global unrest, some are hoping that algorithms will be able to prevent the next pandemic, fight climate change or create an inclusive society.

What such utopian ideas often overlook is the fact that every AI system is fundamentally limited by the data that is used to train it. As a rule of thumb, machine learning performs worse when this data is inaccurate, incomplete, irrelevant, invalid, outdated or inconsistent. What does this fundamental relationship mean for our current age of global disorder, characterised by numerous crises – from Russia’s aggression to soaring inflation and climate chaos?

"Every AI system is fundamentally limited by the data that is used to train it."

AI can be very useful even in times of crisis, but algorithms that have been optimised on conventional data can inadvertently result in the wrong decisions. There is therefore a need, especially in increasingly automated environments, for risk-sensitive rules to apply to AI during a crisis.

copyright: cep and pixabay
Was this crisis a "grey rhino" or a "black swan"? Economists often use these two terms to describe unpredictable or slowly evolving events with extreme consequences.

Black swans, grey rhinos and the polycrisis

AI models developed or trained on data from periods of relative calm and stability can fail when major external shocks signal the onset of abnormal times. The inherent nature of risk means that this problem cannot be completely avoided – a phenomenon for which the former Wall Street trader Nassim Nicholas Taleb coined the expression "black swans". These are unpredictable events with extreme consequences, such as the 2008 financial crisis or the Covid pandemic which began in 2020.

Economists also use the term "grey rhinos" to refer to known and slowly evolving risks that amplify these shocks, such as the current high level of debt or global climate change. While black swans and grey rhinos make any kind of prediction difficult – whether made by humans or by machines – AI applications are significantly more impenetrable than conventional forecasting systems. They are more difficult to challenge and may give rise to negative feedback loops within a very short time and over long distances. This creates new systemic risks.

"Initial examples suggest that, in a polycrisis, predictive AI systems can even be counter-productive."

It becomes even more complicated when several shocks occur simultaneously and create interdependencies that are generally unexpected. The economic historian Adam Tooze has recently diagnosed such a "polycrisis". All of this increases the likelihood that the algorithms currently in use will have been developed or trained on data that could suddenly become irrelevant. Where AI models are trained on data sets that are narrower than the population they are intended to reflect, this can cause so-called data leakage – a danger that is already threatening the reliability of machine learning in many disciplines. Initial examples suggest that, in a polycrisis, predictive AI systems can even be counter-productive.

Finance, medicine, security – AI during a crisis

For example, at the beginning of the pandemic, common American AI tools for detecting credit card fraud assumed, based on past experience, that most purchases were made in person, which led to many online transactions being classified as problematic. These algorithms recommended denying millions of legitimate purchases while quarantined customers were simply trying to obtain basic foods or medicines (and even toilet paper) online. At the same time, similar problems plagued the AI-driven credit evaluation tools of the Chinese tech giant Ant.
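
To make the underlying mechanism tangible, the following sketch simulates – with entirely invented numbers and a deliberately crude model, not the actual systems described above – how a fraud classifier trained on pre-crisis behaviour starts blocking legitimate purchases once shopping shifts online:

```python
# Purely illustrative sketch (invented data, not any real fraud system):
# a classifier trained on pre-crisis transactions misfires once
# legitimate shopping moves online during a lockdown.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate_transactions(n, share_online_if_legit):
    """Features: [is_online, amount]; label: 1 = fraud, 0 = legitimate."""
    is_fraud = (rng.random(n) < 0.05).astype(int)          # 5% fraud overall
    p_online = np.where(is_fraud == 1, 0.9, share_online_if_legit)
    is_online = (rng.random(n) < p_online).astype(float)
    amount = rng.lognormal(mean=3.5, sigma=1.0, size=n) / 100.0
    return np.column_stack([is_online, amount]), is_fraud

# Pre-crisis training data: only 20% of legitimate purchases happen online.
X_train, y_train = simulate_transactions(20_000, share_online_if_legit=0.2)
model = LogisticRegression(class_weight="balanced").fit(X_train, y_train)

# Lockdown data: 80% of legitimate purchases now happen online.
X_crisis, y_crisis = simulate_transactions(20_000, share_online_if_legit=0.8)

def share_of_legit_purchases_blocked(X, y):
    """Share of legitimate transactions that the model flags as fraud."""
    flagged = model.predict(X) == 1
    return flagged[y == 0].mean()

print("blocked legitimate purchases, pre-crisis:",
      round(share_of_legit_purchases_blocked(X_train, y_train), 2))
print("blocked legitimate purchases, lockdown:  ",
      round(share_of_legit_purchases_blocked(X_crisis, y_crisis), 2))
```

In this toy setting the share of legitimate purchases that gets blocked rises sharply after the behavioural shift, even though the model itself has not changed – the world it was trained on has.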

Experts increasingly recognise that AI forecasts can even reinforce negative trends in complex situations. To deal with the consequences of the opioid crisis, US agencies used a well-known algorithm to determine the risk of drug addiction. However, the algorithm often denied treatment to precisely those patients who were most vulnerable during the crisis or had a medically complex history. A polycrisis exacerbates the potential for such automated systems to turn into "weapons of math destruction".

"Automated systems can become particularly dangerous in a polycrisis."

Problems also arise with AI tools in the security sector when they are based on historical and thus potentially distorted data. For example, a lower police presence in a given area usually leads to fewer criminal charges, which misleads AI systems specialising in law enforcement into allocating resources to other locations. This, in turn, means that the affected area is monitored even less, which increases its vulnerability to, for example, sabotage of critical infrastructure. The same problem plagues predictive AI systems used for border control, whose misjudgements have systemic relevance in times of migration caused by climate change.
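
The dynamic described here can be reproduced with a deliberately simple simulation. Everything in it is invented for illustration – two districts with identical underlying risk, a crude detection rate, and a naive "predictive" rule that shifts patrols towards wherever more incidents were recorded:

```python
# Toy feedback loop (all numbers invented): patrols follow recorded incidents,
# but recorded incidents depend on how much an area is patrolled.
TRUE_INCIDENTS = {"district_a": 100, "district_b": 100}   # identical real risk
DETECTION_PER_PATROL = 0.01                               # 1% uncovered per patrol unit

patrols = {"district_a": 32, "district_b": 28}            # small historical imbalance

for year in range(1, 7):
    # Fewer patrols -> fewer recorded incidents, regardless of real risk.
    recorded = {d: TRUE_INCIDENTS[d] * DETECTION_PER_PATROL * patrols[d]
                for d in patrols}
    # Naive "predictive" reallocation: move patrols to where more was recorded.
    quieter, busier = sorted(patrols, key=lambda d: recorded[d])
    shift = min(5, patrols[quieter])
    patrols[quieter] -= shift
    patrols[busier] += shift
    print(f"year {year}: recorded={recorded}, patrols={patrols}")
```

Within a few simulated years district_b is hardly patrolled at all, although its true risk never differed from district_a's – exactly the kind of blind spot that becomes dangerous when critical infrastructure is at stake.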

Moreover, the utility of AI-based forecasting tools is confined to narrow areas such as finance, medicine or security, and necessarily excludes other areas of human life. As Martin Wolf notes, thinking in intellectual silos may be efficient in a reasonably stable world, but it will inevitably fail in a polycrisis.

A fist formed by big data (copyright: shutterstock/Rhyzi)
In her book "Weapons of Math Destruction – How Big Data Increases Inequality and Threatens Democracy", the American mathematician Cathy O'Neil warns about the dangers of big data. According to Küsters, a polycrisis can exacerbate the potential for automated systems to turn into such "weapons of math destruction".

How can we mitigate the risks of incorrect AI predictions?

To some extent, the collection of new types of data, global data sharing and common data standards can result in better AI systems. Another way could be reinforcement learning, which does not depend on external data sets. However, these alternatives offer no panacea: black swans can appear at any time, and the increasing speed and interconnectedness of crises make it virtually impossible to identify relevant data fast enough and keep it sufficiently up to date.
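
For readers who wonder what "not depending on external data sets" means in practice: in reinforcement learning the system generates its own training experience by interacting with an environment. The minimal tabular Q-learning sketch below (a toy, invented environment, not any real crisis-management application) illustrates the idea – and also its limit, since the simulated environment must itself be a faithful model of the world:

```python
# Minimal tabular Q-learning sketch: the agent learns from experience it
# generates itself, not from a fixed historical data set. The chain
# environment and all parameters are invented for illustration.
import random

N_STATES, GOAL = 5, 4           # states 0..4, reward only on reaching state 4
ACTIONS = (-1, +1)              # step left or right along the chain
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: deterministic chain, reward 1.0 at the goal state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for episode in range(300):
    state = 0
    for _ in range(100):                       # cap episode length
        if random.random() < EPSILON:          # occasionally explore
            action = random.choice(ACTIONS)
        else:                                  # otherwise act greedily (random tie-break)
            best = max(Q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if Q[(state, a)] == best])
        nxt, reward, done = step(state, action)
        # Update the value estimate from the freshly generated experience.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

# The learned greedy policy moves right towards the goal from every non-terminal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```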

Ensuring robust framework conditions is the best defence against the risk of incorrect AI predictions

The best defence is therefore to ensure that robust framework conditions are created in the trilogue negotiations on the EU AI Act, coming up in 2023. The currently envisaged risk-based approach may not be adequate, as it is impossible to know in advance the dynamic risk of a system in a world plagued by crises, especially in dramatically changing environments. If one accepts the risk-based approach, the dangers that arise during a polycrisis could be taken into account by classifying a higher proportion of AI systems as high-risk. It will also be crucial to carry out regular AI audits with sufficient staff and technical resources – without overburdening start-ups. The starting point must be for tech-savvy policymakers to pay greater attention to the damage potential of AI during a polycrisis.

 

This contribution is an abridged and revised version of cepAdhoc No. 15 (2022), "AI as a Systemic Risk in a Polycrisis", and is based on a guest commentary in Tagesspiegel Background.

Anselm Küsters is Head of Digitalisation/New Technologies at the Centrum für Europäische Politik (cep), Berlin. As a post-doctoral researcher at the Humboldt University in Berlin and as an associate researcher at the Max Planck Institute for Legal History and Theory in Frankfurt am Main, he conducts research in the field of Digital Humanities. Küsters gained his Master's degree in Economic History at the University of Oxford (M. Phil) and his PhD at the Johann Wolfgang Goethe University in Frankfurt am Main.


Copyright header picture: shutterstock/cono0430; picture of grey rhino and black swan: own illustration by cep (sources of the pictograms: Microsoft + pixabay); copyright of the picture of the fist with big data: shutterstock/Rhyzi