At the moment, everyone is talking about Artificial Intelligence (AI) and how it will revolutionise different sectors of the economy. How widely do you expect AI to be deployed in the financial industry, or in other industries that have a strong influence on the economy?

I expect AI to be deployed extensively in the financial industry and beyond, not least since no one wants to be left behind by competitors. The financial industry has been an early adopter of advances in computer technology. Since the 1980s, simple algorithms have been used for so-called index arbitrage, and since the early 2000s, high-frequency trading has been widespread. In a similar process of adoption, the industry is now starting to utilise AI for various purposes such as natural language processing, risk assessment, fraud detection, customer service, and investment analysis. AI offers the potential to automate many tasks in finance and to analyse large datasets in milliseconds, thereby increasing efficiency. While this might lead to improved decision-making and better pricing, it also creates potential pitfalls that must be considered. In addition to the financial industry, other sectors that are likely to see widespread deployment of AI include healthcare, manufacturing, and energy. In fact, my colleagues and I just finished a new study on how AI systems trained on metaverse data might revolutionise the healthcare sector.

How much control do you think AI will be given over financial decisions such as investing and trading?

The extent to which institutions or private individuals will hand over important financial decisions to AI will naturally depend on multiple factors, including upcoming regulatory frameworks, risk tolerance, future technological progress, and, above all, the degree of trust placed in AI systems. Currently, AI is already being used in the financial industry for algorithmic trading, but human oversight and intervention are still typically required to monitor and manage these systems to ensure compliance with regulations and to address unexpected market conditions. As AI technologies continue to advance, there is a possibility that AI systems could play an even greater role in financial decision-making. Machine learning algorithms, for example, can learn from historical data and adapt their strategies over time. This could potentially lead to more sophisticated AI systems that can make complex investment decisions based on patterns that may not be apparent to human traders. However, it is important to consider the risks associated with fully autonomous AI-driven financial decision-making. As the current discussion about ChatGPT and the potential of large language models illustrates, AI systems are not immune to factual errors or hidden biases. Moreover, competition law scholars have warned that there might be “algorithmic collusion” in the future, whereby AI systems communicate with each other and form a secret cartel. Therefore, it is likely that even as AI evolves, there will continue to be a need for human oversight, accountability, and regulatory safeguards for AI-driven financial decisions.

What are the main incentives and disincentives for institutions to do this? Do you think they are currently giving enough thought to the risks?

For most sectors of the economy, the main reason to adopt AI systems is cost reduction. AI can streamline various processes, thereby reducing the cost of human labour and increasing operational efficiency. For financial institutions in particular, I think the main incentive will be speed, which directly translates into higher margins and profits. AI-powered systems can process information and execute trades at high speed, enabling institutions to capitalise on market opportunities and react quickly to changing conditions. However, using AI in finance requires, as everywhere else, up-to-date, high-quality data: without it, institutions risk biased or erroneous conclusions from their AI systems. While it is in the best interest of these institutions to care about data quality, there is insufficient consideration of the macro scale, meaning the interconnectedness of financial markets and the potential for algorithmic trading to amplify market movements, which can ultimately create systemic risks. Early warning signs were the massive stock market crash of 1987 known as “Black Monday”, partly caused by computer-based models for portfolio insurance hedging, and the “flash crash” of May 2010, when stocks suddenly dropped and then quickly recovered.

In a recent cep study on AI in the polycrisis, you argued that machine learning models trained on past data from “normal times” could exacerbate a serious crisis. What are some examples of how that could potentially happen as it pertains to finance?

During times of crisis, financial markets often experience high volatility and unprecedented events known as “black swans”. However, since such crashes are relatively rare, there is a lack of sufficient data on them. Machine learning models trained on historical data from relatively stable market conditions and, more generally, from peaceful and robust socio-economic and political contexts, may not adequately capture the dynamics and patterns that emerge during a crisis. As a result, these models may generate inaccurate predictions or fail to identify risks, potentially leading to poor investment decisions and amplifying market fluctuations. Moreover, machine learning models used in algorithmic trading or automated investment systems can contribute to feedback loops, especially during a multi-faceted polycrisis. For example, if multiple market participants use similar models and respond in a similar way to market movements, this can exacerbate volatility and thereby make a crisis even worse. To give one example: FICO scores are commonly used in the financial industry as a measure of an individual’s creditworthiness and are based on various factors such as payment history. When the COVID-19 pandemic hit, it caused unprecedented disruptions to economies and societies worldwide. Many individuals experienced job losses, financial hardship, and psychological stress, leading to changes in their purchasing behaviour and other economic decisions. Traditional credit assessment models, including those relying on FICO scores, are primarily trained on historical data from relatively stable economic periods. These models therefore did not properly account for the sudden changes in credit and consumer behaviour, with people prioritising online purchases of essential goods and medicine. As a result, automated credit evaluation systems relying on these models significantly misclassified certain transactions or failed to recognise legitimate purchases made during the crisis. From a macroeconomic perspective, such ML models might even have reduced credit availability during the crisis, limiting access to credit precisely when it was most needed.
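To make this failure mode concrete, here is a deliberately stylised sketch in Python. It is a toy illustration of my own, not a model from the cep study or any real credit-scoring system: a simple classifier is trained on invented “normal times” transaction data, in which online purchases are rare and correlated with fraud, and is then applied to crisis-era behaviour, in which many legitimate purchases of essentials suddenly move online. The helper functions legit_transactions and fraud_transactions, and all figures and thresholds, are made up purely for illustration.

```python
# Purely illustrative sketch, not from the cep study: a toy transaction-screening
# model is trained on "normal times" data, where online purchases are rare and
# correlated with fraud, and then applied to crisis-era behaviour, where many
# legitimate customers suddenly shop online. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def legit_transactions(n, online_share):
    """Legitimate purchases: moderate amounts, a given share made online."""
    amount = rng.lognormal(mean=3.0, sigma=0.5, size=n)
    online = (rng.random(n) < online_share).astype(float)
    return np.column_stack([amount, online])

def fraud_transactions(n):
    """Synthetic fraud: similar amounts, but mostly conducted online."""
    amount = rng.lognormal(mean=3.2, sigma=0.6, size=n)
    online = (rng.random(n) < 0.85).astype(float)
    return np.column_stack([amount, online])

# Train on pre-crisis data: only 10% of legitimate purchases are made online.
X_train = np.vstack([legit_transactions(5000, 0.10), fraud_transactions(250)])
y_train = np.concatenate([np.zeros(5000), np.ones(250)])   # 1 = fraud
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Calibrate an alert threshold so that roughly 1% of normal-times
# legitimate transactions are flagged for manual review.
normal_scores = model.predict_proba(legit_transactions(5000, 0.10))[:, 1]
threshold = np.quantile(normal_scores, 0.99)

# Crisis period: 70% of legitimate purchases (essentials, medicine) move online,
# but the model still treats "online" as a marker of suspicious behaviour.
crisis_scores = model.predict_proba(legit_transactions(5000, 0.70))[:, 1]

print(f"Legitimate purchases flagged in normal times:   {np.mean(normal_scores > threshold):.1%}")
print(f"Legitimate purchases flagged during the crisis: {np.mean(crisis_scores > threshold):.1%}")
```

In this toy world, the alert rate on perfectly legitimate purchases jumps once behaviour shifts, which is exactly the kind of misclassification, and the resulting tightening of credit, that I described above.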

In your report, you describe how modern ML models can “amplify negative feedback loops across large distances within short periods”. Can you explain a bit more what you mean by this?

As mentioned, AI models in finance make decisions based on historical data and statistical patterns, aiming to optimise financial strategies. However, if multiple market participants use similar ML models, their actions based on the models’ recommendations can reinforce each other and contribute to a feedback loop. In other words, if all traders use a ChatGPT-style recommendation system, they might all end up trading on the same side of the market. This is particularly likely if these AI models are based solely on past training data. In such a setting, the amplification of negative feedback loops can occur not only within a specific market but also across different markets and geographic regions, as automated decisions spread rapidly and trading activities are interconnected. In an age of a small number of dominant AI providers, we might lose the core function of financial markets, namely their ability to aggregate a great deal of heterogeneous information about the world, simply because that information becomes less diverse. Implementing appropriate risk management measures, adapting circuit breakers (which halt trading when there are significant market swings) to AI systems, deploying better monitoring tools, and maintaining human oversight can help mitigate these feedback loops.
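To illustrate the mechanics of such a feedback loop, here is another stylised sketch in Python, again a toy construction of my own rather than a model from our report: a population of trend-following traders reacts to the last price move, and their aggregate orders in turn move the price. When every trader uses the same model, an initial shock is amplified many times over; when strategies are heterogeneous, or when a simple circuit breaker halts trading after a large swing, the spiral is damped. All parameters, including the 3% breaker threshold and the price-impact factor, are invented.

```python
# Stylised illustration only: identical trend-following models amplify a shock,
# while heterogeneous strategies or a circuit breaker dampen it.
# All parameters are invented; this is not a calibrated market model.
import numpy as np

rng = np.random.default_rng(7)

def simulate(n_traders=100, steps=50, shared_model=True,
             circuit_breaker=None, shock=-0.05):
    """Each step, traders submit orders based on the last return; the
    aggregate order flow moves the price. Returns the total log-price move."""
    price = shock            # the market has already dropped by the initial shock
    last_return = shock
    if shared_model:
        betas = np.full(n_traders, 1.1)          # everyone chases the trend
    else:
        betas = rng.normal(0.0, 1.5, n_traders)  # some traders lean against it
    for _ in range(steps):
        if circuit_breaker is not None and abs(last_return) > circuit_breaker:
            orders = np.zeros(n_traders)          # trading halted this step
        else:
            orders = betas * last_return          # react to the last move
        impact = 0.8 * orders.mean()              # net order flow moves the price
        last_return = impact + rng.normal(0.0, 0.002)
        price += last_return
    return price

scenarios = [
    ("identical models, no circuit breaker", dict(shared_model=True)),
    ("identical models, 3% circuit breaker", dict(shared_model=True, circuit_breaker=0.03)),
    ("heterogeneous models", dict(shared_model=False)),
]
for label, kwargs in scenarios:
    print(f"{label:40s} total move after 50 steps: {simulate(**kwargs):+.1%}")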

Your example of FICO scores and purchase data during the pandemic makes one think of the potential links between personal finance and high finance. Presumably, any personal finance service making extensive use of AI would be a rich source of nearly real-time financial data. It would also be a potential vector for affecting people’s financing decisions, either intentionally or unintentionally. In this context, do you see any potential for things to go very wrong if a large number of people are taking spending and investing advice from AI that then turns out to be rubbish, perhaps in ways that feed into a larger systemic crisis? Do you think that’s a fair chain of reasoning, and do you have any thoughts about this specific bundle of risks?

I think your chain of reasoning is entirely fair! There are indeed significant risks associated with personal finance services that rely heavily on AI, as AI-driven recommendations might quickly turn out to be flawed or unreliable. Already today, we frequently encounter these problems with ChatGPT’s so-called “hallucinations”, a term that is often presented as technical but is really just a euphemism for erroneous statements. In the case of AI-driven financial advice, however, these problems could have broader implications and contribute to a larger systemic crisis. If many individuals receive AI-generated financial advice that proves to be misguided, it can lead to cascading financial decisions that are not aligned with economic realities. In other words, ChatGPT might lead to a new type of herding behaviour on financial markets, potentially amplifying market volatility, distorting asset prices, and creating imbalances in the financial system. In times of a polycrisis, the accumulation of risks is even worse: across all societal sub-spheres, we now rely on AI systems to tackle rapidly increasing complexity and a multitude of crises. Some perceive this technology as a “silver bullet” for all our problems. However, if flawed AI models distort market dynamics, this can impair the accuracy of other AI models as well, leading to further flawed recommendations and another round of the domino effect.

Altogether, what is the worst-case scenario and what is the best-case scenario for the widespread use of AI in finance?

The widespread use of AI in finance presents both significant risks and benefits. In the worst-case scenario, flawed AI models could amplify inequalities, reduce transparency, and contribute to systemic crises. Conversely, the best-case scenario involves enhanced efficiency and better risk mitigation through the quick reduction of market inefficiencies. Although historians like myself should never presume to predict the future, I think the actual outcome will likely fall between these extremes and will depend, above all, on the regulatory responses chosen.


Anselm Küsters is Head of Department of Digitalisation/New Technologies at the Centre for European Policy (cep) in Berlin. As a postdoctoral researcher at the Humboldt University of Berlin and an associated researcher at the Max Planck Institute for Legal History and Theory in Frankfurt am Main, he conducts research in the field of Digital Humanities.

Küsters holds a Master's degree in Economic History from the University of Oxford (M.Phil) and a PhD from Johann Wolfgang Goethe University in Frankfurt am Main.

Copyright Header Picture: Andrey Suslov