Europe is on the cusp of a technological transformation that goes far beyond productivity or innovation. With the rise of autonomous AI agents and conversational interfaces, a central economic infrastructure of modern societies is disappearing: the translation layer between complex institutions and human action. What lawyers, tax advisors and consultants have done up to now could be taken over by intelligent systems in the future. This turns a technical innovation into a political issue, because whoever controls the "final interface" – the interface between human intention and digital execution – also controls the architecture of knowledge, decisions and power. For Europe, this development is therefore not only economically relevant, but also a question of its digital and political sovereignty.

Translation between institutions and citizens

Every complex society has a translation problem. Tax legislation does not speak to citizens. Medical reports do not explain themselves. The law is written in a language that those to whom it applies do not understand. For centuries, the solution has been to put someone in between: a solicitor, a tax advisor, a doctor, a translator, a programmer. These intermediaries are nothing more than interfaces – layers of translation between what a system knows and what a person needs.

The entire professional service economy, a market worth well over two trillion dollars worldwide, is essentially a translation industry. And it is facing a structural change that could surpass the transition from an agricultural to an industrial society in its depth.

What initially serves the economic purpose of translation and lower transaction costs is proving, with the arrival of autonomous agent systems, to be a political question of sovereignty and maturity. This is because artificial intelligence is beginning to harbour its own intentions and create meaning independently. The "final interface" is the interface between the real and virtual worlds, between intention and agency. For Europe, it is the interface at which the normative order translates into political sovereignty.

Transaction costs and the economics of mediation

In 1937, Ronald Coase asked a question that earned him the Nobel Prize: Why do companies exist? His answer: Because transaction costs — the costs of searching for information, negotiating and monitoring contracts — make it cheaper to coordinate work within an organisation than via markets (Coase, "The Nature of the Firm"). In 1990, Douglass North expanded this concept to include the institutional dimension: institutions — formal rules and informal norms — determine the level of transaction costs in a society and thus its economic performance (North, "Institutions, Institutional Change and Economic Performance").

Applying this framework to the professional services industry paints a clear picture. The global legal services market exceeds one trillion dollars. Auditing and tax consulting: over 700 billion. Management consulting: more than 300 billion. Financial consulting: 150 billion. Translation services: 60 billion. In total, this "intermediation economy" represents more than two trillion dollars in global value creation — professions that exist primarily because complex institutional knowledge systems are not accessible to individuals.

In this respect, a solicitor is nothing more than an interface between the law and a problem. A tax advisor is an interface between tax code and real life. A management consultant is an interface between data and decision-making. This description is not cynical — it is economically accurate.

These transaction costs are not trivial. They are cognitive barriers, systemic barriers, technical barriers, linguistic barriers — any form of friction that stands between institutional knowledge and human understanding. And it is precisely this friction that has tied up trillions in value creation and spawned entire classes of professions whose raison d’être lies solely in the service of translation.

In 2025, the California Management Review noted that Coase’s framework was "increasingly relevant in the age of artificial intelligence" because the costs Coase identified could now be reduced "just as well with tools from outside the company". The implication is radical: if artificial intelligence drives the transaction costs of translation to zero, the economic basis for a significant part of the professional service industry will disappear.

The evolution of the interface — from punched tape to language

To grasp the significance of this structural change, it is worth taking a look at the history of human-machine interaction. This history is, at its core, a history of the gradual reduction of translation effort.

In the early days of computer science, humans had to learn the language of the machine. Every interaction required a professional translator — a programmer — who transformed human intent into machine instructions.

The graphical user interface reduced this effort considerably. Humans no longer had to speak the language of the machine, but they did have to learn the language of the application: its menus, its workflows, its specific way of organising the world. Each piece of software was its own dialect. The web multiplied the dialects. Booking.com displays hundreds of filters because it tries to be a static interface for millions of different intentions. Much of what it displays is irrelevant to any given user.

Then, at the end of 2022, a paradigm shift took place. ChatGPT did not prove the power of language models — that was already known. It proved that natural language works as a universal interface. For the first time, you could talk to a computer like you would to a colleague. No syntax. No menus. No learning curve. The translation layer between human intent and machine execution became thinner in one fell swoop than any graphical interface could ever have made it.

The development since then has followed a clear logic:

Text (2022–2023): Natural language as input. The keyboard remains, but the need to learn specialised software is eliminated.

Voice (2023–2024): The keyboard becomes optional. AI assistants begin not only to parse, but to understand — context, intent, nuance.

Agents (2025–2026): Systems that not only respond, but act. They browse, book, programme, negotiate. The human describes the desired outcome; the agent determines the steps. Microsoft Research refers to this as "Zero UI" — technology that recedes into the background and adapts to human behaviour and intent.

Embodiment (next level): Robots. Physical agents that translate human intent into action in the real world. When you tell a robot to "clean up the kitchen", you programme it — in English, in German, in natural language.

The market for agent-based AI is forecast to grow from $8.5 billion (2026) to $45 billion (2030). Deloitte expects adoption to quadruple in manufacturing alone by 2026. And just as Friedrich Hayek pointed out that the central problem of economics is not the allocation of given resources, but the use of knowledge "which is not given to anyone in its entirety" (Hayek, "The Use of Knowledge in Society", 1945), there is now the possibility of making this distributed knowledge accessible without institutional intermediation.

Why can’t laws speak?

This is where the truly radical question lies. Not: How can AI improve existing interfaces? But rather: Why do we need interfaces between institutional knowledge and human understanding at all?

Why can’t laws speak? Why can’t tax law apply to individual cases, provide citizens with context-specific information, outline their options and explain the implications of each choice? Why do we need a human intermediary who translates what the law means in this specific case for £300 an hour? Until now, the answer was: because the cognitive complexity of translation required a trained human intellect. This answer is no longer valid.

Legal Aid of North Carolina already operates LANC-LIA, an AI assistant that answers civil law questions in multiple languages. In Canada, JusticeBot helps tenants with housing disputes by providing rule-based advice. Thomson Reuters’ report for 2026 shows that more than half of all lawyers expect AI-driven efficiency gains to fundamentally change the billable hour model — towards flat fees, subscriptions and hybrid models.

The example illustrates a more general principle. If a hospital report that no one in the family understands can be entered into an AI and explained "as if to a five-year-old" within seconds — then the translation layer has not become thinner. It has disappeared.

McKinsey describes this as "superagency" in 2025 — AI lowers barriers to expertise and enables more people to acquire expertise in more fields, in any language and at any time. The Journal of Futures Studies speaks of a simultaneous "crisis and democratisation of knowledge", with profound implications for existing knowledge hierarchies.

And yet it would be naïve to view this process as purely emancipatory. Research in Frontiers in Education shows that the widespread use of AI tools induces "cognitive dependency", which suppresses active memory and problem-solving. Younger users showed greater AI dependence and performed worse in critical thinking tests. The democratisation of knowledge carries the risk of atrophying the very cognitive abilities that produce knowledge in the first place. In Joseph Schumpeter’s terms, it is creative destruction in its purest sense — except that here it is not means of production that are being destroyed and recreated, but cognitive infrastructures.

The most expensive language barrier in the world

The translation problem is not just institutional. It is global. Somewhere in Zimbabwe, a journalist may have written a brilliant analysis of the Iran conflict. In South Korea, a philosopher may have articulated exactly the framework that European decision-makers need. In Brazil, a researcher may have solved a problem that Silicon Valley is still spending billions on.

We will never know. Because these ideas exist in languages that have no institutional translation pathways to those who need them. The world’s intellectual output is filtered through a handful of dominant languages, and the filter discards most of what it touches.

This is more than a philosophical observation — it is an economic argument about wasted global intelligence. The AI translation market, estimated at $1.2 billion in 2024, is expected to grow to $4.5 billion by 2033 (16.5% CAGR). KUDO’s analysis for 2026 shows that AI-powered language translation has evolved from experimental technology to a "core engine of global communication". AI models are moving towards semantic understanding in 2026 — interpreting meaning, not just words.

When a Swahili-speaking scientist can publish findings that are immediately accessible to a Japanese research team — not through clunky translation, but through seamless, context-aware linguistic transfer — the global knowledge base doesn’t just grow. It transforms.

Language barriers are the world’s greatest cognitive waste. And they are beginning to fall. People can converse without switching to a language that is not their own. The friction disappears. But if friction was all that stood between a Zimbabwean journalist’s insight and a European political debate, then reducing friction changes everything.

The irreducible human being — and the question of judgement

If AI can translate everything, will the human element disappear? No. And understanding why not is crucial.

There is a category of decisions where it is not the translation that is difficult — but the judgement. Criminal convictions. Medical ethics. Education. Political considerations under uncertainty. These require something that no translation layer can provide: the experience of being human, of making decisions that affect other people.

One could put an AI on the bench. It would probably be more consistent, less biased, more efficient. But there is a level, perhaps not religious but philosophical, at which the following applies: at the end of the day, we are human beings living with other human beings. And in this context, it is fair to be evaluated and judged by one’s peers — rather than by something abstract. This could be a defining feature of European AI.

The London Business School puts it categorically: "AI systems do not exercise judgement" — where judgement is defined as "the combination of relevant knowledge and experience with personal qualities to make decisions and form opinions". The philosophical concept of phronesis — the Aristotelian virtue of situational knowledge, of knowing how to act correctly in a specific situation — remains a domain of human cognition. As Springer Nature’s Global Philosophy argues, the contextual, ethical and moral dimensions of a decision cannot be "reduced to algorithmic logic".

In the field of law, an analysis by the Springer Nature Research Community makes the case directly: "AI fails at lawmaking" because legislation requires judgement, a sense of justice and an awareness of human limitations — qualities that cannot be algorithmised. The International Committee of the Red Cross goes even further with regard to military target selection: "Proportionality assessments are contextually based legal and moral judgements that must remain rooted in qualitative human reasoning."

Research also identifies an "accountability gap" (PMC, 2024): when an AI system recommends a judgement, who is the author of the decision? The algorithm? The programmer? The judge who clicks "accept"? This question is not technical. It is constitutive of the relationship between citizens and the state.

This is where the real challenge lies: not in automating translation — that has already been solved technically, or soon will be — but in ensuring that, in a world where machines make more and more decisions autonomously, the connection to human values, preferences and reactions is maintained. Companies such as neuroflash are building infrastructure for precisely this purpose — APIs that allow any automated system to ask, "How will a real person react to this?" before a decision is made. It is a simulation layer for humanity, validated with almost 100 per cent consistency with real market research (Nielsen Norman Group, 2025). It becomes more, not less, important as interfaces disappear and machines make autonomous decisions.

What happens when translation costs approach zero?

When you no longer have to do any translation work and can converse with any system in real time — that’s a new world. But new worlds don’t come without upheaval.

In 2025, the World Economic Forum found that 41 per cent of all employers plan to reduce their workforce by 2030 due to AI. The proportion of 21- to 25-year-olds in large listed technology companies halved between January 2023 and July 2025 (PwC, 2025). Entry-level positions requiring three or fewer years of experience declined by 15 percentage points.

At the same time, demand is paradoxically growing. American law firms recorded average demand growth of 1.9 per cent in 2025, and as much as 3.9 per cent in the third quarter. Employment in the legal sector rose by 6.4 per cent, even as AI penetration increased (MIT, 2026). This is not a contradiction — it is exactly what happens during a paradigm shift. When the cost of translation falls, the volume of translation explodes. Just as email did not eliminate communication but made it cheaper and therefore more frequent, AI-assisted legal and financial advice will make these services accessible to millions who cannot currently afford them. The pie is growing, even as the price per slice falls.

However, this poses a danger that goes beyond the labour market. Current institutions — regulatory authorities, professional associations, training systems — are geared towards the existence of human intermediaries. If the intermediaries disappear, not only will jobs be at risk, but also the institutional structures that have been built around them. It is illusory to believe that existing institutional arrangements will survive this transition unscathed.

Ultimately, any structural change will be decided by the question of how and where good, future-proof jobs are created. If tax advisors and lawyers lose their translation function, it could be argued that the state should redirect these professionals into institutional enforcement roles – tax auditors, compliance officers, law enforcement – where the return on investment is demonstrably high. Tax auditors generate many times what they cost. This is not a social programme; it is economic optimisation.

Here, a "visible hand regulatory policy", as economic historian Werner Abelshauser calls the German approach, would be superior to the laissez-faire paradigm. Not because the market is failing, but because the speed of technological change exceeds the adaptability of market mechanisms. The role of the state will become more important not only through the economic measures following this structural break, but at least as much in connection with institutional restructuring.

Europe and the architecture of the post-interfacial world

The trajectory is clear: static interfaces (websites, apps, forms) are giving way to conversational interfaces (chatbots, voice assistants), which in turn are giving way to agent interfaces (AI that acts on behalf of users), which in turn are giving way to no interface at all — only intention matters, executed by systems that understand.

In this world, the visual layer doesn’t matter. What matters is the intelligence layer underneath. The APIs. The data. The decision logic. A website doesn’t have to look a certain way — it can generate whatever interface is needed for the specific task. Or it can skip the interface altogether and just do what you asked it to do.

This is the confluence that makes this moment historic: the technological possibility of driving translation costs to zero meets an institutional landscape built on the existence of these costs. This is not a gradual improvement of existing interfaces, but the elimination of the need for interfaces as such.

Any hope that this transition will be smooth is illusory. The logic of technological substitution has become inevitable. But this inevitability also presents an opportunity: a society in which every law speaks, every medical report explains itself, every complex system meets the citizen in their language, in their context, in the reality of their lives. In which the barriers between human intention and systemic performance are not measured in years of specialised training, but in the seconds it takes to say what one wants.

The final interface is the voice. And after that, perhaps, only intention. The danger of this development is obvious: autonomous agent systems develop their own intentions and the "final interface" itself becomes a self-serving actor.

For Europe, whose diversity is both a cultural strength and an institutional weakness, control over the "final interface" becomes a question of sovereignty. It is the decisive layer in the architecture of the post-interfacial world, the layer in which perceptions, interpretations, intentions, decisions and behaviour are shaped.

 

This article is also available in German.

 

Henning Vöpel is Chairman of the Board of the Foundation for Regulatory Policy and Director of the Centre for European Policy (cep). Jonathan Mall is CIO of neuroflash. This essay arose from a conversation about AI, interfaces and the economics of translation in March 2026.


Copyright header picture: Shutterstock