AI ethics initiatives are essential for bringing about fairer, safer, and more trustworthy AI technologies, but they also come with various shortcomings. One of the main concerns is that the proposed guiding principles are often too abstract, vague, or confusing and lack proper implementation guidance. Consequently, there is often a gap between theory and practice, resulting in a lack of practical operationalization by AI researchers, developers, and businesses.

Critics also point out the potential trade-off between ethical principles and corporate interests and the possible use of those initiatives for ethics-washing or window-dressing purposes. Furthermore, most AI ethics guidelines are soft-law documents that lack adequate governance mechanisms and do not have the force of binding law, further exacerbating ethics-washing concerns. Lastly, there is also the risk of regulatory or policy arbitrage, i.e., so-called jurisdiction or ethics shopping, in which actors relocate to regions with laxer standards and fewer constraints, e.g., by offshoring to countries with less stringent requirements for AI systems.

To address those concerns, scholars argue that hard laws are necessary – and perhaps inevitable – and more and more countries are moving in this direction. According to the 2023 AI Index Report, the number of AI-related bills passed into law worldwide grew from just one in 2016 to 37 in 2022. Two of the most notable recent legislative initiatives are the Biden Administration’s Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the E.U.’s Artificial Intelligence Act (AIA).

Notably, while several articles evaluate the strengths and weaknesses of the AIA and propose reform measures that could help strengthen it, only a few do the same for the EO, and almost none compare the two regulatory initiatives. The following sections try to close this research gap by providing an in-depth comparative analysis of the EO and the AIA: the blog post offers a critical computer-ethical evaluation of the two initiatives’ respective strengths and weaknesses as well as their similarities and differences. It also discusses ways to improve both pieces of legislation.

Strengths and Weaknesses

Both initiatives come with multiple strengths and weaknesses. The EO, for example, rests on eight pillars, all of which are crucial from an AI ethics point of view: ensuring AI safety and security; promoting innovation and competition; supporting workers; advancing equity and civil rights; protecting consumers (patients, passengers, and students); protecting privacy; advancing the federal government’s use of AI; and strengthening U.S. leadership abroad and fostering international collaboration. On the other hand, the EO frequently relies on soft-law terminology, best-practice principles, and guidelines. Consequently, it lacks an adequate and effective governance regime, i.e., monitoring, enforcement, and sanctioning mechanisms. It also lacks a risk-based framework similar to the one introduced by the AIA (including the restriction or prohibition of certain AI technologies) and prioritizes national and (cyber-)security aspects over other essential AI ethics concerns.

Among the AIA’s strengths are its legally binding (hard-law) character, its extra-territorial scope and possible extension of the ‘Brussels Effect,’ its ability to address data quality and discrimination risks, and institutional innovations such as the European Artificial Intelligence Board (EAIB) and publicly accessible logs and a database for AI systems. Critics, however, are primarily concerned with the AIA’s proposed governance structure; they specifically criticize its lack of effective enforcement, oversight, and control mechanisms, procedural rights, worker protection, institutional clarity, sufficient funding and staffing, and consideration of sustainability issues.

Similarities

Both proposals attempt to maximize the benefits stemming from the use of (generative) AI technologies while minimizing their costs. Both initiatives pay particular attention to addressing the human rights risks resulting from AI technologies, such as …

  • Spreading disinformation, deception, fraud, and manipulation (e.g., via chatbots and deep fakes),
  • Amplifying discrimination risks and biases (e.g., in the context of [criminal] justice, healthcare, housing, and consumer financial and labor markets),
  • Augmenting dangerous speech, hate crimes, and oppression,
  • Enhancing mass surveillance, and
  • (Further) undermining trust in democracy and the rule of law.

Both initiatives also intend to strengthen the protection of vulnerable groups such as workers and consumers and of marginalized groups such as ethnic minorities, people with disabilities, young children and older adults, and LGBTQ+ community members, to name a few (although more could be done by both the AIA and EO to [better] protect those groups). Lastly, both initiatives use similar definitions (e.g., of AI) and share comparable points of criticism, such as …

  • Lack of a future-proof definition of AI (one that would adequately cover generative AI and/or clearly distinguish between generative AI and other forms of AI),
  • Lack of an effective governance regime,
  • Lack of democratic accountability and judicial oversight,
  • Lack of procedural rights (i.e., remedy and complaint mechanisms),
  • Lack of stakeholder dialogue and engagement, and
  • Lack of provisions addressing the challenges posed by generative AI.

Differences

Besides those similarities, there are also considerable differences between the EO and the AIA, most of which are related to the distinctive regulatory traditions of the E.U. and the U.S. The following paragraphs provide an overview of the main differences.

First, the AIA follows a more comprehensive and holistic approach: The ‘Text of the Provisional Agreement’ comprises twelve titles, 85 articles, and nine annexes; it is 272 pages long and covers a multitude of regulatory issues. In contrast, the EO encompasses eight guiding principles and thirteen sections and is considerably shorter than the AIA; it focuses on specific issues such as innovation and competitiveness and national and (cyber)security concerns while leaving out other important ones (e.g., socio-environmental sustainability or the impact of AI on democratic institutions and the rule of law).

Second, the EO is, in many regards, (significantly) weaker than the AIA; by definition, it is a directive by the president to guide and manage the operations of the federal government. An EO thus differs from (legally binding) hard laws such as the AIA, which establishes government bodies responsible for compliance monitoring and introduces penalties and fines for non-compliance. Most importantly, an EO can be revoked or modified by the next administration. In addition, the EO relies on multiple agencies and departments to develop guidelines and standards, which is not only time-consuming but also increases the risk of inconsistencies, e.g., due to a lack of intra- and inter-departmental coordination.

Third, the E.U. and U.S. follow different regulatory approaches: a (stronger) focus on hard law on one side of the Atlantic (E.U.) versus a (stronger) focus on soft law and industry self-regulation on the other (U.S.); contrary to the AIA, for example, the EO often relies on best-practice guidelines and mere recommendations, some of which are [still] in the making. Notably, these different regulatory traditions or pathways also appear in other economic and policy areas. For instance, the U.S. still lacks a federal privacy or data protection law; instead, it relies primarily on sector- or industry-specific regulations (such as HIPAA), the FTC’s guidelines (such as ‘Notice and Choice’), and state laws (such as the CCPA/CPRA). The focus is primarily on voluntary or industry self-regulation, soft laws, and best-practice guidelines instead of mandatory or legally binding regulation. The E.U., by contrast, follows a more comprehensive and holistic approach with the GDPR; it relies on mandatory laws and regulations (i.e., hard laws) and treats privacy as a human right (i.e., E.U. citizens have the right to access their data, correct inaccurate data, and demand the deletion of specific data under the so-called ‘right to erasure’).

Fourth, the EO sets different priorities than the AIA. It focuses, for instance, much more on safety and security aspects (together with innovation and competition, they make up most of the EO). It also covers a variety of topics that are currently (somewhat) neglected in the AIA, namely worker and consumer protection, competitiveness, innovation (especially in the form of supporting small and medium-sized enterprises [SMEs]), and leadership in AI R&D. While some of those issues are covered in other European laws and regulations (e.g., the E.U.’s Digital Markets Act deals with competition and antitrust issues), it would have been worth revisiting those aspects and discussing them in more detail in the AIA (e.g., the AIA touches upon promoting AI R&D and innovation and supporting SMEs in the sections devoted to so-called ‘regulatory sandboxes’ but not elsewhere). The AIA could thus learn from the EO by focusing more on consumers, workers, and (SME) entrepreneurs.

Fifth, the EO also includes a separate section on data protection, which is missing in the AIA. Unlike the U.S., the E.U. has its own privacy law, the GDPR, commonly referred to as the ‘gold standard’ in data and privacy protection. It is thus not necessary to address those issues in the AIA in detail; yet, given that the GDPR was initially adopted in 2016 – i.e., before the rise of generative AI – it seems worth exploring whether it needs to be updated to account for the challenges posed by those types of technologies. A separate section on privacy and generative AI in the AIA might therefore be worth considering.

Sixth, (somewhat) unlike the AIA, the EO repeatedly highlights the importance of international collaboration and bi-/multilateral agreements to promote ‘global societal, economic, and technological progress.’ Including a separate guiding principle devoted to international collaboration in the AIA, as in the EO, would thus be helpful. The European Commission could also emphasize in official government documents and statements that it is working with its international partners towards leveling the playing field and avoiding a ‘race to the bottom’ and/or detail how these goals could be achieved – especially within the transatlantic partnership.

Reform Proposals

Several reform measures could be taken to address the identified weaknesses of the EO and AIA and strengthen both government initiatives.

For the EO, it is crucial to introduce federal hard-law AI legislation that applies to all fifty states and across industry sectors and thus goes beyond the current privacy regime (to create planning security for all stakeholders, the law should ideally not be easily reversible). Such a governance regime must be effectively enforced, i.e., compliance must be monitored by a federal agency, and non-compliance must be sanctioned, e.g., with the help of behavioral or structural remedies.

This requires a separate government agency to oversee and implement the AI legislation. Such an institution should ideally be politically independent and adequately funded and staffed – including legal and economic advisors and, most importantly, AI experts. To prevent overburdening the FTC, the new federal AI agency should not be tied to existing institutions (the FTC focuses primarily on antitrust and consumer protection, and even its AI-related consumer protection responsibilities should be transferred to a separate institution solely devoted to AI regulation).

Like the AIA, the EO should also introduce mandatory third-party audits for all AI systems, launch a stakeholder engagement and dialogue process, strengthen democratic accountability and judicial oversight, and establish information rights, legal remedies, and complaint and redress mechanisms. It might also be worth considering establishing an AI board, publicly accessible logs, and a database for AI systems. Lastly, socio-environmental sustainability considerations deserve more attention.

For the AIA, the focus should be on introducing or strengthening …

  • Conformity assessment procedures: The AIA – same as the EO – needs to move beyond the currently flawed system of provider self-assessment and certification towards mandatory third-party audits for all high-risk AI systems. That is, the existing governance regime, which grants AI providers and technical standardization bodies a significant degree of discretion in self-assessment and certification, needs to be replaced with legally mandated external oversight by an independent regulatory agency with appropriate investigatory and enforcement powers.
  • Democratic accountability and judicial oversight: What is needed – for the AIA (and the EO) – is meaningful engagement of all affected groups, including consumers and social partners (e.g., workers exposed to AI systems and unions), and public representation in the standardization and certification of AI technologies. The goal is to ensure that those with less bargaining power are included and that their voices are heard.
  • Redress and complaint mechanisms: Besides consultation and participation rights, critics also request the inclusion of explicit information rights; easily accessible, affordable, and effective legal remedies; and individual and collective complaint and redress mechanisms. That is, bearers of fundamental rights must have the means to defend themselves if they feel they have been adversely impacted by AI systems or treated unlawfully, and AI subjects must be able to legally challenge the outcomes of such systems.
  • Worker protection: Critics demand better involvement and protection of workers and their representatives in the use of AI technologies (this is especially true for the AIA, but even the EO should pay more attention to those issues and detail how [consumers and especially] workers can be better protected). This could be achieved by classifying more workplace AI systems as high-risk or prohibiting them. Workers should also be able to participate in management decisions regarding the use of AI tools in the workplace. Their voices and concerns should be heard, especially when technologies that might negatively impact their work experience are introduced. Workers should, moreover, have the right to object to the use of specific AI tools in the workplace and to file complaints.
  • Governance structure: Effective enforcement of the AIA (and the EO) also hinges on independent institutions and ‘ordering powers.’ The EAIB has the potential to be such a power and to strengthen AIA oversight and supervision. This, however, requires that it have the corresponding capacity, expertise (in both technology and fundamental rights), resources, and political independence. To ensure adequate transparency, the E.U.’s AI database should include not only high-risk systems but all forms of AI technologies, and it should list all systems used by private and public entities. The material provided to the public should include information on algorithmic risk and human rights impact assessments, and this data should be available to those affected by AI systems in an easily understandable and accessible format (note that the EO could also benefit from introducing institutions such as the EAIB and publicly accessible logs and a database for AI systems).
  • Funding and staffing of market surveillance authorities: Besides the EAIB and AI database, national authorities must be strengthened, both financially and in terms of expertise (this also applies to the U.S., where establishing new federal or state agencies – besides the FTC – might be worth exploring). Notably, the 25 full-time-equivalent positions foreseen by the AIA for national supervisory authorities are insufficient; additional financial and human resources must be invested in regulatory agencies to implement the proposed AI regulation effectively.
  • Sustainability considerations: To better address the adverse external effects and socio-environmental concerns of AI systems, critics also demand the inclusion of sustainability requirements for AI providers, e.g., obliging them to reduce the energy consumption and e-waste of AI technologies, thereby moving towards green AI. Ideally, those requirements should be mandatory and go beyond the existing (voluntary) codes of conduct.
  • International cooperation: Lastly, national AI legislation efforts should be complemented by a global agenda for promoting AI ethics, as pointed out by the EO. This could, for instance, entail international, cross-border cooperation between various government agencies (e.g., between the E.U. and the U.S.) and international organizations (e.g., the OECD, G7, and G20). The primary purpose of this form of collaboration is to share best practices, tools, and information and to develop a common (i.e., unified) approach toward AI companies and technologies. What needs to be avoided is a fragmented regulatory landscape with incoherent measures – and a ‘race to the bottom’; instead, governments and international organizations should strive towards increased harmonization – and a ‘race to the top.’

This blog post has conducted an in-depth computer-ethical analysis of two of the world’s premier AI regulatory initiatives – the AIA and the EO. It has identified several strengths and weaknesses as well as similarities and differences, and it has suggested reform measures that could help strengthen both initiatives, such as ‘hardening’ enforcement, monitoring, and sanctioning. It remains to be seen whether and, if so, how these measures will be implemented in upcoming government documents.


Dr. Wörsdörfer is an assistant professor at the Maine Business School and School of Computing and Information Science and an associate member of the Climate Change Institute at the University of Maine. His current research focuses on business and human rights, AI ethics, big tech and antitrust, and climate finance. Most of his research has been presented at prestigious international conferences, such as the annual conferences of the Australasian Business Ethics Network, the European Business Ethics Network, and the Society for Business Ethics.


Header picture copyright: Deemerwha studio