
European lawmakers have reached a political agreement on the EU AI Act, a regulatory framework that will have implications beyond Europe. Applications and software engineering leaders should prepare to phase out prohibited AI systems, and to adopt AI trust, risk and security management practices.

Quick Answer

What are the anticipated impacts of the European Union AI Act?

  • The AI Act is not yet approved, and identifying the salient takeaways of the latest agreement is challenging; leaders should therefore focus on the conceptual compliance points of the framework that are unlikely to change.

  • The law will likely lead to both positive and negative outcomes for users and providers of AI in the European Union, so leaders should appreciate potential opportunities and prepare now to overcome future obstacles the EU AI Act could pose.

  • Although details of the text may be updated as the approval process progresses, leaders that adopt and operationalize responsible AI principles will be ahead of the competition in meeting most of the upcoming requirements.

 

More Detail


Key Elements of the Draft EU AI Act


The Members of the European Parliament reached a political agreement on the text of the European Union Artificial Intelligence Act (EU AI Act) on 9 December 2023.1 The text is currently undergoing further scrutiny before being officially approved, so the provisions of the law may change. However, the foundational principles on which the EU AI Act is based are likely to remain the same. These include:2,3,4

  • AI uses are regulated based on their level of risk.
  • The EU AI Act will apply to any AI systems placed in the EU market or affecting EU citizens, so it will affect organizations outside the European borders.
  • Noncompliant organizations will have to pay sizable penalties.

The EU AI Act will be enforced by member state authorities and coordinated by a new European AI Office.5

AI Uses Are Regulated Based on Their Risk Level

AI-enabled systems are grouped into three risk categories, each one entailing specific obligations (a simple classification sketch follows the list):6

  • Unacceptable risk: Systems that may clearly threaten the fundamental rights of people in the EU will be banned.7 Notably, restrictions will apply to biometric systems.8 Emotion tracking in the workplace may be prohibited, as may people categorization and remote biometric identification in public spaces, although selective exceptions may apply for law enforcement.
  • High risk: High-risk AI systems are those with a potential negative impact on people’s safety or fundamental rights.7 Such systems will be subject to extensive and stringent compliance requirements, including informing users, logging of activity, documentation, human oversight and data quality assessment.
  • Minimal risk: The majority of AI systems with narrow and tactical objectives will probably fall within this category. Although the EU AI Act does not impose any specific obligations in relation to the use of minimal-risk AI, voluntary codes of conduct are encouraged.
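
To make the tiering concrete, the following sketch shows one way an organization might tag its AI use cases with the risk categories above and surface the associated obligations. The categories mirror the Act; the example use cases and their mapping are illustrative assumptions, not legal determinations.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk categories outlined in the EU AI Act (see the list above)."""
    UNACCEPTABLE = "unacceptable"  # prohibited; must be phased out
    HIGH = "high"                  # extensive compliance requirements
    MINIMAL = "minimal"            # voluntary codes of conduct encouraged

# Illustrative mapping only; real classification requires legal review.
USE_CASE_TIERS = {
    "workplace_emotion_tracking": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "document_spell_checking": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Summarize the obligations tied to a use case's risk tier."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is RiskTier.UNACCEPTABLE:
        return "Banned: plan to replace or retire this system."
    if tier is RiskTier.HIGH:
        return "Allowed with controls: logging, documentation, human oversight."
    if tier is RiskTier.MINIMAL:
        return "No specific obligations; voluntary code of conduct encouraged."
    return "Unclassified: run a risk assessment before deployment."

print(obligations("workplace_emotion_tracking"))
```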

Most AI-enabled systems will be subject to additional transparency requirements, that is, disclosure obligations to protect users. For example, users will have to be informed when they are talking to a chatbot, when the content they are consuming is a “deepfake” and what type of biometric categorization system is being used.

For general-purpose models (also referred to as general-purpose AIs, or GPAIs), specific transparency obligations are imposed, which relate to technical documentation and some degree of visibility into the data sources used for training the models. Systemic risks are also identified for the largest GPAIs. This provision applies only to the highest-performance models: the threshold is currently set at models trained using a total computing power of more than 10^25 floating-point operations (FLOPs).9 However, as AI models become more capable, this threshold may increase.10 Additional mandatory requirements will apply to such models, for example, in relation to model evaluation and systemic risk assessment.
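
To give a sense of scale for that threshold (see note 9), here is a back-of-the-envelope sketch estimating the total training compute of a hypothetical run; all hardware figures are assumptions chosen for illustration, not a reference to any real model.

```python
# Rough estimate of total training compute, compared against the EU AI Act's
# systemic-risk proxy threshold of 1e25 FLOPs. All figures are illustrative.
ACCELERATORS = 25_000         # chips in the hypothetical training cluster
PEAK_FLOPS_PER_CHIP = 1e15    # peak floating-point operations per second
UTILIZATION = 0.40            # realized fraction of peak throughput
TRAINING_DAYS = 30
SECONDS_PER_DAY = 86_400

total_flops = (ACCELERATORS * PEAK_FLOPS_PER_CHIP * UTILIZATION
               * TRAINING_DAYS * SECONDS_PER_DAY)

THRESHOLD = 1e25
print(f"Estimated training compute: {total_flops:.2e} FLOPs")
print("Above threshold" if total_flops > THRESHOLD else "Below threshold")
```

With these assumed figures, the run lands at roughly 2.6 × 10^25 FLOPs, above the threshold; reducing the compute budget by about two-thirds would bring it under.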

Free and open-source models may be largely exempted from the obligations of the EU AI Act, as long as they are not GPAIs that pose systemic risks.10

 

Figure 1: Risk Categories and Types Outlined in the EU AI Act

The EU AI Act Will Apply to Organizations Outside of EU Borders

Similar to the General Data Protection Regulation (GDPR), the EU AI Act will have an extraterritorial reach. Both public and private organizations worldwide will have to comply if the AI system they are responsible for “is placed on the European Union market or its use affects people located in the EU.”10 Such responsibility may concern both providers and deployers of AI systems.

Noncompliant Organizations Will Have to Pay Sizable Penalties

The EU AI Act entails a progressive sanctioning scale, which ties fines to the severity of the violations. The financial penalties will be expressed as a specific amount or a significant percentage of global annual turnover (up to 7% of the total worldwide annual revenue of the preceding financial year for the most egregious violations).10 SMEs and start-ups will likely be fined in a way that is proportional to their size relative to larger technology firms. Since the EU AI Act builds upon the GDPR, it is likely that transgressors will be fined for violations of both the EU AI Act and the GDPR simultaneously.

 

Potential Impacts of the EU AI Act

Short term:

  • Barrier to European AI companies’ competitiveness: The AI Act will be a short-term barrier to European AI companies’ competitiveness in comparison with overseas vendors, slowing time to market and increasing legal spend. However, this would only happen if an organization is not currently in compliance with other obligations or is developing products in the high-risk or prohibited categories. At the same time, the EU AI Act could temporarily drive investment in AI to other regions, where regulations are less demanding.
  • Inhibition of AI adoption and development in Europe: As it could require the reevaluation of certain key business use cases, the Act may slow the adoption of AI technologies, causing organizations to lose opportunities to leverage AI to improve business outcomes. The “expansive” nature of the EU AI Act makes it challenging to implement for compliance and third-party risk management purposes. For many organizations, this will be a heavy lift, especially for smaller players trying to launch highly innovative, first-of-a-kind AI products in Europe. For organizations with strong privacy and security programs already in place, however, the additional burden may be modest.
  • Compatibility with preexisting enterprise requirements: It could address key requirements that organizations willing to adopt AI (GenAI in particular) have, especially in sensitive verticals such as financial services, life sciences, manufacturing or healthcare. This may drive buyer decisions to Europe from overseas markets.

Long term:

  • Competitive advantages for compliant organizations: It could foster European AI companies’ (both vendors’ and buyers’) long-term competitiveness in comparison with overseas vendors, especially those that implement the key obligations of the Act ahead of the legal deadline and that mitigate the risk of accumulating technical debt, independently of the geographic focus of their operations. Such regional advantages could become global if other regions follow the lead of EU regulation. The competitive advantage flows from an “in control” situation that in turn fosters the public’s trust in, and commitment to, the organization’s AI usage.
  • Unified standard for responsible AI use: By setting the bar for compliance requirements in a fast-paced technology industry, the Act could establish a consistent set of expectations and guidelines for the acceptable use of AI. This could harmonize third-party vendor risk assessment processes and controls across all organizations. The Act is a bold first attempt at establishing legal boundaries. It will apply to technology areas that would otherwise continue to grow while depending on the companies themselves (and marginally applicable regulatory frameworks such as the GDPR) to self-assess ethics, safety, security and privacy standards.
  • A blueprint for other regulations: In the long term, the EU AI Act could become an example for other countries to follow and take inspiration from, as the GDPR was for privacy protection regulations worldwide. The EU AI Act represents an opportunity for Europe to lead the global AI race when it comes to responsible innovation. In the short term, as lawmakers set boundaries and road maps for safe AI innovation, implementing and executing such guardrails could prove easier for legal and compliance teams than defining them from scratch.
  • Social responsibility and protection of human rights: The Act could represent a milestone in safeguarding the human rights of European citizens with regard to a technology that is becoming increasingly pervasive and, if left unattended, invasive.

Figure 2: Potential Impacts of the EU AI Act

 
 

Preliminary Recommendations

The EU AI Act has not been approved at the time of writing, so any details are subject to change in its final version, which is expected to be formally approved in early 2024, probably before the European Parliament elections that will take place between 6 June and 9 June 2024.11 It is, and will continue to be, a framework under constant change, with many moving parts. Likewise, how AI use cases and technology will evolve in the next few months, let alone years, is largely unknown.

Applications and software engineering leaders should use the following recommendations as a starting point in their decision-making processes when defining AI-enabled use cases, as well as selecting deployment approaches or productized AI solutions:

  • Use AI responsibly and leverage AI TRiSM capabilities: When in doubt, assess whether you are using AI responsibly. Ask whether your use cases and systems meet the typical “responsible AI” criteria of fairness, transparency, security, privacy, human-centricity and accountability, as well as social, environmental and economic sustainability. AI trust, risk and security management (TRiSM) technologies are designed to improve AI systems, specifically the reliability, trustworthiness, transparency, fairness, privacy and security of AI models and applications. The responsible AI framework and AI TRiSM capabilities can help you anticipate and stay ahead of several upcoming compliance requirements.
  • Address critical-path compliance requirements: Ensure compliance with applicable data-related regulatory requirements, such as the GDPR. This will be a prerequisite for compliance with the EU AI Act, since much of the infrastructure required to fully comply with the GDPR (including creating a data inventory, keeping records of processing activities and conducting data privacy impact assessments) can be adapted to the Act. Compliance with applicable data-related regulatory requirements will also ease the transition for enterprises and limit the costs and “change overload” for businesses associated with the new law. Granular control over any data processed, personal or otherwise, is key to successful usage of AI technology. Instead of waiting for the final text of the Act to be approved, you should start gathering information across your organization’s business units and run a preliminary evaluation that includes the following (a minimal inventory sketch follows this list):

  1. Making an inventory of the relevant AI models and AI-enabled applications.
  2. Reviewing and extending existing practices to maximize fairness, reliability and explainability of the decisions made by AI models, starting with better traceability of data uses, but being aware that none of these can be fully ensured.
  3. Extending security practices to the new attack surfaces.12

 

  • Assess the risks posed by existing and new initiatives: Any risk-assessment process starts with a proper definition and reevaluation of use cases, including those you have already implemented and those bound to your current business processes. Ensure the way you embed AI capabilities is proportional, and avoid abusing them. For example, “emotion recognition” systems that monitor employee behavior are highly likely to be prohibited, so identify if, when and how this specific capability is used in your business processes and prepare to replace it.
  • Address procurement risk by adopting a strategy: Establish a procurement strategy, risk appetite and policy. Work with legal to align third-party contract provisions and vendor codes of conduct to the new requirements. Provide procurement with a set of criteria for acceptable vendor behavior, and develop a response for when existing vendors start to deviate from your strategy. Avoid locking yourself into a single vendor’s offerings; adopt a modularized approach, preventing overreliance on a single tool and keeping options open for substitutions.
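
As a starting point for the inventory called for in the second recommendation above, the sketch below shows a minimal record one might keep per AI system. The fields are assumptions derived from the compliance themes in this note (risk tier, data sources, human oversight, vendor), not a prescribed schema; the example entry and vendor name are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Minimal inventory entry for an AI model or AI-enabled application.

    Fields mirror the compliance themes discussed in this note; extend
    them to match your own risk and procurement processes.
    """
    name: str
    owner: str                    # accountable business unit or team
    risk_tier: str                # "unacceptable" | "high" | "minimal" | "unclassified"
    data_sources: list[str] = field(default_factory=list)  # traceability of data uses
    human_oversight: bool = False # is a human in the loop for decisions?
    vendor: str | None = None     # third-party provider, if any
    notes: str = ""

inventory = [
    AISystemRecord(
        name="resume-screening-model",
        owner="HR",
        risk_tier="high",
        data_sources=["applicant_tracking_db"],
        human_oversight=True,
        vendor="ExampleVendor Inc.",  # hypothetical vendor name
    ),
]

# Flag entries that still need a risk assessment.
for record in inventory:
    if record.risk_tier == "unclassified":
        print(f"Needs assessment: {record.name} (owner: {record.owner})")
```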
 
 

Evidence

1 The process that led to the current version of the Act formally started in 2021, although notably the European Commission had already published the communication Artificial Intelligence for Europe in April 2018 and the Ethics Guidelines for Trustworthy AI in April 2019, plus a White Paper on AI in February 2020.

The major milestones in this process have been:

  • April 2021: The European Commission presents a proposal for the EU AI Act, the first-ever comprehensive legal framework on artificial intelligence.
  • December 2022: The European Council adopts its common position (“general approach”) on the EU AI Act.
  • June 2023: The European Parliament approves its own draft proposal for the EU AI Act with 499 votes in favor, 28 against and 93 abstentions.
  • June 2023: Start of the “trilogue negotiations” involving the European Union’s three branches (the European Parliament, the European Commission and the European Council) to reach a provisional agreement on the EU AI Act.
  • November 2023: France, Germany and Italy reach a separate agreement on how AI should be regulated. The split threatens the negotiations on the EU AI Act and originates from concerns raised by France, Germany and Italy about the regulation of foundation models in particular. Among other points, the side agreement supports the “mandatory self-regulation” of foundation models via voluntary codes of conduct and the regulation of the applications of AI rather than the technology in itself, and does not endorse the immediate need for a regulated sanctioning system.
  • December 2023: The trilogue negotiations come to an end, and Members of the European Parliament (MEPs) reach a political deal. The text has still to be formally adopted by the European Parliament and the European Council to become EU law.

2 Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI, European Parliament News

3 Commission Welcomes Political Agreement on AI Act, European Parliament News

4 Artificial Intelligence Act: Council and Parliament Strike a Deal on the First Rules for AI in the World, European Parliament News

5 Both the EU and individual member state authorities will be tasked with enforcement of the EU AI Act. Individual member states will be required to designate “competent national authorities” to implement and enforce laws at a national level. Unlike the GDPR, multiple regulatory agencies within a single member state may be empowered to implement and enforce the EU AI Act. Moreover, a new European AI Office within the European Commission will be created to coordinate these efforts at the European level. Finally, a dedicated scientific panel of independent experts will assist these authorities in monitoring and testing general-purpose models.

6 Sources do not completely agree on the current categorization of risks adopted in the agreed version of the text, so here we are adopting the most common one. Previous versions of the EU AI Act also included a “limited risk” category, which appeared to mostly pose transparency requirements similar to those we refer to in the body of the note with reference to non-GPAI systems.

7 Charter of Fundamental Rights of the European Union, EUR-Lex

8 Biometric data already constitutes a special category of personal data in underlying data-protection regulation.

9 Such models are currently defined as those “using a total computing power of more than 10^25 FLOPs.” FLOPs here stands for floating-point operations, that is, the total amount of compute used to train a model; the figure of 10^25 FLOPs serves as a proxy threshold above which general-purpose models are considered to pose a systemic risk in the EU AI Act.

10 Artificial Intelligence — Q&As, European Parliament News

11 It is expected that the EU AI Act will be fully applicable two years after the approval. According to the current agreed text, prohibited systems will have to be phased out starting from six months after the formal approval, and obligations for GPAI governance will become applicable after 12 months.

12 OWASP Top 10 for Large Language Model Applications, OWASP Foundation