KEY POINTS

  • Lawmakers passed the legislation in March, but its publication in the Official Journal of the European Union in July set the wheels in motion for its entry into force
  • Under the law, regulators will evaluate AI technologies on a scale of four levels of risk
  • Over 700 companies say they will sign on to an AI Pact, a commitment to comply with the law early

The European Union set a global precedent on Thursday as the EU AI Act officially came into force. This landmark legislation governing artificial intelligence (AI) applies to all AI systems currently in place or under development within the EU. First proposed by the European Commission in 2021, the AI Act aims to address the potential negative impacts of AI and to establish a regulatory framework for its development and use by companies and individuals across the bloc.

The AI Act is widely recognised as the first comprehensive law regulating artificial intelligence. After lawmakers passed it in March, its publication in the Official Journal of the European Union in July set the wheels in motion for its entry into force. The official start date triggers a series of key deadlines by which companies must familiarise themselves with the new legislation and ensure compliance.

How Will the AI Act Work?

The AI Act takes a risk-based approach to AI technologies, sorting them into four tiers: minimal or no risk, limited risk, high risk, and unacceptable risk, with systems in the final category prohibited outright.
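
The tiered structure lends itself to a simple illustration. The Python sketch below models the four tiers and maps a few example systems to them; the tier names follow the Act, but the example assignments are illustrative assumptions, not legal classifications.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal or no risk"      # largely unregulated, e.g. spam filters
    LIMITED = "limited risk"            # transparency duties, e.g. chatbots disclosing they are AI
    HIGH = "high risk"                  # strict obligations: risk management, logging, documentation
    UNACCEPTABLE = "unacceptable risk"  # prohibited outright, e.g. social scoring

# Hypothetical assignments for illustration only; real classification is a
# legal assessment under the Act, not a lookup table.
EXAMPLE_SYSTEMS = {
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,
    "loan-decision system": RiskTier.HIGH,
    "social-scoring system": RiskTier.UNACCEPTABLE,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value}")
```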

High-Risk AI Systems

Examples of high-risk AI systems include autonomous vehicles, medical devices, loan decision systems, educational scoring, and remote biometric identification systems. These systems will face strict obligations under the AI Act, including:

  • Adequate risk assessment and mitigation systems
  • High-quality training datasets to minimise bias
  • Routine activity logging
  • Mandatory sharing of detailed documentation with authorities for compliance assessment

Prohibited AI Systems

The AI Act also bans AI applications deemed "unacceptable" due to their risk level. Prohibited applications include:

  • "Social scoring" systems that rank citizens based on data aggregation and analysis
  • Predictive policing
  • Emotional recognition technology in workplaces or schools
  • AI systems that manipulate user decision-making or expand facial recognition databases through internet scraping

Who Will Be Affected?

The burden of the new legislation will fall most heavily on large U.S. technology companies, which are the main developers of advanced AI systems. For instance, Meta has already restricted the availability of its AI model in Europe due to regulatory concerns, a move influenced by the EU AI Act. The company was previously ordered to stop training its models on Facebook and Instagram posts in the EU.

While Big Tech firms will be heavily impacted, the AI Act will also affect a wide range of businesses, including non-tech firms. According to Thomas Regnier, a spokesperson for the European Commission, about 85% of AI companies fall under the "minimal risk" category, requiring little regulation.

Compliance Timelines

Broadly, companies will have between three and six months to comply with the new regulations. Heather Dawe, head of responsible AI at the consulting firm UST, is already working with international clients to align their AI use with the AI Act. Dawe notes that while the guidelines are clear, complications arise if companies do not start the compliance process promptly.

More than 700 companies have pledged to sign the EU's AI Pact, committing to comply with the Act ahead of its deadlines. Dawe suggests that companies establish internal AI governance boards comprising legal, tech, and security experts to audit their technologies and ensure compliance with the new law.

Companies that breach the EU AI Act could face significant fines. Penalties range from €7.5 million or 1.5% of global annual turnover up to €35 million or 7% of global annual turnover, whichever amount is higher, depending on the severity of the infringement and the size of the company.
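
To make the "whichever is higher" mechanics concrete, here is a short Python sketch using invented turnover figures; it illustrates the cap structure described above and is not legal guidance.

```python
def max_fine(turnover_eur: int, fixed_cap_eur: int, pct_cap: float) -> int:
    """Return the higher of the fixed cap and the percentage-of-turnover cap."""
    return max(fixed_cap_eur, round(pct_cap * turnover_eur))

# Most serious infringements: capped at €35m or 7% of global annual turnover.
# For a firm with €2bn turnover, 7% (€140m) exceeds the €35m floor.
print(max_fine(2_000_000_000, 35_000_000, 0.07))

# Lesser infringements: capped at €7.5m or 1.5% of turnover.
# For a firm with €100m turnover, the €7.5m figure exceeds 1.5% (€1.5m).
print(max_fine(100_000_000, 7_500_000, 0.015))
```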

Oversight by the European AI Office

The European AI Office, established by the Commission in February 2024, will oversee compliance with the Act. The office will supervise general-purpose AI models and ensure adherence to the regulations. According to Euronews, the office will be staffed by 60 internal Commission employees and 80 external hires within the next year.

An AI Board, comprising high-level delegates from all 27 EU member states, held its first meeting in June to lay the groundwork for the Act's implementation. This board will work with the AI Office to ensure harmonised application of the Act across the EU.

Clarifying Risk Levels

Risto Uuk, the EU Research Lead at the Future of Life Institute, believes the European Commission still needs to clarify the risk levels of specific technologies. Uuk points out that the current guidance is quite general, and companies may need more specific feedback as the implementation progresses.

Uuk also suggests that the EU could impose stricter regulations and higher fines on Big Tech companies operating generative AI in the bloc. Major AI developers such as OpenAI and DeepMind, which fall under the "general-purpose AI" category, are currently classified as minimal risk.

Generative AI and the Act

Generative AI is identified in the AI Act as an example of "general-purpose" AI, referring to tools capable of performing a broad range of tasks on par with, or better than, a human. Examples include OpenAI's GPT, Google's Gemini, and Anthropic's Claude. The Act imposes strict requirements on these systems, such as:

  • Adherence to EU copyright laws
  • Transparency disclosures on model training
  • Routine testing and cybersecurity protections

The AI Act differentiates between various AI models, ensuring that open-source models—accessible to the public for building tailored AI applications—are not overly regulated.

Although the AI Act is now in force, most of its provisions will not take effect until at least 2026. Restrictions on general-purpose AI systems will begin 12 months after the Act's entry into force, while generative AI systems already available commercially have a 36-month transition period to comply with the new regulations.
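
As a worked example of this staggered schedule, the short Python sketch below projects the milestones from the entry-into-force date; the 1 August 2024 start date is the widely reported one, and the helper function is illustrative.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act's widely reported entry-into-force date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of calendar months."""
    month_index = d.month - 1 + months
    return d.replace(year=d.year + month_index // 12, month=month_index % 12 + 1)

# Milestones named in the article: general-purpose AI restrictions after
# 12 months, and a 36-month transition for generative AI systems already
# on the market.
print("General-purpose AI restrictions apply:", add_months(ENTRY_INTO_FORCE, 12))  # 2025-08-01
print("Transition for existing systems ends:", add_months(ENTRY_INTO_FORCE, 36))   # 2027-08-01
```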