AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement

The representatives of the main EU institutions at the last political trilogue on the AI Act. [European Commission]

After 22 hours of intense negotiations, EU policymakers found a provisional agreement on the rules for the most powerful AI models, but strong disagreement in the law enforcement chapter forced the exhausted officials to call for a recess.

The AI Act is a landmark bill to regulate Artificial Intelligence based on its capacity to cause harm. The file is at the last stage of the legislative process as the EU Commission, Council, and Parliament meet in so-called trilogues to hash out the final provisions.

The final trilogue started on Wednesday (6 December) and ran almost uninterrupted for an entire day until a recess was called for Friday morning. In this first part of the negotiations, an agreement was reached on how to regulate powerful AI models.

Scope

The regulation’s definition of AI takes all the main elements of the OECD’s definition, although it does not repeat it word for word.

As part of the provisional agreement, free and open-source software will be excluded from the regulation’s scope unless it constitutes a high-risk system, a prohibited application, or an AI solution at risk of causing manipulation.

On the negotiators’ table after the recess will be the issue of the national security exemption, since EU countries, led by France, asked for a broad exemption for any AI system used for military or defence purposes, including for external contractors.

Another point to discuss is whether the regulation will apply to AI systems that were placed on the market before its entry into application if they undergo a significant change.

Foundation models

According to a compromise document seen by Euractiv, the tiered approach was maintained, with an automatic categorisation as ‘systemic’ for models that were trained with computing power above 10^25 floating point operations.

A new annexe will provide criteria for the AI Office to make qualitative designation decisions ex officio or based on a qualified alert from the scientific panel. Criteria include the number of business users and the model’s parameters, and can be updated based on technological developments.

Transparency obligations will apply to all models, including reporting on energy consumption and publishing a sufficiently detailed summary of the training data “without prejudice of trade secrets”. AI-generated content will have to be immediately recognisable.

Importantly, the AI Act will not apply to free and open-source models whose parameters are made publicly available, except for the obligations to implement a policy to comply with copyright law, publish the detailed training data summary, comply with the rules for systemic models, and assume the responsibilities along the AI value chain.

For the top-tier models, the obligations include model evaluation, assessing and keeping track of systemic risks, and cybersecurity protection.

The codes of practice are only meant to complement the binding obligations until harmonised technical standards are put in place, and the Commission will be able to intervene via delegated acts if the process is taking too long.

Governance

An AI Office will be established within the Commission to enforce the foundation model provisions. The EU institutions are to make a joint declaration that the AI Office will have a dedicated budget line.

AI systems will be supervised by national competent authorities, which will be gathered in the European Artificial Intelligence Board to ensure consistent application of the law.

An advisory forum will gather stakeholder feedback, including from civil society. A scientific panel of independent experts was introduced to advise on the regulation’s enforcement, flag potential systemic risks and inform the classification of AI models with systemic risks.

Prohibited practices

The AI Act includes a list of banned applications because they are deemed to pose an unacceptable risk. The bans confirmed so far are on manipulative techniques, systems exploiting vulnerabilities, social scoring, and indiscriminate scraping of facial images.

However, the European Parliament proposed a much longer list of banned applications and is facing strong pushback from the Council. According to several sources familiar with the matter, MEPs were being pressured to accept a package deal, seen by Euractiv, that is extremely close to the Council position.

The parliamentarians were split on the matter, with the centre-right European People’s Party, co-rapporteur Dragoș Tudorache, and the president of the Socialists & Democrats parliamentary group, Iratxe García, pushing to accept the deal.

The Council’s text would ban biometric categorisation systems based on sensitive personal traits such as race, political opinions and religious beliefs, “unless those characteristics have a direct link with a specific crime or threat”.

The examples given were of religiously or politically motivated crimes. Still, the presidency insisted on retaining the possibility of racial profiling.

While left-of-centre lawmakers want to ban predictive policing outright, the Council’s proposal limits the ban to investigations based solely on the system’s prediction, excluding cases where there is a reasonable suspicion of involvement in criminal activity.

The Parliament also introduced a prohibition for emotion recognition software in the workplace, education, law enforcement, and migration control. The Council is only willing to accept it in the first two areas, except for medical or safety reasons.

Another controversial topic is the use of Remote Biometric Identification (RBI). MEPs have agreed to drop a complete ban in favour of narrow exceptions related to serious crimes. The Council is pushing to give law enforcement agencies more room for manoeuvre and to classify ex-post use as a high-risk application.

An additional open issue relates to whether these bans should apply only to systems used within the Union or also prevent EU-based companies from selling these prohibited applications abroad.

[Edited by Zoran Radosavljevic]
