The draft EU regulation on AI: reducing the potential or preventing abuse?
On Wednesday 21 April 2021, the European Commission released its much-anticipated draft AI regulation. Will these proposed rules hamper the ongoing development and potential of AI, or are they exactly what is needed to prevent abuse and protect individuals?
The “Proposal for a regulation on a European approach to artificial intelligence” (the draft regulation) follows on from the European Commission’s white paper on “high-risk” AI applications, released in February 2020. An earlier draft was leaked the week before the official release.
The current draft regulation aims to address the often complex challenges of AI technology, including regulating its use, preventing bias and discrimination, and balancing the use of AI by businesses with the needs and fundamental rights of individuals. At the same time, it seeks to encourage the use and growth of AI.
The proposals would ban the use of artificial intelligence for certain purposes and regulate its use in other areas, subject to exceptions such as specific criminal investigations and counter-terrorism. The proposal also includes significant penalties for violations – up to 6% of global annual revenue or €30 million, whichever is greater.
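As a rough illustration of how that “whichever is greater” penalty ceiling works (a sketch for intuition only, not legal advice – the actual fine would be set case by case by regulators):

```python
def max_penalty_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound on fines under the draft regulation:
    the greater of 6% of global annual revenue or EUR 30 million.
    Illustrative only; real penalties depend on the infringement."""
    return max(0.06 * global_annual_revenue_eur, 30_000_000)

# A company with EUR 1 billion in global annual revenue:
# 6% of revenue (EUR 60m) exceeds the EUR 30m floor, so the cap is EUR 60m.
print(max_penalty_eur(1_000_000_000))
```

Note that for any business with global annual revenue below €500 million, the flat €30 million figure is the binding cap.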
Definition of AI
There is no globally accepted definition of AI and the European Commission has already expressed its willingness to take the initiative in establishing one. The draft regulation applies the same definition of an AI system as that used in the EC proposal for a machinery regulation (adopted on the same day as this draft regulation).
Article 3(1) of the draft regulation defines an AI system as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.
According to Annex I, AI includes:
“(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimisation methods.”
Article 5 of the draft regulation sets out the uses of AI that the EU seeks to ban. These include discriminatory use, the use of real-time remote biometric identification systems for law enforcement purposes, and AI that “deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour”.
Two levels of risk
Rather than applying a general rule, the draft regulation divides non-prohibited uses of AI into categories of “normal” AI and “high-risk” AI.
Examples of high-risk AI are likely to include some aspects of self-driving cars (especially as the technology becomes more widely used) and, as the draft regulation puts it, AI systems that could be associated with “personal injury or death or property damage”.
High-risk AI will be subject to risk assessments and various compliance requirements, for example: risk management through test data and staff training, record-keeping, and registration of each AI system in a database managed by the European Commission. Such a centralized high-risk AI database is likely to be controversial, especially since it is unclear what form it would take. Military and weapons use is specifically excluded from the definition of high-risk AI, which is likely to raise international political concerns.
AI in the workplace
It’s no secret that HR departments and recruiters frequently use AI to assess candidates and prioritize job applications. The draft regulation aims to limit this by classifying such use as “high risk” and imposing certain safeguards.
The likely intention of the European Commission is that this classification will limit the potential for AI-driven discrimination in the workplace and protect employee privacy. Yet this provision is already being criticized for its element of “self-assessment”: essentially, the employer will determine whether its use of AI for recruiting and HR complies with the rules. In practice, this means the draft regulation may offer more flexibility to the employer and fail to provide the protection it seeks to offer the employee.
The draft regulation requires that individuals be made aware (unless it is obvious) that they are interacting with AI. The likely objective of the European Commission is to increase transparency and reduce risk to consumers. However, the draft contains far-reaching exceptions to this requirement, not only to safeguard public safety but also to cover satire and parody. So, again, this proposal may not have its intended effect, should it become law.
Clear goals, but are they practical?
The European Commission’s objectives for the draft regulation are clear. In its own words, it aims to:
- “ensure that AI systems placed on the Union market and used are safe and comply with existing law on fundamental rights and Union values
- ensure legal certainty to facilitate investment and innovation in AI
- enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems
- facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.”
It remains to be seen whether the draft regulation can actually achieve this, but there is still plenty of time to debate its provisions. It is at an early stage of the European Union’s legislative process and must now pass through the European Parliament before it can become law – a process that could take several years.
You can access a copy of the draft regulation here.