Towards a European legal framework for the development and use of artificial intelligence
In 2014, Stephen Hawking said: “The development of full artificial intelligence could spell the end of the human race.” While the use of artificial intelligence is not new and dates back to Alan Turing (the godfather of computational theory), prominent researchers, Stephen Hawking among them, have expressed concerns about the unregulated use of AI systems and their impact on society as we know it.
While this concern has been raised since the dawn of computing, AI now powers so many real-world applications, ranging from facial recognition to fraud prevention, that demands for regulation of these systems have finally led to a proposal for a European framework introducing new obligations for suppliers, importers, distributors and users of artificial intelligence.
On April 21, 2021, the European Commission presented a proposal for a regulation (the “Regulation”), providing a legal framework for the development and use of artificial intelligence systems (“AI systems”). Although many organizations and institutions have published guidelines for AI, the EU is the first in the world to present a regulatory proposal of this scope. AI regulation has been high on the EU agenda for some time; the Regulation is part of the EU’s broad strategy to ‘shape the digital future’. The objective of the Regulation is to create an internal market that facilitates safe and reliable AI systems and avoids market fragmentation, thereby ensuring legal certainty with regard to AI systems. The idea is that well-functioning regulation will increase public confidence in the safety of AI, which in turn will increase the development and use of AI.
Who will the Regulation apply to?
The Regulation is intended to apply to public and private providers who place AI systems on the EU market, to users of AI systems located in the EU, and to providers and users of AI systems located outside the EU where the output produced by the AI system is used in the EU. The Regulation introduces requirements that may apply to providers, importers, distributors and users in connection with the development, marketing and commissioning of high-risk AI systems.
What systems are regulated?
The Regulation adopts a definition of AI that covers not only “machine learning” techniques but also other algorithmic systems (e.g., decision trees, search methods). It implements a risk-based approach, distinguishing between systems posing an unacceptable risk, a high risk, a limited risk and a low risk:
- Unacceptable risk: a limited set of AI applications that pose a clear threat to the safety, livelihoods and rights of EU citizens, for example systems that allow governments to conduct social scoring, and real-time biometric identification systems used in publicly accessible spaces for law enforcement purposes.
- The use of these systems is prohibited.
- High risk: i) AI systems that are incorporated into a product as a safety component (or are themselves a product), including medical devices, toys or cars, and ii) AI systems with potential implications for fundamental rights, related (for example) to recruitment, credit scoring or critical infrastructure.
- Most of the Regulation addresses strict obligations for high-risk AI systems, including ensuring transparency, establishing risk management systems, using high-quality data sets, and monitoring ongoing compliance.
- Limited risk: AI systems such as human interaction systems (chatbots), emotion recognition systems and “deepfakes”.
- For these systems, specific transparency obligations apply: users must be informed that they are interacting with an AI system.
- Low Risk: Applications that pose little or no risk to users.
- These applications remain unregulated. However, the Regulation encourages providers to voluntarily apply codes of conduct to their AI systems, for example by meeting the obligations that apply to high-risk systems even where this is not strictly required.
Each Member State will designate national authorities responsible for overseeing the application and implementation of the Regulation, and will designate one of them as the national supervisory authority. This authority will also be represented on the European Artificial Intelligence Board, which will be established under the Regulation and will advise the European Commission on its implementation and enforcement.
The Regulation leaves supervision of its application to the Member States, but requires them to take all measures necessary to ensure that it is properly implemented, including effective and dissuasive penalties. To this end, the Regulation specifically provides for penalties and sets the maximum amount of a penalty for certain categories of infringements of its provisions.
Enforcement in the Netherlands?
The Regulation leaves national authorities some freedom in how they apply it. It is not yet clear which Dutch supervisory authority will be charged with its application. A study commissioned by the Dutch Ministry of the Interior shows that generic and cross-sector oversight of both government and the private sector can be further strengthened. In addition to the actions already under way, continued attention is needed to strengthen the capacity of the supervisory authorities and the cooperation between them.
As mentioned in the introduction, one of the aims of the Regulation is to increase confidence in AI in order to stimulate innovation. The Regulation creates opportunities for companies specifically aimed at fostering that innovation. One is the possibility of experimenting with AI in regulatory “sandboxes”: controlled test environments in which an AI system can be developed and tested under supervision, without affecting other systems. The other opportunity facilitated by the Regulation concerns digital innovation hubs, where companies can share information and experiences on AI.
Position of the Netherlands
The Dutch position on the proposal was announced on May 31, 2021. The government is generally positive, but has raised several questions and objections regarding feasibility, the definitions used and the lack of room for evaluation. The European Data Protection Board – of which (a representative of) the Dutch Data Protection Authority is a member – also expressed its opinion on the Regulation on June 18, 2021, highlighting in particular the risks associated with remote biometric identification of individuals in publicly accessible spaces, and the risks of AI systems that use biometrics to categorize individuals into groups based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited. In June 2021, the Dutch government indicated in a letter to Parliament that it would create a legal basis under the Dutch GDPR Implementation Act for the processing of these special categories of personal data, in order to prevent discrimination in algorithmic systems.
Outlook
The European Parliament and the Member States will now examine the proposal. This is likely to take some time, given the significant impact of the Regulation. Once adopted, the final Regulation will be directly applicable across the EU. It is expected to enter into force in about two to three years and to apply from 24 months after its entry into force.