AI and algorithms (part 3): why the EU AI regulation is a revolutionary proposal
On April 21, 2021, the European Commission released its bold and comprehensive proposal for the regulation of artificial intelligence. With suggested fines of up to 6% of annual global revenue, along with new rules and bans governing high-risk AI systems, the announcement has already generated significant interest, with speculation about its impact on technology companies that develop AI systems and the industries that use them.
Because of the critical role that data plays in the development of machine learning technologies, the regulation has real implications for the privacy profession in particular. While consideration of the full range of implications of the draft text is only just beginning, some important initial conclusions can already be drawn.
New framework for the future
What is immediately apparent from this proposal is a revolutionary attempt to regulate the future of our digital and physical worlds. With its announcement this week, the Commission presented an entirely new set of laws, which intends to place ethical issues such as the mitigation of bias, algorithmic transparency and human oversight of automated machines on a legal footing. The framework therefore promises to have the same profound impact on the use of AI that the EU General Data Protection Regulation (GDPR) has had on personal data.
Data is at the heart of regulation
Data governance is an integral part of the obligations that are supposed to apply to providers of high-risk AI systems. The regulation requires vendors to use a range of techniques for datasets that are used in the training, validation and testing of machine learning and similar technologies. This includes identifying potential biases, checking for inaccuracies and assessing the relevance of the data.
The suggested maximum fines that can be imposed under the regulation, up to 6% of annual worldwide turnover, are only intended to apply in a limited range of circumstances. As an indication, these circumstances include a breach by a supplier of the data governance requirements, demonstrating the importance attributed to this issue by the Commission.
Strong governance and risk management are essential
Although the introduction of the accountability principle into the GDPR was a radical change in privacy law, requiring organizations to put in place practical measures to demonstrate compliance, the AI regulation is even more ambitious. Suppliers of high-risk AI systems will need to implement comprehensive governance and risk management controls. This includes the need to create a regulatory compliance strategy, procedures and techniques for the design and development of the AI system, and a process for assessing and mitigating risks that may arise throughout its lifecycle. Conformity assessments will also need to be undertaken to demonstrate compliance with the requirements of the regulation.
Uncertainty about the obligations of “users”
Organizations that purchase high-risk AI systems from third-party vendors are also subject to new rules. These rules rest on the expectation that users adhere to, and monitor operational performance in accordance with, a set of technical instructions to be developed by the supplier.
However, since it appears that the content of these instructions will be determined on a case-by-case basis and is not clearly specified in the regulation, this could create significant uncertainty for users as to the nature of their compliance obligations.
Lightweight approach for low-risk AI
For AI systems that are neither banned nor deemed to be high risk, the Commission has taken a more pragmatic and leaner approach. Vendors will need to inform individuals when they interact with AI systems, unless this is obvious from the circumstances. However, neither they nor the users will be required to provide detailed explanations of the nature of the algorithms or how they work.
Wide extraterritorial reach
Concerns are likely to arise in relation to the broad extraterritorial scope that the regulation seeks to apply. Suppliers based in third countries, such as the United States, will be subject to the requirements of the regulation if they make their AI systems available in the EU. Likewise, and perhaps more importantly, the law will also apply to providers and users of AI systems where the “output” of those systems is used in the EU. This condition has the potential to catch a significant number of additional organizations that have no commercial presence in Europe.
Lack of a one-stop-shop mechanism
Interestingly, while there are many parallels that can be drawn between the GDPR and this proposal, the Commission has chosen not to include a one-stop-shop mechanism. The mechanism could have enabled a single lead authority to oversee the compliance of organizations operating in multiple Member States. Instead, the regulation provides that one or more national authorities will be appointed in each country with enforcement powers.
This approach could lead to the fragmentation of oversight of AI systems marketed and used on a cross-border basis. It will therefore be interesting to see whether further clarifications will be made on the mechanisms that will ensure appropriate cooperation and coherence between national authorities, beyond the creation of a new European Artificial Intelligence Board.
The significance of harm
Harm, or more specifically the prevention of harm to individuals, is the key objective behind the regulation. The Commission considers that harm can occur both physically, where AI systems are unsafe, and in relation to risks to individuals’ fundamental rights, such as privacy and the right to non-discrimination.
This principle can be seen as the basis for the Commission’s rationale as to why certain types of AI have been identified as high risk and, therefore, subject to new rules and bans.
This is just the beginning
This week’s announcement by the Commission is just the start of a vital debate that needs to take place between policymakers, governments and industry on how AI should be regulated in the future. The next step is for the proposal to be examined and debated by the Council and the European Parliament.
AI governance should become a priority
In the meantime, it is important that organizations that develop or use AI consider the strength of their existing governance mechanisms. AI is becoming an increasingly important topic of interest to regulators, not only in the EU, but also in many other major economies, including the US and UK.
Organizations should determine whether they are currently taking appropriate steps to manage the risks of bias, inaccuracies and other forms of harm in their AI systems, and ensure they have adequate controls in place to comply with existing regulations, including privacy, consumer and anti-discrimination legislation.
This article originally appeared as an IAPP Privacy Outlook publication.
Written by Dan Whitehead.