The EU AI Act – Can We Protect Human Rights with a Product Compliance Regulation?
The European Union’s AI Act, often referred to as the world’s first comprehensive regulation of artificial intelligence, is occasionally criticized for its sometimes inadequate protection of fundamental rights and European values. Some advocates even call for stricter regulation, or for extending the Act's scope to include major platforms’ recommendation algorithms – as originally proposed by the European Parliament.
Concurrently, the business sector often complains that Europe might lose its competitive edge due to overly bureaucratic regulation. This dilemma is frequently framed as "security vs. innovation" or "Big Tech vs. European values".
There is indeed a dilemma concerning fundamental rights in the AI Act, but it is of a different nature. An important but often overlooked feature of the AI Act is that, both in its structure and in its regulatory logic, it is in fact a product conformity (or compliance) regulation that fits into the so-called New Legislative Framework (a package of 29 regulations and directives setting out minimum standards for 26 product types). At first glance, AI systems, which are fundamentally software, fit well within the established realms of technology regulation, as strongly reflected in the original European Commission proposal.
Yet AI is an unusual type of "product". It often lacks physical form. Further, it is not the product itself but its non-deterministic outputs that pose risks. Moreover, AI applications are special in that they can have a negative impact not only on health, safety and the environment (HSE), but also on so-called “soft” values such as fundamental rights, democracy and the rule of law.
The AI Act thus connects two regulatory objectives: the screening of physical (HSE) risks and the management of human rights risks; even the first recital of the AI Act speaks of health, safety and fundamental rights. It attempts to achieve all of this in the form of a norm written along product compliance lines, containing the institutions and procedures standard in such an approach and reflecting an “engineering” logic, complemented by a whole range of rights-protecting solutions (both processes and substantive rules). These rights-protecting aspects of the Act grew ever stronger as the legislative process progressed.
My prediction is that this combination will generate considerable tension, for two reasons.
The first is that while product compliance rules are ex ante in nature, human rights issues and conflicts can in most cases only be handled ex post. Product compliance rules (e.g. the requirements for high-risk AI systems in the AI Act) determine ex ante what must be done to prevent harm from occurring. In stark contrast, the content of the rights at stake in human rights violations - especially the right to non-discrimination, the right to privacy, or freedom of speech - unfolds afterwards, through legal cases, in a natural-language environment, embedded in arguments and counterarguments, and in the social context of the individual case. Thinking in human rights is always thinking in cases.
The second, related reason is that product compliance rules have so far rested on easily quantifiable parameters. A quick glance at any of these norms (for example, the EU Toy Safety Directive) reveals hard requirements referring to physical, mechanical or chemical properties, limit values, prohibited substances and so on. Human rights, democracy or the rule of law are very difficult to quantify - or cannot be quantified at all. Although various indices and lists measure these aspects scientifically and comparatively (see, e.g., this, this or this), they are not suitable for direct legal applications such as imposing fines or granting licenses. Given its blend of product compliance and human rights approaches, it is no wonder that the AI Act is the first product compliance norm that contains almost no clearly quantified criteria.
Context dependency and non-quantifiability mean, in other words, that human rights infringements caused by the outputs of AI are primarily not technical problems that can be handled by technical means. If the “AI” is biased, this means that the training data set is biased (as in the oft-cited COMPAS case). And the training data set is biased because society (or the courts, or the police) is biased. The AI Act will not be able to solve a fundamental social problem with social roots. Developers can be forced to manipulate, clean, sort, label or otherwise change the data and the algorithms in many ways, but the problem will keep coming back. Nor will human rights impact assessment, as a kind of silver bullet, help significantly in this situation; very probably it will be reduced to copying template documents, as often happens with the data protection impact assessments required by the EU’s data protection regulation (GDPR).
The problem is that we have little experience with, and very few examples of, AI systems that violate human rights. The examples we do have, such as the COMPAS case or the Dutch childcare benefits scandal, could indeed be prevented by the mechanisms of the AI Act, but only because we already have deep knowledge of these cases. We do not know, and are unable to foresee, the many other types of human rights violations that might be occasioned by the use of AI. Moreover, in the Dutch case the primary problem was not even the malfunctioning of the AI, but the misuse of sensitive data, the infringement of the rules on automated decision-making, and a whole range of other issues, mainly falling into the categories of “traditional” data protection and fair procedure problems.
Some even argue, as seen in a recent blog post, that only reactive (ex post) regulation makes sense for AI, but I strongly disagree. My argument here is directed neither against ex ante regulation (which is perfectly appropriate for HSE issues), nor against a fundamental rights-centred way of thinking in the AI context. I deeply believe that with such a novel technology, almost the only thing we can rely on is human rights, because these are the only reliable and stable side constraints that create boundaries for these technologies in use cases that are very often unforeseeable. But unfortunately, because human rights are deeply embedded in society’s fabric and in courts’ reasoning and language, an ex ante product conformity regulation designed for the HSE universe will be applicable to human rights risks only to a very limited extent. One consequence is that it will be very difficult to accommodate soft and context-dependent human rights requirements during the programming and training of AI systems, and especially to document all of them convincingly in technical documentation. Another is that we will not be able to filter out all risks, no matter what policymakers believe or promise. We have to accept that we will experience and learn the effects and limits of AI on most human rights ex post facto, on a case-by-case basis, in a trial-and-error process.
Zsolt Ződi is a senior research fellow at the Institute of the Information Society, Ludovika University of Public Service, Budapest, Hungary. The author is grateful to András Jakab for his comments on the first version of the text.
Suggested citation: Zsolt Ződi, ‘The EU AI Act – Can We Protect Human Rights with a Product Compliance Regulation?’ IACL-AIDC Blog (4 June 2024) blog-iacl-aidc.org