by Valerio De Stefano and Antonio Aloisi on 8th November 2021

A draft EU regulation on artificial intelligence risks exclusion of the social partners and lack of compliance with data-protection requirements.

Introduced by agreement, AI may be enabling—an engineer helps an operator programme a robot arm for a production line (Gorodenkoff/shutterstock.com)

Discussions about a European regulation on artificial intelligence have burgeoned since the European Commission published its proposal in April. In the aftermath, we wrote about potential threats to labour and employment rights. The text, of course, is not definitive and it is possible the final version will significantly diverge—there is indeed room for improvement.

Employers have called for a ‘laser-sharp’ definition, but such a definition would be prone to circumvention and unable to keep pace with technical developments. Instead, the draft describes AI by reference to some of its functions: prediction, optimisation, personalisation and resource allocation. Thanks to the pandemic-induced digital acceleration, almost everyone is now familiar with the promises and perils of AI-enabled tools adopted to perform decisional tasks in a large variety of domains, including such particularly sensitive contexts as workplaces.

The draft regulation mentions both ‘AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests’ and ‘AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behavior of persons in such relationships’. This largely encompasses the managerial functions entrusted to data-driven management models, often lumped together under the popular formula of ‘algorithmic bosses’.

The commission acknowledges that these ‘high-risk’ AI systems ‘pose significant risks to the health and safety or fundamental rights of persons’. In the draft, however, a ‘laissez-faire’ approach prevails, and such systems have merely to comply with a ‘set of horizontal mandatory requirements for trustworthy AI’. These include ‘appropriate data governance and management practices’, package-insert-like documentation proving compliance with current rules, transparency of procedures, human oversight and ‘an appropriate level of accuracy, robustness, and cybersecurity’. The practice, however, is mainly based on self-certification via the ex-ante ‘conformity assessment procedures’ conducted by providers themselves or, in a few cases, delivered through standard-setting bodies.

Ceiling rather than floor

We had expressed the concern that the regulation could end up being a ceiling for labour protection rather than a floor. Its liberalising legal basis is article 114 of the Treaty on the Functioning of the European Union, on harmonisation in the internal market. This could be used to trump existing national regulations providing for involvement of the social partners before introduction of any technological tool apt to monitor workers’ performance. This is all the more so when AI applications are embedded in ordinary tools already used at the workplace to protect business assets, assess working performance and track productivity—or flag deviant behaviours.

Several member states have devised a dedicated model for surveillance technology in employment, whereby worker representatives or public bodies must be involved and, in some cases, may exercise veto powers. While most ordinary monitoring devices must often pass a codetermination phase before being introduced in professional environments, the model envisaged by the AI regulation risks displacing all these procedural protections. They could be interpreted as exorbitant and disproportionate relative to the safeguards provided, and thus as hampering the free provision of AI-related services the instrument aims to promote. Hence, more protective domestic laws risk being watered down, if read as incompatible with the harmonisation aims of the regulation.

The regulation should specifically provide that it is without prejudice to any existing or future labour and employment protection aimed at governing the introduction and use of any AI-enabled tool in European workplaces. This would avoid the regulation being used to lower or abrogate labour standards, harming privacy, freedom of expression, human dignity and equality.

More specific rules

Such a provision would be consistent with the commission’s claim that its proposal ‘is without prejudice and complements the General Data Protection Regulation’. A paramount provision of the GDPR, article 88, allows member states, ‘by law or collective agreements’, to ‘provide for more specific rules to ensure the protection of the rights and freedoms in respect of the processing of employees’ personal data in the employment context’.

The GDPR refers ‘in particular’ to ‘the purposes of the recruitment, the performance of the contract of employment, including discharge of obligations laid down by law or by collective agreements, management, planning and organisation of work, equality and diversity in the workplace, health and safety at work’, as well as ‘the protection of employer’s or customer’s property and for the purposes of the exercise and enjoyment, on an individual or collective basis, of rights and benefits related to employment, and for the purpose of the termination of the employment relationship’.

Crucially, according to article 88 these rules ‘shall include suitable and specific measures to safeguard the data subject’s human dignity, legitimate interests and fundamental rights, with particular regard to the transparency of processing, the transfer of personal data within a group of undertakings, or a group of enterprises engaged in a joint economic activity and monitoring systems at the work place’.

This aims to allow responsive solutions to the emergence of new instruments and practices that may significantly affect workers. The reference to ‘law or collective agreements’, moreover, provides an opportunity to bring worker representatives to the table.

National data-protection authorities have read this in conjunction with article 5 of the GDPR, which establishes its principles of ‘lawfulness, fairness and transparency’ along with other guiding tenets. In particular, compliance with domestic provisions on employee monitoring and privacy regulation has been considered a precondition for fulfilling the principle of lawful processing, thus strengthening integration between the GDPR and national rules.

Explicit provision

The purposes for which article 88 encourages member states to adopt specific labour and employment protection correspond to, and even go beyond, the potential use of tools envisaged by the AI regulation. Therefore, interpreting the draft as preventing national measures providing specific labour and employment safeguards would be incompatible with article 88 of the GDPR, which explicitly allows for such measures. And if that were the case, the AI regulation would not operate ‘without prejudice’ to the GDPR, as the commission claims.

To foster legal certainty, for the benefit of providers and users, an explicit provision excluding any national labour and employment regulation being ‘dismembered’ under the AI regulation would be opportune. Moreover, AI-enabled managerial tools pose severe risks to workers’ rights, well beyond the realm of privacy. Therefore, consistency must be ensured with the EU Charter of Fundamental Rights and secondary union legislation on consumer protection, non-discrimination and equality, health and safety.

As citizens, trade unions, data-protection authorities and litigators gradually master the ability to utilise the GDPR strategically to tame algorithmic bosses, it is crucial to explore the full potential of the rights it confers. Its role would be severely undermined should the proposed AI regulation become operative as a prevailing EU secondary-law instrument. On the contrary, an AI act must be seen as one piece of a complex, multidimensional jigsaw, whose twofold goal is to uphold data flow while guaranteeing that European citizens’ and workers’ rights are fully respected.

Valerio De Stefano is the BOFZAP professor of labour law at KU Leuven and co-author of New Trade Union Strategies for New Forms of Employment (ETUC, 2019).

Antonio Aloisi is Marie Skłodowska-Curie fellow and assistant professor of European and comparative labour law at IE Law School, IE University, Madrid.

Republished From: https://socialeurope.eu/artificial-intelligence-and-workers-rights
