
Conclusion

In a decision issued in February 2022 and published the same week we were finalizing this Report, the Hungarian DPA sanctioned a bank for unlawfully processing personal data from voice recordings through an AI system that promised to detect and measure the emotions of customers calling the bank and to prioritize callbacks for those cataloged as the most upset and impatient.154

The DPA found multiple breaches of the GDPR: the principles of lawfulness, transparency and purpose limitation; notice obligations; the right to object; controller accountability obligations; and data protection by design and by default. The case resulted in a fine of over EUR 650,000 and an order to bring the processing of personal data into compliance within 60 days. The DPA did not pursue an assessment under Article 22 GDPR, since it concluded early in the decision that "no direct decision-making is made" using the AI system: its output merely served as a basis for further action by the bank or its employees. This did not, however, prevent the DPA from finding that the processing significantly breached the GDPR.
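To make the Article 22 distinction concrete, below is a minimal, hypothetical sketch of a call-prioritization flow of the kind described above. All names, types and the scoring scale are our own illustrative assumptions, not the bank's actual system; the point is that the model's output only ranks callbacks, while a human employee takes the actual decision.

```python
# Hypothetical sketch of an emotion-based callback queue. The scoring
# scale and field names are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class CallRecord:
    caller_id: str
    emotion_score: float  # assumed model output: 0.0 (calm) to 1.0 (very upset)


def rank_callbacks(calls: list[CallRecord]) -> list[CallRecord]:
    """Order callers from most to least upset.

    The function only produces a priority list; a human agent decides
    whether and how to call back. That human step is why the DPA found
    "no direct decision-making" within the meaning of Article 22 GDPR.
    """
    return sorted(calls, key=lambda call: call.emotion_score, reverse=True)


if __name__ == "__main__":
    queue = [CallRecord("A", 0.92), CallRecord("B", 0.31), CallRecord("C", 0.77)]
    for call in rank_callbacks(queue):
        print(call.caller_id, call.emotion_score)
```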


This case, involving a truly novel form of automated processing of personal data resulting in emotion recognition and classification, confirms the main conclusion of our study, based on more than 70 decisions, opinions and guiding documents issued by Courts and DPAs: the provisions of the GDPR cover ADM processes and systems comprehensively, beyond the specific safeguards Article 22 offers for processing of personal data that results in decisions based solely on automated processing and that have legal or similarly significant effects on individuals. This holds for AI systems that process personal data even when they do not amount to qualifying ADM: live facial recognition systems, algorithms that distribute gigs in the sharing economy, automated tax fraud flags or automated assessments for issuing gun licenses, to give only a few examples.

Even if the threshold for automated processing to be classified as qualifying ADM is high, Courts and DPAs have found multiple instances where Article 22 GDPR applies. In doing so, they have developed sophisticated criteria to assess the degree of human involvement in ADM and to establish whether the impact of solely automated decisions on individuals is significant enough to trigger the protection of Article 22. Without going into detail (see Sections 2.1 and 2.2), these criteria include the broader organizational context in which an automated decision is made, whether the staff involved in the ADM process receive training, whether the processing influences the choices and behavior of the individuals concerned, the categories of personal data on which the ADM is based and whether they draw on monitoring of behavior, and whether the decision affects individuals' income opportunities.
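Purely as an illustration, the criteria above can be read as a screening checklist. The factor names and the threshold in the sketch below are our own assumptions and not a legal test; Courts and DPAs weigh these elements case by case.

```python
# Illustrative checklist of factors Courts and DPAs have considered when
# assessing whether processing may amount to qualifying ADM. Factor names
# and the two-factor threshold are assumptions, not a legal standard.
QUALIFYING_ADM_FACTORS = {
    "no_meaningful_human_review": "staff lack training or authority to deviate from the output",
    "influences_choices_or_behavior": "the output steers what the individual does next",
    "draws_on_behavioral_monitoring": "input data comes from monitoring the person's behavior",
    "affects_income_opportunities": "the outcome touches the person's ability to earn income",
}


def flag_for_article_22_review(observed: set[str]) -> bool:
    """Rough screening heuristic: flag for closer Article 22 analysis
    when several factors are present simultaneously."""
    unknown = observed - QUALIFYING_ADM_FACTORS.keys()
    if unknown:
        raise ValueError(f"unknown factors: {unknown}")
    return len(observed) >= 2


# Example: a gig-distribution algorithm with no meaningful human review
print(flag_for_article_22_review(
    {"no_meaningful_human_review", "affects_income_opportunities"}
))  # True
```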

One of the most significant elements of the lawfulness of ADM, whether qualifying ADM or not, remains the existence of an appropriate lawful ground for processing. For instance, the use of live FR in schools was declared unlawful in several cases primarily because no valid lawful ground was in place: consent was considered the only ground that could justify using this technology to process students' personal data, and it was not considered freely given in any of the school cases analyzed. By contrast, relying on live FR to ensure safety in a football stadium was considered lawful by a DPA even though it was based not on consent but on substantial public interest, provided that a set of safeguards was also ensured. In other cases, the mere fact that consent was not sufficiently informed made the qualifying ADM unlawful (see Case 5).

Transparency obligations are invoked and applied very often in ADM cases. Generally, enforcers distinguish between the general transparency obligations that apply to all processing of personal data, including the processing underlying profiling and ADM not covered by Article 22, and the specific transparency obligations under Articles 13(2)(f), 14(2)(g) and 15(1)(h) GDPR, which apply only to qualifying ADM subject to Article 22 and require a more complex set of information to be provided to data subjects ("meaningful information about the logic involved," "the significance and the envisaged consequences of such processing for the data subject"). In this respect, a point of divergence is emerging in practice, with some enforcers (mainly DPAs) starting to push for recognition of the right to obtain information about the logic involved not only in cases of qualifying ADM, but in any profiling or other ADM.
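As a sketch of what "meaningful information about the logic involved" could look like in practice, the snippet below generates a plain-language explanation from a simple linear scoring model. The model, feature names and wording are invented for illustration; the GDPR does not prescribe any particular format.

```python
# A minimal sketch of one way a controller might surface "meaningful
# information about the logic involved" to a data subject, assuming a
# simple linear scoring model. All features and weights are invented.
def explain_score(weights: dict[str, float], inputs: dict[str, float]) -> str:
    # Contribution of each feature to the final score
    contributions = {name: weights[name] * inputs[name] for name in weights}
    # Present the most influential factors first
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = ["Main factors behind your result:"]
    for feature, value in ranked:
        direction = "raised" if value > 0 else "lowered"
        lines.append(f"- '{feature}' {direction} your score by {abs(value):.2f} points")
    return "\n".join(lines)


print(explain_score(
    weights={"payment_history": 2.0, "account_age_years": 0.5, "missed_payments": -3.0},
    inputs={"payment_history": 0.9, "account_age_years": 4.0, "missed_payments": 1.0},
))
```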

It is clear that since 2018 Courts and DPAs have ramped up enforcement of data protection law in ever more complex cases involving ADM and similar technologies. On top of issuing fines, DPAs are making full use of their powers to impose corrective measures, such as ordering the erasure of personal data or the halting of certain processing activities.

A particular need emerging from the analysis of these cases is a concerted effort to better understand how data protection law applies to ADM and to technologies such as AI, ML and FR that rely on or result in ("process") personal data. This would ensure that new legislative intervention, such as the proposed AI Act, the Platform Workers' Directive and the Consumer Credit Directive, does not inflate legal uncertainty but occurs only where there are clear lacunae.
