CONCLUSION
In a decision issued in February 2022 and published the same week we were finalizing this Report, the Hungarian DPA sanctioned a bank for unlawfully processing personal data resulting from voice recordings through an AI system that promised emotion detection and measurement for customers calling the bank, and prioritization of callbacks for those cataloged as the most upset and impatient customers.154 The DPA found multiple breaches of the GDPR: the principles of lawfulness, transparency and purpose limitation; notice obligations; the right to object; controller accountability obligations; and data protection by design and by default. The case resulted in a fine of over 650,000 EUR and an order to bring the processing of personal data into compliance within 60 days.

The DPA did not pursue an assessment under Article 22 GDPR, since it concluded early in the decision that “no direct decision-making is made” using the AI system. The outcome of the processing at issue merely served as a basis for further actions by the bank or its employees. However, this did not prevent the DPA from finding that the processing significantly breached the GDPR.

This case, involving a truly novel proposition of automated processing of personal data resulting in emotion recognition and classification, confirms the main conclusion of our study, based on more than 70 decisions, opinions and guiding documents issued by Courts and DPAs: the provisions of the GDPR cover ADM processes and systems in a comprehensive manner, beyond the specific safeguards offered by Article 22 for processing of personal data that results in decisions based solely on automated processing and that has legal or similarly significant effects on individuals. This holds for AI systems involving the processing of personal data even when they do not amount to qualifying ADM: live Facial Recognition systems, algorithms that distribute gigs in the sharing economy, automated tax fraud flags, or automated assessments for issuing gun licenses, to give only a few examples.

Even though the threshold for automated processing to be classified as qualifying ADM is high, Courts and DPAs have found multiple instances where Article 22 GDPR is applicable. In doing so, they have been developing sophisticated criteria to assess the degree of human involvement in ADM and to establish whether the impact of solely automated decisions on individuals is significant enough to trigger the protection of Article 22. Without going into detail (see Sections 2.1. and 2.2.), we note elements such as the broader organizational context in which an automated decision is made, the existence of training for the staff involved in the ADM process, the influence on the choices and behavior of the individuals concerned, the categories of personal data on the basis of which the ADM is made and whether they draw on monitoring of behavior, and the effect on individuals’ opportunities to earn an income.

One of the most significant elements of the lawfulness of ADM, be it qualifying ADM or not, remains the existence of an appropriate lawful ground for processing. For instance, the use of live FR in schools was declared unlawful in several cases primarily because it did not have a valid lawful ground for processing in place: consent was considered to be the only ground that could justify the use of this technology to process personal data of students, and consent was not considered to be freely given in any of the analyzed cases relating to students and schools.
By contrast, relying on live FR to ensure safety in a football stadium was considered lawful by a DPA even though it was based not on consent but on substantial public interest, and provided that a set of safeguards was also ensured. In other cases, the mere fact that consent was not sufficiently informed made the qualifying ADM unlawful (see Case 5).