HHS ILSA Law Journal Issue I - 2024


CONTENTS

EDITOR-IN-CHIEF’S NOTE
THE GUEST EDITORS
ACKNOWLEDGEMENTS
Human Rights & Technology: Unpaid human labour in Artificial Intelligence, by Alexandra Albei
The Enforceability of Data Protection Rules in the EU, by Orla Harris
Adjusting Policy and Legislature in Mass Surveillance and Intelligence in Compliance with the European Convention on Human Rights
Children’s Rights in a Digital Environment

EDITOR-IN-CHIEF’S NOTE

Dear readers,

On behalf of the Editorial Board, it is my distinct pleasure to officially present to you the first issue of the HHS ILSA Law Journal for 2024.

First launched in 2016 under the auspices of the Hague Chapter of the International Law Students Association (HHS ILSA), the HHS ILSA Law Journal invites students and alumni of the International and European Law Programme at The Hague University of Applied Sciences to respond to our biannual Call for Submissions. We aim to promote scholarly reflection on current and emerging topics of International and European law. By encouraging the critical study of contemporary developments, the Journal strives to give voice to an array of perspectives in identifying impending challenges, as well as offering possible solutions to them.

This issue of the Journal focuses on the topic of Law and Technology: Addressing Challenges, Regulation and Enforcement. With rapid technological development and advancement, the current understanding and enforcement of the law is changing in step. While emerging technologies aid legal processes and their efficiency, their proliferation carries inherent risks that necessitate regulatory responses. These technological developments raise an array of issues, such as accountability, liability, and limitations. It is on these shifts that we wish to reflect by publishing comprehensive articles which discuss the successes achieved, the challenges encountered, and the regulatory frameworks imposed on technology in light of its implications for the law. We are pleased to present a selection of contributions that illustrates the diversity and pertinence of the issues explored in this publication. We hope that you enjoy this issue and find its contents intriguing, stimulating, and thought-provoking.

On behalf of the editorial team, I wish you a pleasant read!

Ms Vedika Sajnani

2023-2024 Editor-in-Chief of the HHS ILSA Law Journal


THE GUEST EDITORS

Ms Kanan Dhru

Ms Kanan Dhru is Senior Lecturer in Legal Technology at the European and International Law department at THUAS. Her work includes coordinating the courses on Law, Artificial Intelligence and New Technologies, as well as Cyber Security. She is part of the Expert Group on Artificial Intelligence and also represents the university on projects such as AI4Intelligence and AI for public safety. Previously, she worked at The Hague Institute for Innovation of Law on strategic partnerships to scale justice innovations worldwide.

A law graduate from the London School of Economics and winner of multiple awards for her work in the area of legal innovation, Kanan has been developing and researching innovative solutions in the legal domain since 2009. She has been a practicing lawyer, the founder of a think-tank on legal reforms called the Research Foundation for Governance, and the creator of a one-of-its-kind comic book series simplifying law for kids called Lawtoons. She has been a member of the Steering Committee of the Asia-Pacific Legal Innovation and Technology Association and has served on the World Economic Forum’s Global Agenda Council on India.

Ms Leyla Gayibova

Ms Leyla Gayibova is a Lecturer in Law and Technology in the International and European Law Programme at The Hague University of Applied Sciences and a Thesis Supervisor in the Politics, Psychology, Law, and Economics Programme at the University of Amsterdam. Ms Gayibova holds an LL.B. in International and European Law from the University of Sheffield and an LL.M. in International Law (specialisation: human rights) from the University of Amsterdam. At THUAS, she teaches in two interdisciplinary minors: law and technology, and cybersecurity. Her research interests lie in AI regulation and data protection law, with a particular focus on the legal, ethical, and societal implications of the use of automated decision-making systems in the justice sector, such as the criminal justice system and asylum applications.


Mr Bartosz Krysiak

Bartosz Krysiak currently works as a lecturer in legal skills and private law in the International and European Law Programme at The Hague University of Applied Sciences. He holds a Master of Laws from Jagiellonian University in Cracow and a Commercial and Company Law LL.M. from Erasmus University in Rotterdam. Beyond commercial law, his professional interests include technology law, AI ethics, and topics pertaining to formal logic and argument schemes.


ACKNOWLEDGEMENTS

The ILSA Law Journal would first like to thank the authors who shared their outstanding contributions to this issue. We are incredibly grateful for the unwavering trust, patience, and enthusiasm they showed towards the realization of this publication.

We would also like to take this opportunity to express our appreciation to the Guest Editors: Mr Bartosz Krysiak, Ms Kanan Dhru and Ms Leyla Gayibova. We are deeply thankful for the unparalleled support and guidance they have provided us in conducting the selection and editorial process.

The Journal would also like to thank the 2023-2024 ILSA Management Board for its continuous support and encouragement. We would like to extend our gratitude to President and Treasurer Mr Tolga Doğan, Vice-President and Head of Social Events Mx Paolo Quattrone, Head of Main Events Mr Áron Moravecz, Head of Marketing Ms Theodora Mrejeru, and Editor-in-Chief of the HHS ILSA Law Journal Ms Vedika Sajnani.

Finally, we would like to thank the editorial team for their diligence and determination. The Journal would like to express its most sincere appreciation for the participation of its members including Secretary Ms Eliane Solis Vazquez, Managing Editor Ms Aurélie Lévesque and Editors Ms Alessandra Cao, Ms Emily Warchala, Ms Grace O'Halloran, Mr Illia Gats, Ms Insaf Arredouani, Ms Jahvel Jackson, Ms Maja Balsai, Ms Natalia Malecka, Ms Victoria Peña Morante, Ms Wiktoria Sumpf, and Mr Zoltán Miholics.

The selection process was solely conducted by the Guest Editors so as to avoid any bias and to ensure that the selection was based on merit.


Human Rights & Technology: Unpaid human labour in Artificial Intelligence

Abstract

The rapid expansion of Artificial Intelligence underscores the vital role played by human gig economy workers, particularly content moderators, who shape social media platforms by generating and labelling the data sets used for Machine Learning algorithms. However, major digital work platforms, including Amazon Mechanical Turk and Meta, fail to meet basic standards of fair work, with workers enduring unfair conditions and a substantial portion of their time being labelled as “unpaid”. The ethical dimensions of AI extend globally, with outsourced workers facing language barriers and precarious working conditions. The industry’s reliance on low-wage gig economy workers raises concerns about labour exploitation, a theme explored through the legal challenges faced by Sama, an outsourcing company whose services are used by platforms such as Facebook and Google, and which is currently involved in a lawsuit over the precarious working conditions to which its employees in Kenya have been subjected.

Looking forward, the focus should shift towards eradicating labour exploitation in the AI industry. Global regulations are essential to curb exploitative practices, particularly in systems dependent on the Global South. This note highlights the importance of a balanced approach, prioritizing the respectful treatment of workers over corporate profit, to ensure ethical practices in the evolving reality of AI. The focus remains on achieving sustainable development while acknowledging the heavy toll borne by the human workers who shape the AI landscape.

* LL.B. Candidate, International and European Law Programme, The Hague University of Applied Sciences.


I. Introduction

Artificial Intelligence (AI) is currently experiencing a rapid expansion.1 In the gig economy, individuals engage in short-term or temporary work arrangements facilitated by online platforms, which connect them with customers or clients.2 Many AI companies increasingly use digital labour platforms to enlist the services of human gig workers, who are involved in critical yet often overlooked tasks such as generating or labelling the extensive datasets crucial for the effective functioning of AI systems.3 The development of AI platforms involves leveraging human behavioural patterns to construct precise algorithms.4 Additionally, humans must familiarize themselves with AI operational procedures and the construction of its datasets.5

Roles in technology, such as software development, are evolving to ensure developers comprehend the datasets that enhance AI intelligence.6 For instance, content moderators play a pivotal role in identifying and flagging content considered inappropriate for specific platforms, making them essential to the functioning of social media platforms.7 Moreover, these acts of flagging and censoring serve as training data for automated systems designed to detect and address issues like hate speech, fake news, violence and other policy violations.8 This work is often carried out as part of initiatives to enhance reliability and reduce bias in AI.9

However, despite their crucial role, content moderators frequently endure low wages when employed by tech giants and engage in trauma-inducing tasks while under close surveillance.10 That is the case of content moderators in Nairobi, Kenya, who have been

1 Billy Perrigo, “Gig Workers Behind AI Face ‘Unfair Working Conditions’ Oxford Report Finds” (Time, 20 July 2023) < https://time.com/6296196/ai-data-gig-workers/ > accessed 9 February 2024.

2 Aaron Raj “Will AI put an end to the gig economy?” (TECHWIRE ASIA, 1 December 2023) < https://techwireasia.com/12/2023/will-ai-put-an-end-to-the-gig-economy/ > accessed 13 April 2024.

3 ibid.

4 Dewayne Hart “Unveiling AI’s Secret Impact On Human Labor And Intelligence” (Forbes, 1 September 2023) < https://www.forbes.com/sites/forbestechcouncil/2023/09/01/unveiling-ais-secret-impact-on-human-labor-and-intelligence/?sh=61e47ade4a74> accessed 13 April 2024.

5 ibid.

6 ibid.

7 Adrienne Williams, Milagros Miceli and Timnit Gebru, ‘The Exploited Labor Behind Artificial Intelligence’ (Noēma, 13 October 2022) < https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/ > accessed 9 February 2024.

8 ibid.

9 Billy Perrigo, “Gig Workers Behind AI Face ‘Unfair Working Conditions’ Oxford Report Finds” (Time, 20 July 2023) < https://time.com/6296196/ai-data-gig-workers/ > accessed 9 February 2024.

10 Billy Perrigo ‘Inside Facebook’s African Sweatshop’ (Time, 17 February 2022) <https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/ > accessed 9 February 2024.


employed by Sama, a California-based outsourcing company that brands itself as “ethical AI”.11 Approximately 200 young individuals from all over Africa, working as Meta content moderators, are tasked with watching videos depicting acts of violence, including murders, rapes, suicides and instances of child sexual abuse.12 Their job is to ensure this kind of content is intercepted before reaching the average user.13

II. Worker Rights are Human Rights

A. Fairwork Research

A report from the Oxford Internet Institute at the University of Oxford, which evaluated 15 digital work platforms including Amazon Mechanical Turk, Scale AI and Appen, revealed that none of them has achieved “basic standards of fair work”.14 Amazon Mechanical Turk (MTurk) is a crowdsourcing marketplace designed to facilitate the outsourcing of tasks and jobs to a distributed workforce. These tasks can range from straightforward data validation and research to more subjective assignments such as survey participation and content moderation, all of which can be completed virtually.15

The Fairwork report notes that the workers involved in the design, construction and testing of these technological solutions continue to confront significant challenges and endure unfair working conditions.16 The survey, conducted by researchers from Oxford’s Fairwork team, was based on responses from 752 workers in 94 countries alongside formal interviews with platform managers.17 Platforms were evaluated on a score out of ten, built from five principles worth two points each: fair pay, fair conditions, fair contracts, fair management and fair representation.18 Notably, four platforms (Amazon Mechanical Turk, Workana, Microworkers and Freelancer) received a score of zero points and, out of all of the

11 ibid.

12 ibid.

13 ibid.

14 Oxford Internet Institute ‘Fairwork Cloudwork Ratings 2023: Work in the Planetary Labour Market’ (Fairwork, 2023) < https://fair.work/en/fw/publications/fairwork-cloudwork-ratings-2023-work-in-the-planetary-labour-market/ > accessed 9 February 2024.

15 Amazon “Amazon Mechanical Turk: Access a global, on-demand, 24x7 workforce” < https://www.mturk.com/ > accessed 13 April 2024.

16 Oxford Internet Institute ‘Fairwork Cloudwork Ratings 2023: Work in the Planetary Labour Market’ (Fairwork, 2023) < https://fair.work/en/fw/publications/fairwork-cloudwork-ratings-2023-work-in-the-planetary-labour-market/ > accessed 9 February 2024.

17 ibid.

18 ibid.


platforms assessed, none surpassed a score of five.19 The researchers emphasized that even a score of ten out of ten would merely indicate compliance with the “bare minimum”.20 Furthermore, the report brings to light a significant revelation: workers allocate a substantial portion of their time to what the authors define as “unpaid labour”.21 Specifically, the study identified that across the five platforms most commonly utilized by AI companies for sourcing gig workers, 250 workers spend 26.8 percent of their time on tasks labelled as “unpaid”, including job searches, unpaid assessments, and job applications.22 Taking this unpaid time into account, their average hourly wage was calculated at USD 2.15.23 Although the workers in question come from 51 countries with varying costs of living, the report underscores the ongoing challenge of inadequate compensation on these platforms.24
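To make the unpaid-time figure concrete, the effective hourly wage can be expressed as the nominal rate earned on paid tasks, scaled down by the share of working time that goes uncompensated. The nominal rate below is an illustrative assumption chosen to be consistent with the averages reported above; it is not a figure taken from the study:

\[ w_{\text{effective}} = w_{\text{nominal}} \times (1 - s_{\text{unpaid}}) \approx \text{USD } 2.94 \times (1 - 0.268) \approx \text{USD } 2.15 \text{ per hour} \]

On these assumptions, more than a quarter of every working hour, spent on job searches, unpaid assessments and applications, earns nothing at all.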

B. AI’s Human Cost

The emergence of AI systems introduces numerous ethical dilemmas, signalling a landscape where questions often outnumber answers. AI ethics is a particularly pressing issue given the swift pace of technological progress. Data labelling tasks occur far from the Silicon Valley headquarters of the multinational corporations that champion an “AI first” approach.25 These tasks are delegated to workers across the globe, from Venezuela, where data labelling for image recognition in self-driving vehicles takes place, to Bulgaria, where Syrian refugees contribute to facial recognition systems by labelling selfies based on race, gender and age categories.26 These responsibilities are commonly outsourced to workers in countries like India, Kenya, the Philippines or Mexico.27 Many of these workers may not be fluent in

19 ibid.

20 ibid.

21 ibid.

22 ibid.

23 ibid.

24 ibid.

25 ibid.

26 Angela Chen ‘Desperate Venezuelans are making money by training AI for self-driving cars’ (MIT Technology Review, 22 August 2019) < https://www.technologyreview.com/2019/08/22/65375/venezuela-crisis-platform-work-trains-self-driving-car-ai-data/ > accessed 15 February 2024; Milagros Miceli, Martin Schuessler, Tianling Yang ‘Between Subjectivity and Imposition: Power Dynamics in Data Annotation for Computer Vision’ (2020) Proc. ACM Hum-Comput. Interact, Article 115, October 2020 <https://dl.acm.org/doi/pdf/10.1145/3415186 > accessed 15 February 2024.

27 Mark Graham (ed) ‘Digital Economies at Global Margins’ (The MIT Press, 2019) <https://mitpress.mit.edu/9780262535892/ > accessed 15 February 2024.


English but receive instructions in English and face the risk of termination or being banned from crowd work platforms if they fail to fully comprehend the guidelines.28

The AI industry heavily relies on low-wage workers, who find themselves in precarious situations.29 In the absence of unionization, they face challenges in resisting unethical practices or advocating for improved working conditions due to the risk of losing their jobs.30 Companies deliberately recruit individuals from impoverished communities, including refugees, incarcerated individuals and others with limited employment opportunities, often engaging them through third-party contractors rather than hiring them directly as full-time employees.31 While it is commendable for employers to consider hiring from vulnerable groups, it is unacceptable to do so in a predatory manner without adequate protections in place.32

In addition to enduring a distressing work environment lacking adequate mental health support, these employees face scrutiny and consequences for straying from their assigned repetitive tasks.33 For instance, content moderators working for Meta in Nairobi, Kenya under the Sama contract undergo monitoring via surveillance software.34 Sama is a company based in San Francisco that employs workers in Kenya, Uganda and India for data labelling tasks on behalf of Silicon Valley clients such as Google, Meta and Microsoft, positioning itself as an “ethical AI” firm.35 It asserts that it has played a role in lifting over 50,000 individuals out of poverty.36

Despite these claims on its official website, a TIME investigation found that, notwithstanding their significant role in Facebook’s operations, workers at the Nairobi office are among the platform’s lowest-paid globally, with

28 Milagros Miceli, Julian Posada ‘The Data-Production Dispositif’ (Cornell University, 24 May 2022) <https://arxiv.org/abs/2205.11963 > accessed 15 February 2024.

29 Adrienne Williams, Milagros Miceli and Timnit Gebru, ‘The Exploited Labor Behind Artificial Intelligence’ (Noēma, 13 October 2022) < https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/ > accessed 15 February 2024.

30 ibid.

31 ibid.

32 ibid.

33 ibid.

34 ibid.

35 Billy Perrigo ‘OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic’ (Time, 18 January 2023) < https://time.com/6247678/openai-chatgpt-kenya-workers/ > accessed 15 February 2024.

36 Heather Gadonniex ‘From Dreams to Reality: Our Journey to Becoming a Certified B Corporation’ (Sama) < https://www.sama.com/blog/we-are-a-b-corp > accessed 15 February 2024.


some earning as little as USD 1.50 per hour.37 Sama, alongside Meta, is currently involved in a lawsuit in Kenya for alleged human trafficking and union-busting, as well as multiple violations of the Kenyan Constitution.38 Testimonies from Sama employees shed light on a workplace culture marked by mental distress, intimidation and alleged obstruction of the right to unionize.39 These revelations prompt significant concern about whether Facebook is exploiting the individuals responsible for safeguarding its platform’s safety.40 The TIME report draws upon interviews with over a dozen present and past Sama employees, along with an extensive review of hundreds of pages of documents, including company emails, pay slips and contracts.41 The majority of the employees chose to remain anonymous, citing concerns about potential legal repercussions for revealing the nature of their work or Facebook’s role in it.42

Monitoring ensures that workers’ decisions regarding violent content in videos are made within a strict 50-second timeframe, irrespective of the video’s length or disturbing nature.43 Some content moderators express concern that failing to meet this timeframe could lead to termination after repeated violations.44 This emphasis on speed and efficiency may contribute to the persistence of videos containing hate speech and incitement to violence on Facebook’s platform.45
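To put this pace in perspective, simple arithmetic (an illustration, not a figure from the reporting) shows what the ceiling implies over a working day:

\[ \frac{3600 \text{ s/hour}}{50 \text{ s/decision}} = 72 \text{ decisions per hour}, \qquad 72 \times 8 = 576 \text{ decisions per eight-hour shift} \]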

III. The Path Forward

While attention has predominantly been concentrated on ‘debiasing’ data, promoting transparency, and ensuring model fairness, this note argues that eradicating labour exploitation in the AI industry should be the central concern.46 If corporations were prevented from exploiting labour, it would impede the rapid proliferation47 of harmful technologies.48 To achieve this,

37 Billy Perrigo ‘Inside Facebook’s African Sweatshop’ (Time, 17 February 2022) <https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/ > accessed 13 April 2024.

38 ibid.

39 ibid.

40 ibid.

41 ibid.

42 ibid.

43 Billy Perrigo ‘Inside Facebook’s African Sweatshop’ (Time, 17 February 2022) <https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/ > accessed 15 February 2024.

44 ibid.

45 ibid.

46 Adrienne Williams, Milagros Miceli and Timnit Gebru, ‘The Exploited Labor Behind Artificial Intelligence’ (Noēma, 13 October 2022) < https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/ > accessed 15 February 2024.

47 ibid.


comprehensive global regulations are necessary to prevent exploitation in global systems.49 Companies branding themselves as ‘ethical’ cannot maintain such a status if they perpetuate exploitative practices,50 especially by relying on labour from the Global South.51

Shame can be effective in specific situations, and for businesses, public disapproval expressed as ‘shame on you’ can occasionally translate into revenue losses, serving as a catalyst for increased accountability.52 This was the case for Spotify, whose market value dropped by approximately USD 2.1 billion within a span of three days in 2022.53 The decline followed folk rocker Neil Young’s decision to remove his songs from the audio-streaming platform in protest against Joe Rogan’s podcast, which had been criticised for spreading misinformation.54 Following Young’s decision, hashtags like #CancelSpotify, #DeleteSpotify and #ByeSpotify gained traction on social media platforms.55

Boycotting works when done properly; if people knew about the intensive human labour behind Artificial Intelligence and the brutal tasks gig economy workers are subjected to for the profits of companies such as Meta or Amazon, such public pressure could compel tech giants to revise the working conditions of their human gig workers. Achieving sustainable development requires shifting away from prioritizing corporate profit and rapid development, placing emphasis instead on the respectful and humane treatment of workers.56

The first step is to demand greater transparency from tech companies that promote AI as solely machine-driven.57 We need truthful advertising regarding the involvement of humans in the enhancement of AI systems, whether in curating news or managing social media content.58 Understanding where human labour contributes allows us to appreciate its value

48 ibid.

49 Nana Mgbechikwere Nwachukwu, Jennafer Shae Roberts and Laura N Montoya, ‘The Glamorisation of Unpaid Labour: AI and its Influences’ (5th Deep Learning Indaba Conference, 16 September 2023) DLI 2023 < https://arxiv.org/pdf/2308.02399.pdf > accessed 15 February 2024.

50 ibid.

51 ibid.

52 Todd Spangler, ‘Spotify Lost More Than $2 Billion in Market Value After Neil Young Pulled His Music Over Joe Rogan’s Podcast’ (Variety, 29 January 2022) < https://variety.com/2022/digital/news/spotify-2-billion-market-cap-neil-young-joe-rogan-1235166798/ > accessed 20 February 2024.

53 ibid.

54 ibid.

55 ibid.

56 Nana Mgbechikwere Nwachukwu, Jennafer Shae Roberts and Laura N Montoya, ‘The Glamorisation of Unpaid Labour: AI and its Influences’ (5th Deep Learning Indaba Conference, 16 September 2023) DLI 2023 < https://arxiv.org/pdf/2308.02399.pdf > accessed 15 February 2024.

57 Mary L. Gray and Siddharth Suri, ‘The Humans Working Behind the AI Curtain’ (Harvard Business Review, 9 January 2017) < https://hbr.org/2017/01/the-humans-working-behind-the-ai-curtain > accessed 14 April 2024.

58 ibid.


and comprehend the training and support behind decision-making, particularly in matters of public interest.59

Additionally, the current issues facing workers in data collection and labelling must be addressed.60 Considering that the employees’ interaction with workplace AI aims to address its limitations rather than directly benefiting the employees themselves, there is a concern that AI may gain advantages from these interactions while the employees do not.61 This situation might contradict the AI Principles outlined by the Organisation for Economic Co-operation and Development (OECD), which emphasize that AI should primarily serve the people.62 It is crucial that workers involved in tasks like data collection and labelling are not only compensated fairly but also treated equitably.63 They should have a thorough understanding of their work and the implications of the data that they handle.64 The labour of individuals worldwide should not be obscured by the illusion of AI superiority.65 Just as we hold companies accountable for labour practices in the production of tangible goods, we need transparency in digital content creation, ensuring accountability to both consumers and workers.66

Through the regulation of companies and the promotion of responsible AI development, there is an opportunity to pave a path towards a sustainable future that benefits all.67 Governments worldwide are enacting fundamental policies to regulate AI and algorithmic systems.68 Numerous regulatory agencies have taken up this task, such as the United States

59 ibid.

60 Nana Mgbechikwere Nwachukwu, Jennafer Shae Roberts and Laura N Montoya, ‘The Glamorisation of Unpaid Labour: AI and its Influences’ (5th Deep Learning Indaba Conference, 16 September 2023) DLI 2023 < https://arxiv.org/pdf/2308.02399.pdf > accessed 13 April 2024.

61 H. James Wilson and Paul R. Daugherty, ‘Creating the Symbiotic AI Workforce of the Future’ (MITSloan, 21 October 2019) < https://sloanreview.mit.edu/article/creating-the-symbiotic-ai-workforce-of-the-future/ > accessed 13 April 2024.

62 OECD, ‘State of implementation of the OECD AI Principles: Insights from national AI policies’ (OECD Digital Economy Papers, 18 June 2021) < https://www.oecd.org/digital/state-of-implementation-of-the-oecd-ai-principles-1cd40c44-en.htm > accessed 13 April 2024.

63 Nana Mgbechikwere Nwachukwu, Jennafer Shae Roberts and Laura N Montoya, ‘The Glamorisation of Unpaid Labour: AI and its Influences’ (5th Deep Learning Indaba Conference, 16 September 2023) DLI 2023 < https://arxiv.org/pdf/2308.02399.pdf > accessed 13 April 2024.

64 ibid.

65 Mary L. Gray and Siddharth Suri, ‘The Humans Working Behind the AI Curtain’ (Harvard Business Review, 9 January 2017) < https://hbr.org/2017/01/the-humans-working-behind-the-ai-curtain > accessed 14 April 2024.

66 ibid.

67 Nana Mgbechikwere Nwachukwu, Jennafer Shae Roberts and Laura N Montoya, ‘The Glamorisation of Unpaid Labour: AI and its Influences’ (5th Deep Learning Indaba Conference, 16 September 2023) DLI 2023 < https://arxiv.org/pdf/2308.02399.pdf > accessed 13 April 2024.

68 Alex Engler, ‘The AI regulatory toolbox: How governments can discover algorithmic harms’ (Brookings, 9 October 2023) < https://www.brookings.edu/articles/the-ai-regulatory-toolbox-how-governments-can-discover-algorithmic-harms/?b=1 > accessed 13 April 2024.


Federal Trade Commission’s Office of Technology and the Consumer Financial Protection Bureau, as well as the European Centre for Algorithmic Transparency, which are actively involved in shaping AI regulation.69

As consumers, we have the right to know the processes involved in AI-driven content, just as we demand transparency in food labelling.70 As citizens, understanding the sources of our information is essential.71 Moreover, as human beings, we should always be aware when human effort is behind the products we consume, whether physical or digital.72

IV. Conclusion

The rapid expansion of Artificial Intelligence has brought to light the critical role played by human gig workers in driving its development and functioning. Human gig workers are instrumental in shaping the landscape of social media platforms, contributing to the training data for automated systems addressing issues such as hate speech and policy violations.

The ethical dimensions of AI extend beyond data labelling tasks, as the industry heavily relies on low-wage workers worldwide, often from vulnerable communities, who endure low pay, unpaid labour, and inadequate protections. The path forward requires a shift in focus towards eradicating labour exploitation in the AI industry. Addressing these issues requires a multi-faceted approach. Transparency from tech companies regarding the involvement of humans in AI development is crucial, allowing consumers to understand and appreciate the value of the human labour behind AI-driven content.

Furthermore, there is a need to prioritize workers’ rights and ensure fair compensation and treatment. This includes providing training and support for gig workers and fostering a collaborative environment where both humans and AI technologies thrive together.

Ultimately, achieving sustainable development in the AI industry requires shifting away from prioritizing corporate profit and rapid development towards prioritizing the respectful and humane treatment of workers. By regulating companies, promoting responsible AI development, and prioritizing workers’ rights, we can pave the way for a future where AI benefits everyone.

69 ibid.

70 Mary L. Gray and Siddharth Suri, ‘The Humans Working Behind the AI Curtain’ (Harvard Business Review, 9 January 2017) < https://hbr.org/2017/01/the-humans-working-behind-the-ai-curtain > accessed 14 April 2024.

71 ibid.

72 ibid.


The Enforceability of Data Protection Rules in the EU

Abstract

The enforcement of data protection rules in the European Union, particularly under the General Data Protection Regulation, faces significant challenges. This Note discusses the enforcement of the General Data Protection Regulation, involving key regulatory bodies such as the European Data Protection Board, the European Data Protection Supervisor, and national Data Protection Authorities. Through a short case study on the use of Clearview AI’s facial recognition technology by Europol and in the EU, the limitations of current data protection enforcement become apparent. Furthermore, civil society organisations continue to advocate for changes to enhance enforcement effectiveness, such as harmonising procedural rules and allocating greater resources to regulatory bodies. However, the proposed reforms face scrutiny for potential shortcomings in safeguarding complainants’ rights and expediting investigations. Urgent action is needed to address these deficiencies to reinforce the European Union’s commitment to protecting individuals’ privacy rights.

* LL.B. Candidate, International and European Law Programme, The Hague University of Applied Sciences.


I. Introduction

Data protection is a frequently discussed topic in the European Union (EU). Civil society organisations,1 privacy activists and (Big Tech) companies are all looking towards the EU’s ever-evolving regulatory framework, awaiting the next steps. The Union has observed a shift towards enhanced data protection rules, from the evolution of Directive 95/46/EC to the General Data Protection Regulation and the AI Act newly approved by the European Parliament. The question arises: who is responsible for the enforcement of these vastly different instruments? In fact, these responsibilities lie with a variety of EU and national enforcement bodies. Regarding data protection rules specifically, responsibility lies with the European and national supervisory authorities in charge of data protection, namely the European Data Protection Board (EDPB), the European Data Protection Supervisor (EDPS) and the national Data Protection Authorities (DPAs).

To anyone actively exploring the legal field of privacy and data protection, it becomes evident that a substantial amount of information is readily accessible on the matter. The volume of available resources alone presents a challenge in terms of staying up to date on the topic. This note therefore aims to provide the reader with perspectives on the continuously evolving and fast-paced domain of privacy and data protection. It commences by explaining a number of key enforcement issues the GDPR faces. This is followed by a short case study on Clearview AI and its impact in the EU (and, briefly, in the United States of America). The note closes with a look into the future of GDPR enforcement.

II. The GDPR and Enforcement

To start, this note will briefly address a significant piece of legislation at the forefront of this discussion, the strongest instrument in the EU’s legislative arsenal for protecting our fundamental right to data protection: the General Data Protection Regulation (GDPR).2 The 2016 GDPR (in force since 2018) is a Regulation that harmonises data privacy laws in the European Union. It sets out the definition of personal data and outlines how such data must be handled, as well as data subject rights and sanctions, amongst other things. Against this background, a topic of

1 Such as Future of Privacy Forum, Access Now, NOYB, EDRi, Privacy International, Bits of Freedom and many others.

2 Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (2016) OJ L119.


considerable research and debate is the enforceability of the GDPR. Ursula Pachl, Deputy Director General of the European Consumer Organisation, states that:

“As good as the GDPR is on paper, it has been hamstrung by weak enforcement when it comes to EU-wide infringements by big companies. (...) Weak and slow enforcement only suits Big Tech and other companies who make money from trampling on people’s right to personal data protection.”3

Civil society organisations agree that there is a lack of GDPR enforcement.4 One of the main reasons is that those in charge of supervising data protection legislation (the European and national DPAs) are overburdened with the ever-increasing number of complaints they receive, and are underfunded and understaffed.5 The number of cases they receive is therefore disproportionately high given their lack of resources. The open letter from EDPS Wojciech Wiewiórowski and EDPB Chair Andrea Jelinek to the European Parliament and Council reflects the dire situation these data protection enforcers face.6 The letter states that the authorities’ budgets are too low to allow the EDPB and EDPS to fulfil their tasks appropriately, as required by the GDPR.

Furthermore, at the European Digital Rights (EDRi) 2024 Privacy Camp in Brussels, the panellist from the EDPB acknowledged the lack of resources allocated to these data protection authorities. He was asked why their budget is not being paid attention to, given that the number of cases they receive each year is increasing. The panellist briefly responded: “that’s a budgetary issue, you should ask them” (“them” being the European Parliament and the Council, who approve the EDPB’s budget).7 This answer reveals the bleak truth of why legislation such as the GDPR lacks enforceability.

3 The European Consumer Organisation, ‘New Commission proposal falls short in boosting GDPR enforcement’ (BEUC, 4 July 2023) <https://www.beuc.eu/press-releases/new-commission-proposal-falls-short-boosting-gdpr-enforcement> accessed 17 March 2024.

4 European Digital Rights, ‘Civil society call and recommendations for concrete solutions to GDPR enforcement shortcomings’ (EDRi, 16 March 2022) <https://edri.org/our-work/civil-society-call-and-recommendations-for-concrete-solutions-to-gdpr-enforcement-shortcomings/> accessed 17 March 2024; Access Now, ‘Access Now raises the alarm over weak enforcement of the EU GDPR on the two-year anniversary’ (Access Now, 25 May 2020) <https://www.accessnow.org/press-release/alarm-over-weak-enforcement-of-gdpr-on-two-year-anniversary/> accessed 17 March 2024.

5 European Data Protection Board, ‘Overview on resources made available by Member States to the Data Protection Authorities and on enforcement actions by the Data Protection Authorities’ (EDPB Resource Report) (August 2021) <https://www.edpb.europa.eu/system/files/202108/edpb_report_2021_overviewsaressourcesandenforcement_v3_en_0.pdf> accessed 17 March 2024.

6 Wojciech Wiewiórowski and Andrea Jelinek, ‘Open letter on the EDPB budget proposal for 2023’ (EDPB and EDPS, 12 September 2022) <https://www.edpb.europa.eu/system/files/2022-09/letteronbudget_out20220068.pdf> accessed 17 March 2024.

7 Consolidated Version of the Treaty on the Functioning of the European Union [2012] OJ C326/26, Art 314.


III. Clearview AI and the European Data Protection Board

This section puts forward a short case study of Clearview AI’s use of facial recognition technology in the EU, illustrating that, in this particular case, the enforcement of the GDPR has been lacklustre.8 Clearview AI is a US facial recognition company, established in 2017, that develops facial recognition software. Clearview AI describes its practice as extracting publicly accessible images from the internet, allowing an individual to be identified from a representative photograph, and storing such data in a database.9 NGOs have described this practice as ‘scraping billions of publicly available images from the web’ and claim that these images are stored indefinitely in a database, from which Clearview AI sells data to its clients.10 Clearview AI’s clientele mainly comprises law enforcement agencies, the military, and other government agencies across the world.

In 2019, the application of Clearview AI was demonstrated at Europol, where its potential use in law enforcement was discussed.11 This was of particular interest to the law enforcement agencies present at the event, given that such agencies typically only have access to publicly accessible sources (such as social media images). In contrast, Clearview AI offers access to more than 40 billion images for law enforcement purposes. Europol stated that it would ‘categorise potential future use of Clearview AI as performing OSINT searches’.12

As the potential use of Clearview AI within Europol was of concern to the EDPS, the latter conducted an investigation into the matter and subsequently submitted an opinion to Europol.13 This opinion was submitted in accordance with the Europol Regulation, under which the EDPS is granted the powers necessary to ensure the enforcement of data protection rules within Europol.14 The investigation concerned Europol’s obligation to use sources of

8 The reason Clearview AI is used as an example in this note stems from the author’s attendance at EDRi’s 2024 Privacy Camp. A panel on Clearview AI’s use in the ongoing war in Ukraine prompted the author to conduct further research on the matter.

9 Clearview AI, ‘Company Overview’ (Clearview AI) < https://www.clearview.ai/> accessed 17 March 2024; Commission Nationale de l’Informatique et des Libertés (decision) SAN-2022-019 17 October 2022, p 2.

10 EDRi, ‘The ICO provisionally issues £17 million fine against facial recognition company Clearview AI’ (EDRi, 1 December 2021) <https://edri.org/our-work/the-ico-provisionally-issues-17-million-fine-againstfacial-recognition-company-clearview-ai/> accessed 14 April 2024.

11 European Data Protection Supervisor Opinion on the possibility to use Clearview AI and similar services at Europol (opinion) 2020-0372, p 2.

12 ibid.

13 ibid.

14 Regulation (EU) 2016/794 of the European Parliament and of the Council of 11 May 2016 on the European Union Agency for Law Enforcement Cooperation (Europol) and replacing and repealing Council Decisions 2009/371/JHA, 2009/934/JHA, 2009/935/JHA, 2009/936/JHA and 2009/968/JHA (2016) OJ L134/24, Art 43(2)(d).


information which is ‘publicly available, including the internet and public data’15 and was aimed at determining whether Clearview AI is a ‘publicly available source’ within the meaning of Article 17(2) of the Europol Regulation. In its opinion, the EDPS observed that Clearview AI is not available to members of the public, in particular journalists and civil society organizations, who therefore cannot access these databases and scrutinise their content. Instead, access to Clearview AI is only granted to specific individuals, such as active law enforcement agents as well as military and other government personnel, who must submit their title or rank and the name of the agency or department they belong to. The EDPS therefore concluded that Clearview AI is not a publicly available source, given that its services can only be accessed after providing certain law enforcement credentials.16 The EDPS also recommended that Europol refrain from ‘engaging’ or ‘promoting’ Clearview AI services, which sends a clear message to the privacy community and law enforcement agencies around the world: Clearview AI is not welcome in the EU for as long as its practices are in violation of EU regulations.

IV. Clearview AI: EU National Data Protection Authorities and the USA

Clearview AI has also been problematic outside of the EU’s institutions. For instance, EDRi, the largest European network of NGOs defending rights and freedoms online, took issue with Clearview AI, calling its conduct a ‘mockery of human rights’.17 EDRi and other prominent data protection and privacy watchdogs, such as None of Your Business (NOYB)18 and Privacy International,19 have filed multiple complaints against Clearview AI on behalf of complainants in accordance with Article 80 of the GDPR.20 Complaints were submitted to data protection regulators in France, Austria, Italy, Greece and the United Kingdom, which

15 ibid, Art 17(2).

16 European Data Protection Supervisor Opinion on the possibility to use Clearview AI and similar services at Europol (opinion) 2020-0372, p 5.

17 European Digital Rights, ‘About Clearview AI’s mockery of human rights, those fighting it, and the need for EU to intervene’ (EDRi, 6 April 2022) <https://edri.org/our-work/we-need-to-talk-about-clearview-ai/> accessed 17 March 2024.

18 None of Your Business, Case-No: C043 (complaint) <https://noyb.eu/sites/default/files/202105/Clearview%20AI%20-%20EN%20DE%20-%20noyb%20-%20redacted.pdf> accessed 17 March 2024.

19 Privacy International, ‘Privacy International and others file legal complaints across Europe against controversial facial recognition company Clearview AI’ (PI, 25 May 2021) <https://privacyinternational.org/press-release/4520/privacy-international-and-others-file-legal-complaints-across-europe-against> accessed 17 March 2024.

20 Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (2016) OJ L119, Art 80.


resulted in each national DPA imposing a fine on Clearview AI.21 Multiple complaints, and thereafter investigations, were thus launched into Clearview AI’s activities. The complaints related to difficulties encountered by the complainants in exercising their rights of access and erasure vis-à-vis Clearview AI (rights guaranteed under the GDPR).22 Notably, the French DPA, the Commission Nationale de l’Informatique et des Libertés (CNIL), investigated Clearview AI’s activities after receiving multiple complaints.23 Upon concluding its investigations in 2022, the CNIL imposed a EUR 20 million fine on Clearview AI and issued an injunction prohibiting Clearview AI from continuing to collect and process the personal data of data subjects on French territory, attaching a penalty of EUR 100,000 per day of non-compliance. Subsequently, the CNIL’s decision was published on Légifrance.24

The CNIL had started its investigation in late 2021, but Clearview AI failed to respond to information requests. Clearview AI neither respected the CNIL’s injunction on the processing of personal data on French territory nor paid the EUR 20 million fine. Clearview AI was given a period of two months to comply with the injunction and justify its compliance to the CNIL; however, no such justification was received.25 More than a year later, in April 2023, the CNIL issued a second fine of EUR 5.2 million for non-compliance with the 2022 decision,26 a sum consistent with the EUR 100,000 daily penalty accruing over 52 days. The public reaction is interesting to note here. Some reported that Clearview AI ‘ghosted’ the regulator.27 Others reported that Clearview AI claims not to conduct any business on French territory and that its activities would therefore not be subject to the GDPR.28 While it is true that the GDPR has extraterritorial applicability in certain cases,29 here the matter concerns data subjects on European Union territory and thus Clearview AI cannot ignore these decisions. DPAs across the Union have issued similar injunctions against Clearview AI, again with no sign of life.

21 Privacy International, ‘Get out of our face, Clearview!’ (PI, 25 May 2021) <https://privacyinternational.org/campaigns/get-out-our-face-clearview> accessed 14 April 2024.

22 Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (2016) OJ L119, Arts 15, 17.

23 Commission Nationale de l’Informatique et des Libertés (decision) SAN-2022-019 17 October 2022, p 2.

24 Commission Nationale de l’Informatique et des Libertés (Deliberation) SAN-2022-019 17 October 2022 <https://www.legifrance.gouv.fr/cnil/id/CNILTEXT000046444859?isSuggest=true> accessed 17 March 2024.

25 Commission Nationale de l’Informatique et des Libertés, ‘Facial Recognition: 20 million euros penalty against Clearview AI’ (CNIL, 20 October 2022) <https://www.cnil.fr/en/facial-recognition-20-million-euros-penalty-against-clearview-ai> accessed 17 March 2024.

26 Commission Nationale de l’Informatique et des Libertés (Deliberation) SAN-2022-019 17 October 2022 <https://www.legifrance.gouv.fr/cnil/id/CNILTEXT000046444859?isSuggest=true> accessed 17 March 2024.

27 Natasha Lomas, ‘France fines Clearview AI maximum possible for GDPR breaches’ (Tech Crunch, 20 October 2022) < https://techcrunch.com/2022/10/20/clearview-ai-fined-in-france/> accessed 17 March 2024.

28 Agence France-Presse, ‘France Punishes Clearview AI For Failing To Pay Fine’ (Barron’s, 10 May 2023) accessed 17 March 2024.

29 Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (2016) OJ L119, Art 3(2).


Clearview AI has also faced legal action on home soil, in the United States of America. For instance, in 2022 it faced a lawsuit in Illinois for violating the 2008 Illinois Biometric Information Privacy Act by collecting the facial biometric data of Illinois residents without their consent.30 This lawsuit was the first to focus on the harm that Clearview AI’s technology could inflict on ‘survivors of domestic violence and sexual assault, undocumented immigrants, communities of colour and members of other vulnerable communities’.31 These vulnerable groups are the data subjects who face the gravest consequences of a surveillance programme used by the military, law enforcement and government agencies.32 However, unlike in the EU, the citizens of Illinois prevailed against Clearview AI and can be assured that the company will only use their data when consent is given.

V. A Way Forward for Improving GDPR Enforcement

The significance of this enforcement issue cannot be overstated. Data protection authorities invest substantial time and resources, often spanning years, in investigating and deciding on cases. Take, for instance, the ongoing matter of Clearview AI, where authorities have scrutinised each aspect, issuing a series of decisions and fines in the process. Yet despite these concerted efforts, companies like Clearview AI seem to operate beyond the reach of European data protection authorities, thus evading EU accountability. Consider the CNIL, a prominent DPA within the Union, known for handling important cases.33 Even with its considerable enforcement capabilities, companies persist in sidestepping its actions and ghosting its injunctions.

This raises questions about the efficacy and enforcement of regulations such as the GDPR. How can companies disregard the decisions and penalties imposed upon them, especially when Clearview AI has responded to legal claims in the US through settlements? This irony underscores the urgency of addressing such evasive tactics and reinforcing the integrity of data protection regulations. The failure to address such blatant disregard not only undermines the authority of data protection regulations but also erodes public trust in the ability of regulatory bodies to uphold data privacy standards effectively.

30 ACLU v. Clearview AI Settlement Agreement & Release, 2020CH04353 (Cook County Ill, 9 May 2022), p 2.

31 ibid.

32 ibid; Patrick Toomey and Ashley Gorski, ‘The Privacy Lesson of 9/11: Mass Surveillance is Not the Way Forward’ (American Civil Liberties Union, 7 September 2021) <https://www.aclu.org/news/national-security/the-privacy-lesson-of-9-11-mass-surveillance-is-not-the-way-forward> accessed 17 March 2024.

33 Navacelle, ‘The CNIL and recent cases on personal data protection’ (26 October 2022) <https://navacelle.law/the-cnil-and-recent-cases-on-personal-data-protection/> accessed 14 April 2024.


One could propose opening up the GDPR to amendments or additional rules, but this has been recognised as unlikely to occur at this time.34 However, both civil society organisations and the EU have been working on ways to improve the GDPR, and civil society organisations have signalled their dissatisfaction with its current state. For example, EDRi provides three recommendations to improve the enforcement of data protection rules.35 First, to harmonise national procedural rules for the application of data protection rules. Second, to increase resources for DPAs and the EDPB, which has yet to occur. Third and finally, EDRi calls upon DPAs to make better use of existing tools for enforcement and cooperation under the GDPR.

In turn, the EDPB signalled its wish-list to the EU Commission, noting that ‘several procedural aspects could be further harmonized in EU law, in order to maximise the efficiency of the cooperation mechanism’.36 Based on the EDPB’s wish-list, the EU Commission proposed new rules to streamline the cooperation between DPAs when enforcing the GDPR in cross-border cases,37 in an attempt to ensure stronger enforcement of the GDPR in such cases.38 Before being published, the proposed rules received recommendations for improvement, as they were considered to be lacking. For example, the European Consumer Organisation (BEUC), jointly with Privacy International, Access Now, Bits of Freedom, EDRi, NOYB and other civil society organisations, sent a letter to the EU Commission on the matter. They stated that the purpose of the letter was to propose remedies for some of the bureaucratic flaws these organisations have faced with regard to their national and cross-border GDPR complaints. In essence, their four proposals outline goals for the mutual recognition of complaint admissibility, enhanced supervisory authority cooperation, comprehensive rights for complainants, and efficient appeal procedures. After a careful analysis of the proposed rules, only the third proposal was partially addressed. Article 20 of the proposed rules specifies that the lead supervisory

34 European Federation of Data Protection Officers, ‘Position Paper’ (EFDPO, February 2024) <https://www.efdpo.eu/wp-content/uploads/2024/02/Position-paper-on-GDPR-Evaluation-2024.pdf> accessed 24 March 2024.

35 European Digital Rights, ‘Civil society call and recommendations for concrete solutions to GDPR enforcement shortcomings’ (EDRi, 16 March 2022) <https://edri.org/wp-content/uploads/2022/03/EDRi-recommendations-for-better-GDPR-enforcement.pdf> accessed 17 March 2024.

36 Wojciech Wiewiórowski and Andrea Jelinek, ‘Open letter on the EDPB budget proposal for 2023’ (EDPB and EDPS, 12 September 2022) <https://www.edpb.europa.eu/system/files/2022-09/letteronbudget_out20220068.pdf> accessed 17 March 2024.

37 Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL laying down additional procedural rules relating to the enforcement of Regulation (EU) 2016/679 (2023).

38 European Commission, ‘Data Protection: Commission adopts new rules to ensure stronger enforcement of the GDPR in cross-border cases’ (European Commission, 4 July 2023) <https://ec.europa.eu/commission/presscorner/detail/en/ip_23_3609> accessed 17 March 2024.


authority shall grant the parties under investigation access to the administrative file.39 Complainants, and the organisations representing them, will therefore also have full access to these files. Article 15 likewise grants complainants access to preliminary findings.40 However, after publication, the proposed rules received criticism from BEUC for falling short on an important aspect: the improvement of complainants’ rights to be heard and to obtain timely and important information from the investigations that Lead Supervisory Authorities carry out.41 BEUC acknowledges that the new rules at least provide for closer and earlier cooperation in cross-border investigations. Thus, even when the European Commission issues new rules to enhance the enforceability of the GDPR, those rules unfortunately fall short in the eyes of civil society organisations.

Lastly, and as a final note, at the time of writing a number of GDPR complaints lodged by NGOs, notably by NOYB, remain pending. Of NOYB’s 800+ cases, 85% have not been decided.42 Some of these cases have been pending for more than one and a half years. Some may argue that NOYB is lodging complaints for the sake of complaining, but its track record in both Schrems I43 and Schrems II44 demonstrates the importance of these complaints and refutes this argument. Further, these 563 pending complaints stem from only one NGO, meaning that this number does not include complaints lodged by the other EU civil society organisations that have filed GDPR-related complaints. This clearly highlights that the EU needs to do more to properly uphold its citizens’ privacy rights. The Parliament and Council need to urgently re-evaluate the EDPS, EDPB and DPA budgets, so that more staff can be hired (as stated in the joint EDPS and EDPB open letter) to deal with the number of complaints lodged. Only once these issues have been addressed can the EU effectively uphold its citizens’ privacy rights.

39 Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL laying down additional procedural rules relating to the enforcement of Regulation (EU) 2016/679 (2023), Art 20.

40 ibid, Art 15.

41 The European Consumer Organisation, ‘New Commission proposal falls short in boosting GDPR enforcement’ (BEUC, 4 July 2023) <https://www.beuc.eu/press-releases/new-commission-proposal-falls-short-boosting-gdpr-enforcement> accessed 17 March 2024.

42 None of Your Business, ‘5 Years of the GDPR: National Authorities let down European Legislator’ (NOYB, 23 May 2023) <https://noyb.eu/en/5-years-gdpr-national-authorities-let-down-european-legislator> accessed 17 March 2024.

43 Case C-362/14 Maximillian Schrems v Data Protection Commissioner [2015] ECLI:EU:C:2015:650.

44 Case C-311/18 Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems [2020] ECLI:EU:C:2020:559.


Adjusting Policy and Legislature in Mass Surveillance and Intelligence in Compliance with the European Convention on Human Rights

Abstract

This article examines the implications and responses within Swedish policymaking following the European Court of Human Rights (ECtHR) ruling in the case of Centrum för Rättvisa v. The Kingdom of Sweden. Initially, it provides a comprehensive overview of the definitions and mechanisms of mass surveillance and signals intelligence, setting the stage for a detailed analysis of the Centrum för Rättvisa case. This case highlighted significant concerns regarding privacy infringements through Sweden’s bulk interception surveillance regime, prompting the ECtHR to identify violations of Article 8 of the European Convention on Human Rights (ECHR). In response to the Court’s findings, Sweden embarked on a series of legislative reforms and procedural adjustments to safeguard privacy rights and ensure compliance with human rights standards. Furthermore, Sweden’s efforts to increase transparency, the immediate destruction of irrelevant non-personal data, and the structured inquiry process into the legislative framework underscore the nation’s commitment to aligning its surveillance practices with the principles of the ECHR. The article critically assesses the progress made by Sweden in addressing the ECtHR’s concerns, highlighting the ongoing efforts and plans outlined in the Government’s Action Plans. It concludes by reflecting on the case’s broader implications for Swedish policymaking in mass surveillance and intelligence, emphasising the importance of balancing national security concerns with the protection of individual rights.

* LL.B. Candidate, International and European Law Programme, The Hague University of Applied Sciences.


I. Introduction to mass surveillance

“To us, every complex algorithm is a black box. That means we see the input and output, but we have no idea what’s happening inside the black box and why.”1 What Marc-Uwe Kling stressed in Quality Land, his dystopia about algorithmic dominance, rings true of modern mass surveillance: to most people, signals intelligence and mass surveillance constitute just such a black box. What is commonly known about the mass surveillance of communications begins and ends with the broad news coverage of the Snowden scandal in 2013, in which Edward Snowden exposed the extensive means by which the National Security Agency (NSA) intercepted and stored the user data of American and foreign nationals in considerable amounts.2

Mass surveillance is, to the general public, an issue we read about in published opinions or comment on in social media posts, but forget about as soon as we put our phones on the nightstand and lay our heads to rest. But what about the European Union and its member states’ intelligence collection? Are the privacy rights of European citizens sufficiently safeguarded? And what happens if there is no judicial body to which complaints and concerns can be voiced? Can legislation be amended to respect individual rights whilst maintaining operative functionality? What if the black box only provides you with the binary option of “yes” or “no”, without any reasoning? These questions will be answered using the example of Centrum för Rättvisa v. Sweden (Rättvisa v. Sweden) before the ECtHR, by extensively analysing the Court’s jurisprudence and argumentation and by assessing the Swedish Government’s reaction to the ruling of the Grand Chamber.

All of this raises the question:

Which developments and adjustments in Swedish policymaking occurred following the ruling in the ECtHR case Centrum för Rättvisa v. The Kingdom of Sweden?

A. Definitions relating to mass surveillance

1. Mass surveillance

1 Marc-Uwe Kling, ‘Quality Land’ (Penguin Books 2017).

2 Nick Younger, ‘The case of Edward Snowden’ (National Whistleblower Center, 19 November 2020) <https://www.whistleblowers.org/news/the-case-of-edward-snowden/> accessed 16 March 2024.


Privacy International defines mass surveillance as “[...] the acquisition, processing, generation, analysis, use, retention or storage of information about large numbers of people [...]”3. In the context of this article, the term is to be understood as the ordinary conduct by which states, and in particular their respective intelligence agencies, procure data deemed relevant to national security interests by intercepting communications from cable or satellite sources. Mass surveillance also extends to CCTV video monitoring and facial recognition software, which are, however, not discussed in this article.

2. Signals intelligence

Definitions of the term Signals Intelligence (SIGINT) can be derived from multiple sources. The NSA defines SIGINT as any information formally “derived”4 – meaning intercepted5 – which entails inter alia radar, telecommunication, and any other digital signal interceptable from “foreign”6 sources by the NSA. The ECtHR’s definition in its judgement in Rättvisa v. Sweden complements this, applying it to the domestic rather than the foreign context: “intercepting, processing, analysing, and reporting intelligence from electronic signals. These signals may be processed to text, images, and sound.”7 SIGINT, therefore, entails not only telecommunication or radar data from foreign actors but also the communications of domestic citizens, for example through end-to-end communication applications such as WhatsApp. Furthermore, the ECtHR’s definition covers not only the mere content but also the metadata8 of these intercepted communications. In the case of Sweden, the entity carrying out SIGINT is the National Defence Radio Establishment (Försvarets radioanstalt, FRA)9, which may only do so with the explicit permission of the Government of Sweden, its offices, its military, and the otherwise permitted governmental

3 Privacy International, ‘Mass Surveillance’ <https://privacyinternational.org/learn/mass-surveillance> accessed 16 March 2024.

4 National Security Agency ‘Signals Intelligence (SIGINT) Overview’ <https://www.nsa.gov/Signals-Intelligence/Overview/> accessed 03 March 2024 (NSA SIGINT).

5 Attained by technological means, e.g. “wiretapping”.

6 NSA SIGINT (n 4) accessed 03 March 2024.

7 Case 35252/08 Centrum för Rättvisa v. Sweden [2021] ECtHR Grand Chamber Judgement (Rättvisa v. Sweden GC), para 14.

8 Information on how, when, and through which means a communication was sent, the IP address involved, and any other data not expressly visible in, for example, an image.

9 Rättvisa v. Sweden GC (n 7), para 17.


entities, namely the Security Police and the National Operative Department of the Police Authority (NOA).10

3. Means of interception

The ECtHR, in Rättvisa v. Sweden, sets out that interception – the act by which the interceptor copies or otherwise obtains the communication – is carried out without distinction by intercepting signals via the communication satellites or the cables belonging to the relevant communication providers. The medium carrying the communication is referred to as the ‘communication bearer’.11

4. Detailed tasking directive

For the FRA to engage in SIGINT on behalf of any of the entities mentioned under point two, these entities must issue a Detailed Tasking Directive (DTD).12 Such a DTD must state the goals, purpose, and precise requirements of the SIGINT action to be carried out. It is important to note that while such a SIGINT operation may be used to investigate occurrences, locations, points of interest, etc., it may never be directed towards a “specific natural person”.13 The purpose of specifically empowering the Security Police and the NOA to issue such a DTD is to ensure the optimal collection of data of strategic importance to the national security of Sweden.14

5. Legal reasons for interception

The Signals Intelligence Act (Lagen om signalspaning i försvarsunderrättelseverksamhet; 2008:717), in its paragraph 1(1-8), details inter alia the following as valid grounds for conducting SIGINT: external imminent military threats (or threats with the potential to develop into an imminent situation), international terrorism (including, but not limited to, the surveillance of drug cartels and other cross-border crime), threats to societal infrastructure,15 foreign intervention against Sweden’s national autonomy, non-Swedish conflicts, and the development and

10 ibid 19.

11 ibid 14.

12 Rättvisa v. Sweden GC (n 7) para 19.

13 Lag (2008:717) om signalspaning i försvarsunderrättelseverksamhet (SFS 2008:717, as amended by SFS 2022:495) § 4 (Swed).

14 Rättvisa v. Sweden GC (n 7) para 20.

15 Attacks on supplies of inter alia electricity or water and attacks by foreign technological means (hacking).


proliferation of dangerous weapons (e.g., nuclear missiles).16 It follows that SIGINT operations are usually conducted in circumstances that are far from marginal and that significantly impact Sweden or the international order.

6. Development activities

Outside the scope of the grounds detailed in the preceding point, the Signals Intelligence Act, under paragraph 1(3),17 details a further reason to intercept communication. The FRA may also intercept communications for the purpose of ‘development activities’. These development activities provide the Swedish state and its intelligence branch with insight into technological developments and help them keep pace with the global state of the art. The critical difference is that while the grounds detailed previously concern situations and their SIGINT, development activities concern not the “what” of communications but rather the “how”18 – aiding the technological development of Swedish intelligence capabilities.19

7. Selectors (common to intelligence organisations worldwide)

To limit the impact of SIGINT on individual personal integrity through the automated analysis of intelligence obtained by cable interception, the Signals Intelligence Act contains provisions on so-called ‘selector’ terms: indicators used to identify communications relevant to the FRA while limiting the ‘accidental’ capture of harmless communications. These selectors can be phone numbers affiliated with extremist groups, specific keywords (e.g., ‘bomb’, ‘terror attack’, ‘presidential assassination’), and also industrial products.20 In exceptional circumstances, these selectors can be directed towards a particular natural or legal person, e.g., Osama Bin Laden.21
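To make the mechanism concrete, the following minimal Python sketch illustrates how selector-based filtering over intercepted traffic could work in principle. Every selector, number, and message here is hypothetical, and real SIGINT systems are classified and operate at an entirely different scale; the sketch only mirrors the distinction between ordinary content selectors and the exceptional ‘strong’ selectors tied to a specific person.

# Purely illustrative sketch of selector-based filtering; all data is hypothetical.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str   # metadata, e.g. a phone number
    content: str  # the content of the communication

# Exceptional 'strong' selectors are tied to a specific natural or legal person.
STRONG_SELECTORS = {"+00-555-0100"}
# Ordinary selectors match content, e.g. keywords of intelligence interest.
KEYWORD_SELECTORS = {"bomb", "terror attack"}

def matches_selector(msg: Message) -> bool:
    """True if the message would be retained for analyst review."""
    if msg.sender in STRONG_SELECTORS:
        return True
    text = msg.content.lower()
    return any(keyword in text for keyword in KEYWORD_SELECTORS)

# Everything on the bearer is captured first; only matches are retained,
# limiting the 'accidental' retention of harmless communications.
intercepted = [
    Message("+00-555-0100", "meet at noon"),     # matches a strong selector
    Message("+00-555-0199", "happy birthday!"),  # matches nothing; discarded
]
retained = [m for m in intercepted if matches_selector(m)]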

8. Bulk data collection

16 Lag (2008:717) om signalspaning i försvarsunderrättelseverksamhet (SFS 2008:717, as amended by SFS 2022:495) § 1(2) (Swed).

17 ibid § 1(3).

18 So, by which means the communication is or was carried out.

19 Rättvisa v. Sweden GC (n 7) para 24.

20 European Commission for Democracy through Law (Venice Commission), ‘Report on the Democratic Oversight of Signals Intelligence Agencies’ (CDL-AD (2015)011, 15 December 2015) para 2.

21 Lag (2008:717) om signalspaning i försvarsunderrättelseverksamhet (SFS 2008:717, as amended by SFS 2022:495) § 3 (Swed).


The term ‘bulk data (collection)’ is used by international organisations, regional Governments, and the European Union, the last of which provides, for the purposes of this article, the most comprehensive definition. In the Report of the Venice Commission on the Democratic Oversight of Signals Intelligence Agencies, ‘bulk data’ is defined as “very large quantities of communications data [content data and metadata] collected by automated processes”.22 This term ties in with the terms detailed in section I(A) of this article, as bulk data collection is only possible through relevant selectors which allow large-scale automated systems to collect communications. According to the judgement in Rättvisa v. Sweden, more than seven states, inter alia Sweden, have publicly disclosed that they operate or have operated systems of bulk data collection.23

II. The case Rättvisa v. Sweden

A. The facts of the case Rättvisa v. Sweden

The organisation central to this case, Centrum för Rättvisa (CFR), was established in 2002 and is based in Stockholm.24 Its focus lies in advocating for clients in legal matters relating to human rights and liberties, under both the ECHR and Swedish law.25 Additionally, CFR actively contributes to educational and research initiatives and participates in public debates on individual rights and freedoms.26 Functioning as a non-governmental entity that oversees, and at times criticises, state action, CFR expressed concerns about the privacy of its communications, conducted through various channels such as email, phone, and fax, both domestically and internationally.27 It raised concerns about the potential interception and surveillance of these communications by signals intelligence methods.28 Despite these concerns, the organisation opted not to pursue domestic legal action, citing the inadequacy of the means available before Swedish judicial bodies to address its grievances under the ECHR.29

B. The alleged violation of article 8 of the ECHR

22 European Commission for Democracy through Law (Venice Commission), ‘Report on the Democratic Oversight of Signals Intelligence Agencies’ (CDL-AD (2015)011, 15 December 2015), page 37.

23 Rättvisa v. Sweden GC (n 7) para 131

24 Rättvisa v. Sweden GC (n 7) para 10.

25 ibid 11.

26 ibid.

27 ibid 12.

28 ibid.

29 ibid 13.


The CFR claimed that the bulk interception carried out under Swedish law, combined with the lack of judicial regulation and redress, meant that these acts by Sweden’s agencies violated Article 8 of the ECHR, the right to respect for private life.

C. Admissibility to the jurisdiction of the ECtHR from the application preceding the proceedings before the Grand Chamber

Upon initial application of CFR to the ECtHR, the Swedish Government (the Government) raised the question of admissibility against the former, claiming that the CFR was (a) not to be considered in exhaustion of all domestic remedies, (b) could, as a legal person, not be considered a victim of a violation of Article 8 of the ECHR and asserted that (c) the application by CFR was manifestly ill-founded.30

1. Exhaustion of domestic remedies preceding the relinquishment to the Grand Chamber

As to the question of the exhaustion (or non-existence) of a domestic remedy, the Court referred to the case Kennedy v. The United Kingdom (Kennedy)31 and recognised that the Foreign Intelligence Inspectorate (the Inspectorate) investigates, on request, whether an individual claimant’s communications have been intercepted by the FRA. Furthermore, the Court held that after the interception analysis was concluded, the Inspectorate assessed the interception’s compatibility with the law, of which it then notified the requesting person, natural or legal. Following its assessment, the Inspectorate had the power to order the cessation of any proceedings or the destruction of the communications obtained through SIGINT if it held them to be in breach of Swedish law.32

Conversely, the Court noted that the evaluation of the interception merely comprised an analysis of its lawfulness, and that the Inspectorate provided the complainant only with this binary information – lawful or unlawful – and not with information on whether there had been any interception at all.33 The Inspectorate, as claimed by the CFR and since confirmed by the ECtHR, furthermore refused to state any reasons for its

30 Case 35252/08 Centrum för Rättvisa v. Sweden [2018] Judgement ECtHR Third Section (Rättvisa v. Sweden 2018), para 83.

31 Case 26839/05 Kennedy v. The United Kingdom [2010] Judgement ECtHR Third Section (Kennedy).

32 Rättvisa v. Sweden 2018 (n 30), para 171.

33 ibid 173.


prior decision. This stood opposed to the principle of “publication of the tribunal’s legal rulings[…]”34 established in the Kennedy case, which is regarded as enhancing legal certainty and scrutiny. The decision of the Inspectorate would (theoretically) leave an applicant without a remedy, as it is final and cannot be appealed before any other court.35

While, in theory, the FRA must make available to the applicant the place, purpose, and categories of SIGINT obtained from the applicant,36 the FRA can refuse to disclose this information by claiming secrecy,37 which in practice covers significant portions of all SIGINT obtained. Furthermore, the Court held the remedy of an appeal to the Administrative Court in Sweden inoperable. It ruled so on the basis of the secrecy argument: considerable portions of SIGINT cannot be disclosed before any court of law. This inability to disclose (theoretically) affected the CFR and its rights, as “the FRA’s procedure to correct, block or destroy personal data depends on the individual’s knowledge that personal data has been registered and the nature of that data.”38

The Court noted further that the Government had failed to provide tangible evidence of the effectiveness of the remedy in question and, comparing the situation with the Kennedy case, found the possible remedies ineffective.39 Hence, the Court allowed the application to be brought before the ECtHR without prior domestic legal action and dismissed the Government’s argumentation.40

2. Victim status under the ECHR as a legal person and manifestly ill-founded application

The Court held, while recognising doubts and disputed questions as to the possible private life of a legal person, that means of communication such as fax, calls, and mail, with natural and legal persons alike, still fell under the protection of Article 8 of the ECHR.41 The Court furthermore supported the validity of a legal person’s victim status under SIGINT surveillance with the precedent case Association for European Integration and Human Rights and Ekimdzhiev v. Bulgaria, no. 62540/00, § 60.42 The Court,

34 ibid.

35 ibid.

36 ibid 174.

37 ibid 175.

38 ibid.

39 Rättvisa v. Sweden 2018 (n 30), para 177.

40 ibid 84.

41 ibid 85.

42 Case 62540/00 Association for European Integration and Human Rights and Ekimdzhiev v. Bulgaria [2007] ECtHR Judgement Fourth Section.


in that case, held that while a legal person cannot be guaranteed or interpreted to have a personal life, “its mail and other communications [...] are covered by the notion of “correspondence” [...]”43, and that it is entitled to respect for its home in light of Article 8 of the ECHR.44 Therefore, the Court asserted that the communications of legal persons merit the application of the ECHR and that the CFR thus enjoyed victim status.45

Lastly, as to the admissibility criteria, the Court rejected the Government’s argument that the application of Centrum för Rättvisa was manifestly ill-founded and confirmed its admissibility on all grounds.46

D. Objections by the government to the victim status of CFR before the Grand Chamber

1. The government’s argument

After the relinquishment of the case to the Grand Chamber of the ECtHR, the Government once more rejected the possibility of victim status applying to the CFR, claiming that its SIGINT was merely foreign and therefore did not concern the domestically located applicant.47 Building on its argument in paragraphs 155 and 156,48 the Government further claimed that the communications of the CFR, being mostly domestic, would likely not be subject to monitoring, as they are not transferred through channels crossing the borders of Sweden; and in the event that they were, interception would be subject to a strict automated regime of selectors which are, according to the Government, specifically designed to target only communication explicitly relevant to intelligence.49

The Government argued that, due to these strict and rigid means of automated and manual approval, communications not relevant to the FRA would irretrievably pass through its channels without being collected, rendering the risk of false interception and analysis of unlawfully intercepted SIGINT “virtually non-existent”.50 Further, the Government claimed

43 ibid para 60.

44 ibid.

45 ibid 85,86.

46 ibid 87.

47 Rättvisa v. Sweden GC (n 7) paras 155,156.

48 ibid 155,156.

49 ibid 157.

50 Rättvisa v. Sweden GC (n 7) para 157.


that, due to this strict scrutiny and the absence of any personal effect on the CFR, the Court should dismiss the CFR’s claim and victim status.

Prior to the relinquishment of the case to the Grand Chamber of the ECtHR, the Court had argued, in paragraph 173 of the judgement of 19 June 2018,51 that the lack of detailed reasons for interception allowed the CFR to be considered to have exhausted its domestic remedies. In response, in its statement before the Grand Chamber, the Government argued that the Court’s prior decision regarding its conduct “[…] was not based on earlier case law and unduly expanded the relevant requirements.”52 In conclusion, the Government’s essential arguments against the validity of the CFR’s victim status were: the mostly domestic nature of the CFR’s communications,53 the strict mechanisms of control against arbitrary and unnecessary interception of individuals,54 and – in the Government’s view – the satisfied requirement of a domestic remedy.55

2. The applicant’s argument

The CFR, in the case before the Grand Chamber, argued in essence that the requirements set out in Roman Zakharov v. Russia56 were fulfilled, allowing for the CFR’s appeal before the ECtHR and for the Court’s assessment of a piece of national legislation.57 Contrary to the Government’s distinction between foreign and domestic communication, the CFR pointed out that, while a communication may be intended to occur between two purely domestic actors, its means of transmission could nonetheless carry cross-border elements. The CFR elaborated that this occurs through two different chains of events: either through the involvement of a foreign party, or through the communication being transferred via a network – by air (satellites) or cable – with implications outside Swedish territory. According to the CFR, this is possible because the servers for such communications often happen to be located beyond the Swedish border, allowing the FRA to consider them foreign communications.58 Lastly, the CFR reinforced its

51 Rättvisa v. Sweden 2018 (n 30), para 173.

52 Rättvisa v. Sweden GC (n 7) para 159.

53 ibid 155,156

54 ibid 157.

55 ibid 159.

56 Case 47143/06 Roman Zakharov v. Russia [2015] ECtHR Grand Chamber Judgement (Zakharov).

57 Rättvisa v. Sweden GC (n 7) para 163.

58 ibid 164.


claim as to the non-existence of an adequate remedy at the national level, thus giving rise to an application to the ECtHR under a claim of violation of Article 8 of the ECHR.59

3. The court’s application of facts to the respective arguments

In its assessment of the arguments brought forth by the parties to this case, the Court, in its Grand Chamber configuration, explained its deviation from its principle of no actio popularis – the assertion that applicants may not bring claims as to domestic law before the Court in abstracto. The Court’s reasoning dates back to the case Klass and Others v. Germany60 of 06 September 1978, later affirmed by the benchmark cases Kennedy v. The United Kingdom61 and Roman Zakharov v. Russia.62 In Klass and Others, the Court held that depriving an applicant of the right to bring a claim before the European Commission of Human Rights (later abolished and merged with the ECtHR under Protocol 11 in 199863), owing to the natural secrecy of matters concerning intelligence and surveillance, would render Article 8 of the ECHR defunct and open to exploitation.64 Furthermore, in Klass and Others, the Court held that such a violation of the rights of natural persons under the ECHR could not be tolerated “[…] by the simple fact that the person concerned is kept unaware of its violation.”65 Subsequently, the Court developed a two-stage test known as the ‘Kennedy Approach’66 to ensure that secretive surveillance measures remain subject to scrutiny and oversight by both national judicial authorities and the ECtHR. The two stages of the Kennedy Approach are assessed as follows:

An applicant’s assertion of victimhood hinges upon the fulfilment of specific conditions. The first condition examines the scope of legislation allowing secret surveillance measures. It entails assessing whether the applicant could potentially be impacted by such legislation. This can stem from their belonging to a group targeted by the legislation or from the legislation’s broad impact on all users of communication services, enabling interception of anyone’s communications beyond legal control.67

59 ibid 165.

60 Case 5029/71 Klass and Others v. Germany [1978] ECtHR Plenary Court Judgement (Klass).

61 Kennedy (n 31).

62 Zakharov (n 56).

63 Protocol No 11 to the Convention for the Protection of Human Rights and Fundamental Freedoms, Restructuring the Control Machinery Established Thereby, 11 May 1994, ETS No 155 p 2.

64 Klass (n 60), para 34.

65 Klass (n 60), para 36.

66 Zakharov (n 56), para 171.

67 Zakharov (n 56), para 171.


The second condition centres on the availability and effectiveness of remedies at the national level. The level of scrutiny by the ECtHR is dependent upon the efficiency of these remedies.68 If the domestic system fails to provide effective remedies for individuals suspecting they were subjected to secret surveillance, it might incite widespread suspicion and concern among the public.69 In such instances, greater scrutiny by the ECtHR is warranted. The mere threat of surveillance, in these cases, may itself constitute a violation of the right to free communication guaranteed by Article 8 of the ECHR.

Conversely, if the national system does offer effective remedies, it becomes more challenging to justify widespread suspicion of abuse. In such scenarios, an individual may only claim victimhood due to the existence of secret measures or legislation if they can demonstrate a personal risk of being subjected to such measures.70

Transferring these criteria to the present case, the Court agreed with the Government’s statement of reasons that the CFR is not to be considered a person affected or specifically targeted.71 Conversely, it held that the second part of the first condition of the Kennedy Approach (a person affected by virtue of broad bulk interception) had to be examined.72 The Court held that, while the Government’s systems of scrutiny and review (the Intelligence Inspectorate, the Ombudsmen, or the Chancellor of Justice) provided certain aspects of a domestic remedy,73 the broader public could not be appeased in its concern about large-scale bulk interception by the mere information that interception had occurred, without reasoning, which is all that the FRA and Government’s remedy yields, given the facts already discussed in II(C)(1) of this article.74 In conclusion, the Court dismissed the Government’s argument on the aforementioned grounds and permitted itself to examine the Swedish legislation, disregarding the CFR’s legal status, in abstracto.

E. The merits of the case and the findings of the court

1. The applicant’s view

68 ibid.

69 ibid.

70 ibid.

71 Rättvisa v. Sweden GC (n 7) para 168.

72 ibid 169.

73 Rättvisa v. Sweden GC (n 7), para 173.

74 ibid 175.


The CFR asserted that the Swedish model of interception, namely the bulk interception regime imposed by the Swedish authorities on the population of Sweden, stood in conflict with the Court’s general view on bulk interception regimes as established in Klass and Others75 and other benchmark cases on mass surveillance.76 The CFR indicated that while certain regimes may fall within the scope of the margin of appreciation of the relevant state’s authority, the Government of Sweden had implemented a regime far too broad and general to be considered within the scope of that margin of appreciation.77 The CFR furthermore noted that any departing judgement by the Court could pose the danger of leaving the relevant case law of the Court defunct or inconsistent, e.g., on the retention of fingerprints en masse in S. and Marper v. the United Kingdom (30562/04 and 30566/04).78 Should the Court find the application of bulk interception legally justified, the CFR urged the Court to consider the minimum safeguards in Roman Zakharov79 “[…] as an initial framework […]”80 for scrutinising the application of such regimes.81

The CFR further requested and invited the Court to scrutinise the possibility of a purely judicial system of approvals for bulk interception, as this is the method of choice for targeted (meaning individual) interception of communications. This would, in turn, oblige the relevant intelligence agencies to obtain a permit from a court in the same manner as is required for intercepting the communications of an individual who is alleged to have committed a crime and whose SIGINT would benefit the criminal investigation and public security.82 While accepting that the oversight bodies of the Swedish FRA held the necessary independence from the executive power of the state, the CFR noted that the oversight body for the intelligence agency must be provided with a broader scope of powers, ranging from legally binding decisions to access to documents falling under the scope of secrecy, thereby making the body in question accessible to the public eye and providing scrutiny.83 Notably, the CFR proposed fostering the existence of an effective remedy in the form of either “[…] post-fact notification of the subject of surveillance, a possibility to request information about the surveillance or […] a

75 Klass (n 60).

76 Rättvisa v. Sweden GC (n 7), para 182.

77 ibid

78 Cases 30562/04 and 30566/04 S. and Marper v. the United Kingdom [2008] ECtHR Grand Chamber Judgement.

79 Zakharov (n 56)

80 Rättvisa v. Sweden GC (n 7), para 183.

81 ibid.

82 Rättvisa v. Sweden GC (n 7) paras 185-187.

83 ibid 190.


body that can examine complaints without requiring the individual to submit evidence.”84 With regard to the general scope of the powers that the Swedish FRA encompasses, the CFR noted that, albeit its restraints are to be considered sufficient, their effect was undermined by the broad range of discretion.85 Importantly, the CFR noted with concern that the aforementioned broad scope of the FRA, and its use by state actors outside the relevant foreign intelligence framework, posed a further risk of impacting the general population of Sweden. This, the CFR argued, was and is done by permitting executive actors to issue DTDs.86 87

Within the Swedish system of interception, and intelligence in general, the CFR alleged two major irregularities.88 Firstly, the CFR noted that, as to the different processes of electronic data processing (EDP), the Government had failed to impose a legal obligation on the FRA to maintain a concise record of (bulk) interceptions; and secondly, it alleged shortcomings in legal clarity as to the legality and circumstances of bulk interception by the FRA,89 with an implied discrimination in protection between natural and legal persons – the latter being excluded from the FRA Data Protection Act.90 Lastly, in paragraph 198 of the judgement in Rättvisa v. Sweden, the CFR pointed out that the intended oversight body for the FRA, the Inspectorate,91 lacked the power to issue binding decisions, leaving it alongside other institutions such as the Ombudsmen or the Chancellor of Justice, which, albeit powerful institutions, do not hold the power to issue binding decisions, nor have applications to their offices ever been submitted and processed successfully.92

2. The government’s view

Opening its argumentation, the Government asserted that, by the nature of SIGINT, the operations of the FRA contributed to the legal obligation to protect civilian life under the ECHR and thereby served not only a foreign intelligence purpose but also one of a civil

84 ibid 191.

85 ibid 193.

86 Detailed Tasking Directives as mentioned under I(A)(3) of this Article.

87 Rättvisa v. Sweden GC (n 7), para 193.

88 ibid 196.

89 ibid.

90 ibid 197.

91 “Foreign Intelligence Inspectorate”

92 Rättvisa v. Sweden GC (n 7), para 198.


service,93 and submitted that the collection of SIGINT differs significantly from secret surveillance, as the former treats the persons whose communications have been intercepted mainly as “[…] carriers of information”94 and not as concerned individuals.95 96

While rejecting the application of the prerequisite of a “[…] reasonable suspicion [...]”97 outlined by the CFR in relation to relevant case law,98 the Government reiterated the existence of a resilient and adequate framework of safeguards ensuring the compliance of Swedish intelligence with the relevant domestic and international rights of natural persons.99 With regard to the CFR’s submissions under paragraph 193 – the concern as to the power of the NOA to issue DTDs, which in turn enable the NOA to utilise intelligence capabilities100 – the Government invoked, inter alia, the necessity of, and the strict control regimes governing, developmental activities carried out with intelligence materials.101 As to the adequacy of safeguards, the Government reiterated the following: due process through a system of court permissions and/or notification of the Foreign Intelligence Court,102 public and private hearings with representatives of the public interest,103 mandatory mission permit requests that detail “[...] all parameters, including the conditions needed to limit such interference”,104 and various safeguards regarding the duration of interception and the storage of all communications obtained.105 The Government confirmed the right of state actors and other apparatus of the Swedish authorities106 to utilise the communications obtained by SIGINT, yet emphasised that, as of 2018, no such request had been confirmed by the FRA.107 Having examined the CFR’s arguments as to the remedies rendered virtually inaccessible before the responsible courts by the secrecy argument, the Government reiterated the existence of

93 ibid 202.

94 ibid 203.

95 A ‘concerned individual’ here means a person whose communications are intercepted for the purpose of obtaining incriminating evidence against that person.

96 Rättvisa v. Sweden GC (n 7), para 203.

97 ibid 205.

98 Zakharov (n 56) and Case 37138/14 Szabó and Vissy v. Hungary [2016].

99 Rättvisa v. Sweden GC (n 7), para 206.

100 ibid 193.

101 ibid 207,208.

102 ibid 208.

103 ibid 209.

104 ibid 210.

105 Rättvisa v. Sweden GC (n 7) paras 211,212.

106 Inter alia, the Security Police or the Armed Forces.

107 Rättvisa v. Sweden GC (n 7), para 214.


multiple, in its view potent, remedies.108 The Government thus systematically rejected the arguments of the CFR and the view taken by the Court in the preceding proceedings.109

3. Intervening opinions by state parties

(a) The Republic of Estonia (Estonia)

In its intervening argument, the Estonian Government argued for updated standards to evaluate the legality of mass surveillance for foreign intelligence, highlighting its important differences from criminal surveillance.110 Estonia suggested modifying legal criteria to better reflect the scope and secrecy needs of intelligence gathering, including specifying permissible areas for surveillance and eliminating the requirement to notify individuals under surveillance, emphasising the importance of secrecy in national security efforts.111

(b) The French Republic (France)

The French Government highlighted the critical role of bulk interception in detecting unknown threats. It argued against the “reasonable suspicion” criterion due to the distinct nature of such operations compared to targeted surveillance.112 The French Government reiterated that states should have considerable freedom in managing these regimes, emphasising the need for in-context evaluation of safeguards against abuse.113 France criticised the stricter scrutiny methods seen in past judgments, advocating that regimes without mandatory judicial pre-authorisation could be compatible with privacy rights, provided they offered subsequent independent oversight.114 Furthermore, France differentiated between the privacy impacts of collecting communications data versus content, suggesting lesser safeguards for the former.115 On intelligence sharing, France underscored the need for secrecy and acknowledged that procedures and safeguards may differ internationally, highlighting the complexity of using foreign intelligence data.116

(c) The Kingdom of the Netherlands (Netherlands)

108 ibid 219.

109 ibid 221.

110 ibid 222.

111 ibid 223.

112 ibid 224.

113 ibid 225.

114 ibid.

115 Rättvisa v. Sweden GC (n 7) para 226.

116 ibid 227.


The Dutch Government asserted that bulk interception is essential for identifying national security threats, highlighting the vulnerability of key sectors like energy and transportation to cyber-attacks and terrorism. It emphasized the insufficiency of targeted interception due to the sheer volume of global digital communications.117 The Netherlands argued against the need for updated safeguards or the imposition of a “reasonable suspicion”118 criterion, suggesting such measures would impair intelligence effectiveness without significantly improving rights protections.119 The Netherlands further contended that bulk interception, governed by specific aims and data use restrictions, does not inherently infringe on privacy more than targeted interception, which analyses almost all captured communications.120 Lastly, the Netherlands opposed mandatory detailed justifications for interception criteria, citing threat unpredictability and asserting that existing oversight mechanisms suffice.121

(d) The Kingdom of Norway (Norway)

The Norwegian Government argued for broad discretion in implementing bulk interception for national security, citing the need to adapt to rapid technological advancements and the challenges of tracking hostile entities.122 Norway emphasised the essential role of modern surveillance capabilities in identifying unknown digital threats and maintaining national security.123 Norway further argued against adopting the Court of Justice of the European Union’s oversight approach, with its focus on sufficiency and adequacy, as this could undermine states’ flexibility in operating bulk interception regimes.124 Additionally, Norway advised against adopting CJEU concepts and criteria, highlighting the differences between EU and non-EU states and legal frameworks, particularly in interpreting “proportionality” and rights related to personal data protection.125

(e) Recognizing the political interest and summarizing the intervening arguments of third states

The political motivations and stances of the involved state Governments – namely Estonia, France, the Netherlands, and Norway – collectively reveal a common thread in their approach

117 ibid 228.

118 ibid 230.

119 ibid.

120 ibid 229, 231.

121 ibid 232.

122 ibid 233.

123 ibid 233.

124 ibid 234.

125 Rättvisa v. Sweden GC (n 7) para 235.


towards bulk interception for national security purposes. In this context, these Governments, representing a collective political stakeholder, underscore the critical importance of adapting surveillance mechanisms to modern technological capabilities and emerging security threats, reflecting a broader political and strategic stance on the balance between national security and privacy rights. This collective stakeholder emphasized the unique nature of bulk interception as an indispensable tool for foreign intelligence and national security.126 The argument for adapting the criteria for assessing the ECHR’s compatibility of such surveillance practices to the specific context of bulk interception underlines a shared understanding of the evolving nature of threats and the necessity for states to detect and counteract these threats proactively.127

This stance suggests a prioritisation of national security, recognising that the traditional frameworks for surveillance assessment, especially those suited to criminal investigations, may not be adequate for the broad and long-term scope required in gathering foreign intelligence. The political stakeholders’ consensus on the inapplicability of the ‘reasonable suspicion’ criterion for bulk interception operations further highlights a shared belief in the need for flexibility and discretion in intelligence gathering over the alleged concerns of natural or legal persons within the stakeholders’ relevant jurisdictions.128 This reflects an approach to surveillance motivated by foreign and security policy, acknowledging the challenges of identifying unknown threats in a rapidly changing digital landscape.129 The Governments’ collective viewpoint advocates a regulatory environment that allows for the effective operation of bulk interception regimes while also, to a certain extent, considering the protection of individuals’ rights to privacy.

The aforementioned interests show a mutual recognition of the importance of maintaining operational secrecy and the effectiveness of intelligence operations, which at times conflicts with requests for information by natural or legal persons who are allegedly affected. The Governments argue against the imposition of stringent pre-operational requirements, such as notifying affected individuals, which, in their opinion, could potentially compromise the efficacy and secrecy of national security measures.130 This stance underscores the geopolitical consideration of safeguarding state security interests in an increasingly interconnected and

126 ibid 223, 224, 228, 234.

127 ibid 222, 224, 228, 233.

128 ibid.

129 ibid.

130 Rättvisa v. Sweden GC (n 7), paras 223,227,231,234.


digital global environment. Additionally, the collective political stakeholder131 expresses a shared belief in the necessity of ex post facto oversight mechanisms rather than stringent pre-authorisation requirements for surveillance operations. This perspective balances the need for state flexibility in national security operations with the provision of safeguards against potential abuses of surveillance powers.132

In conclusion, the collective stance of the involved state Governments in the Rättvisa v. Sweden case133 reflects a complex interplay between the imperatives of safeguarding national security and adapting to technological advancements while navigating the legal and ethical considerations of privacy rights. This political stakeholder’s approach underscores a recognition of the challenges posed by modern threats and the strategic necessity of maintaining robust, flexible, and effective intelligence capabilities in the face of these challenges.

F. The ECtHR’s decision

1. The court’s preliminary remarks

The Court addressed the bulk interception of cross-border communications, underscoring the unique challenges this form of surveillance presents in the digital era. With most communications being digital and global, such surveillance extends far and wide, necessitating, yet complicating, the establishment of safeguards. Unlike targeted interception, bulk interception is primarily used for foreign intelligence and for identifying new threats, including from unknown entities.134 This necessitates a degree of secrecy that limits public information on its operation, making safeguards both crucial and hard to define.135 The Court faced the task of evaluating these regimes for Convention compliance amid evolving technological capabilities and threats like terrorism and cyberattacks, requiring an assessment based on the few details available about their implementation and the safeguards against potential abuse.136

2. The court on the existence of an interference

131 The “collective stakeholder” means the combination of the intervening opinions by the states in the Grand Chamber judgement.

132 Rättvisa v. Sweden GC (n 7) paras 225,232,235.

133 Rättvisa v. Sweden GC (n 7).

134 ibid 236.

135 ibid.

136 Rättvisa v. Sweden GC (n 7) para 237.


The Government argued that there was no interference with the rights granted in Article 8, as Sweden’s bulk interception regime did not directly target the CFR, deeming the chance of its communications being intercepted very low.137 The Court, however, viewed bulk interception as an interference with Article 8 rights that increases across several stages: (a) capturing and initially retaining communications, (b) applying selectors to sift through the data, (c) analysing selected data, and (d) retaining and possibly sharing the processed data.138 Initially, vast amounts of data are intercepted, including that of individuals of no interest to the intelligence services. Selectors then filter this data, with analysts reviewing material at a later stage. The final use of the data, including sharing, marks the last stage.139 The Court emphasised that Article 8 applies at all stages, with the degree of interference and the need for safeguards increasing as the process progresses, and highlighted the importance of protection especially when personal data is processed or accessed.140
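Purely as an illustration of the Court’s taxonomy, the following hedged Python sketch models these four stages as a toy pipeline. Every function name and data item is hypothetical; the only point the sketch captures is that each successive stage narrows the data pool while deepening the scrutiny of, and thus the interference with, what remains.

# Toy model of the four stages of bulk interception distinguished by the Court.
# All names and data are hypothetical; only the staged structure is the point.

def capture(bearer):
    # Stage (a): everything on the communication bearer is copied and retained.
    return list(bearer)

def apply_selectors(data, selectors):
    # Stage (b): automated filtering discards traffic matching no selector.
    return [item for item in data if any(s in item for s in selectors)]

def analyse(selected):
    # Stage (c): analysts review the remaining material (human judgement,
    # crudely stood in for here by a keyword check).
    return [item for item in selected if "missile" in item]

def retain_and_share(reports):
    # Stage (d): final retention and possible sharing with other services.
    return {"retained": reports, "shared_with": ["partner service"]}

traffic = ["missile shipment schedule", "weekend plans", "missile parts invoice"]
result = retain_and_share(
    analyse(apply_selectors(capture(traffic), ["missile", "invoice"])))
# Each stage leaves fewer items but involves closer scrutiny of what remains.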

In its assessment, the Court referred to its jurisprudence in the case Leander v. Sweden, under which “Both the storing and the release of [information about the applicant’s private life] [...] amounted to an interference with his right to respect for private life as guaranteed by Article 8 § 1[...]”;141 hence, the storing of data relating to the private life of the citizens of Sweden in the case at hand constituted such an interference.

3. The court on the justification of the interference

The Court stipulated that any infringement upon an individual’s rights under Article 8 of the ECHR must be scrutinized under three primary criteria: legality, legitimate aim, and necessity within a democratic society.142 For an interference to be justified, it must have a basis in domestic law, align with the principles of the rule of law, and be both accessible and foreseeable to the individual concerned. This legal framework ensures that measures are not arbitrary and that individuals have adequate protection against unauthorized surveillance.143

The Court acknowledged that the principle of ‘foreseeability’ in the context of secret surveillance operations, such as the interception of communications, cannot mean that individuals should be able to foresee surveillance actions and adapt their behaviour accordingly.

137 ibid 238.

138 ibid 239.

139 ibid 240-243.

140 ibid 244.

141 Case 9248/81 Leander v Sweden [1987] ECtHR Judgement para 48.

142 Rättvisa v. Sweden GC (n 7), para 246.

143 Rättvisa v. Sweden GC (n 7), para 246.


Nevertheless, the legislation must be explicit about the conditions under which the state is empowered to employ such surveillance measures.144 This necessity stems from the inherent risks of arbitrariness associated with executive powers exercised in secrecy. Therefore, the Court held that the law must clearly outline the scope of authority, application conditions, and the procedure for exercising surveillance to safeguard individuals against arbitrary interferences.145

Secret surveillance legislation is evaluated for both the ‘lawfulness’ and the ‘necessity’ of the interference. The ‘quality of law’ in this context implies that the legislation should not only be accessible and predictable but must also restrict the application of secret surveillance measures strictly to situations deemed necessary in a democratic society. This entails the incorporation of adequate and efficient safeguards and guarantees against potential abuses.146

The Court has established minimum legal safeguards to mitigate the risk of abuses of power in surveillance activities. These safeguards, in the Court’s view, must include specifying the types of offences that may justify surveillance, identifying the groups of people who might be surveilled, limiting surveillance duration, and outlining the procedures for data handling, sharing, and destruction.147 These requirements are integral to ensuring that surveillance measures are implemented judiciously and are particularly pertinent in the context of national security. Furthermore, the Court held that the legal framework must provide for thorough supervision of surveillance measures, including mechanisms for notification and legal redress.148 Oversight and supervision of secret surveillance are critical at various stages: when surveillance is ordered, during its execution, and after its conclusion.149 Given the clandestine nature of such surveillance, the Court advocated judicial oversight as the most effective means of ensuring independence, impartiality, and procedural propriety. According to the Court, notifying individuals post-surveillance becomes essential for allowing them to seek legal remedies and challenge the legality of the surveillance, thus acting as a safeguard against abuse.150

144 ibid para 247.

145 ibid.

146 ibid 248.

147 ibid 249.

148 ibid 249.

149 Rättvisa v. Sweden GC (n 7), para 250.

150 ibid 250,251.


While acknowledging that states possess a significant degree of discretion in determining the measures necessary for national security, the Court emphasized that this discretion is subject to stringent European scrutiny. This scrutiny ensures that surveillance systems designed to protect national security do not undermine democratic processes.151 The Court’s assessment focused on the comprehensive circumstances surrounding surveillance measures, including their authorization, scope, duration, and the safeguards in place to limit interference to what is strictly "necessary in a democratic society".152

4. The court on the need to develop case law

The Court acknowledged the necessity of bulk interception regimes given the evolving technological landscape and the increased sophistication of threats, maintaining that the decision to employ such regimes remains within state discretion.153 Despite the application of six minimum safeguards in earlier cases,154 the Court recognised that technological advancements have altered communication patterns, necessitating a re-evaluation of these safeguards due to the increased volume and changed nature of electronic communications and related data.155 Bulk interception, distinct from targeted interception, primarily focuses on international communications for purposes such as foreign intelligence gathering and cybersecurity, differing in scale and nature from previous cases.156 While bulk interception can target individuals through selectors like email addresses, the Court acknowledged that the process involves capturing communications en masse without directly monitoring devices.157 Considering the potential for abuse, the Court asserted that while bulk interception is permissible for protecting national interests, it requires a narrower margin of appreciation and adapted safeguards to ensure compliance with Article 8, reflecting its unique characteristics and stages.158

5. The court’s suggestion in bulk interception cases

151 ibid 252.

152 ibid 252,253.

153 ibid 254.

154 Case 27671/95 Valenzuela Contreras v Spain [1998] ECtHR Judgement para 46, Case 54934/00 Weber and Saravia v. Germany [2006] ECtHR Third Section Judgement para 95.

155 Rättvisa v. Sweden GC (n 7) paras 255,256.

156 ibid 257-259.

157 ibid 260.

158 Rättvisa v. Sweden GC (n 7) para 261.


The Court acknowledged that certain safeguards applicable to targeted interception are not directly transferable to bulk interception, such as defining specific offences or categories of individuals for interception. Instead, detailed domestic rules should outline the grounds and circumstances for authorising bulk interception, to ensure necessity and proportionality.159 The Court deemed supervision and review crucial, requiring end-to-end safeguards including independent authorisation, ongoing oversight, and post-operation review to mitigate the risks of abuse.160 The Court accepted that including all selectors in the authorisation may not be practical due to their volume and the need for flexibility, yet insisted on specifying types or categories of selectors, especially emphasising safeguards for strong selectors linked to identifiable individuals.161 Having considered this, the Grand Chamber held that an independent authority must supervise each interception stage, and that detailed records should be maintained to facilitate this.162

Thus, the ECtHR held that effective remedies should be available for anyone suspecting interception of their communications, with or without subsequent notification, emphasizing the need for proceedings before an independent body capable of issuing legally binding decisions.163 Moreover, the Court highlighted the need for specific safeguards when sharing intercepted material with foreign entities, ensuring convention-compliant collection and handling by receiving states.164 The Judges also noted that the acquisition of communications data, due to its potential for privacy intrusion, requires similar safeguards to those for content.165

6. The court’s concrete assessment

The Court thoroughly evaluated the Swedish signals intelligence system, affirming its lawful foundation and its alignment with the pursuit of legitimate objectives such as national security.166 The Court’s analysis centred on whether Sweden’s legislative framework provided accessible and foreseeable rules, together with the safeguards necessary in a democratic society, addressing the criteria previously established for

159 ibid 262.

160 ibid 263,264.

161 ibid 267-269.

162 ibid 270.

163 Rättvisa v. Sweden GC (n 7) paras 271,273.

164 ibid 276.

165 ibid 277,278.

166 Rättvisa v. Sweden GC (n 7) para 279.


appraising the compliance of bulk interception regimes.167 The Signals Intelligence Act (Lagen om signalspaning i försvarsunderrättelseverksamhet; 2008:717) of Sweden was found to distinctly outline the conditions under which bulk interception could be authorized, encompassing a range of national security threats and other significant concerns. This detail and clarity were deemed sufficient, particularly given the evolving and unpredictable nature of foreign threats.168 The Court praised the procedural mechanisms in place for the authorization, selection, examination, and use of intercepted data, particularly noting the essential role of the Foreign Intelligence Court in maintaining the legality and proportionality of intelligence activities.169

However, the Court identified crucial areas of concern, especially regarding the handling of intercepted material not containing personal data, the lack of a prerequisite to weigh individual privacy rights when sharing intelligence internationally, and the absence of a robust mechanism for post facto review. These deficiencies raised potential risks of disproportionate infringement on individuals’ privacy and correspondence rights under Article 8 of the Convention.170 Despite recognizing the Swedish regime’s safeguards, such as judicial pre-authorization and the oversight by an independent body, the Court concluded that these did not sufficiently mitigate the shortcomings identified. It stated that the Swedish bulk interception regime, although largely compliant with the Convention’s standards, exceeded the permissible margin of appreciation by not providing comprehensive safeguards against the potential for arbitrariness and abuse.171 This detailed assessment underscored the critical role of bulk interception for safeguarding national security, yet it also highlighted the necessity for such a regime to embed extensive, continuous safeguards.

These safeguards must ensure compliance with democratic values and the protection of individual rights as stipulated in the European Convention on Human Rights.172 The Court stressed that even though the Swedish model includes detailed legal rules and is narrowly scoped, providing for several protections, the judicial pre-authorization process and independent oversight alone were insufficient. The regime’s shortcomings, particularly in the context of data protection, intelligence sharing, and the effectiveness of post-surveillance review mechanisms, significantly impacted the regime’s compatibility with Article 8,

167 ibid 282.

168 ibid 284-287.

169 ibid 288-299.

170 ibid 326,330,364.

171 ibid 373.

172 ibid 365,367.


emphasizing the paramount importance of end-to-end safeguards in bulk interception systems to uphold the rule of law and prevent abuse.173

After considering the preceding application, the Court rejected the Government’s preliminary objections and held, “[...] by fifteen votes to two, that there has been a violation of Article 8 of the Convention – the right to respect for private and family life”.174 This judgement stands in opposition to the Court’s previous judgement of 2018, which denied the presence of a violation of Article 8 of the Convention.175 The judgment in this case thereby overruled the previous decision and affirmed the CFR’s claim.

7. An assessment of the ruling considering the judicial interests of the ECtHR

The judicial interest of the ECtHR in legislating on cases related to mass surveillance underlines a nuanced approach to balancing state security interests with individual privacy rights. The ECtHR’s decisions, notably in cases like Big Brother Watch and Others v. United Kingdom and Centrum för Rättvisa v. Sweden, 176 reveal a judiciary that is engaged in delineating acceptable state behaviour in the domain of surveillance within the framework of the ECHR.177 The Court’s interest in these cases goes beyond mere adjudication as it seeks to establish a comprehensive legal framework that can guide both national Courts and legislatures on the complex issues of mass surveillance. This judicial guidance is pivotal in an era where technological advancements have significantly outpaced the development of corresponding legal norms.

Left unregulated, these systems and technologies pose a threat to the fundamental freedoms and rights of citizens, not only in the European Union, as their power and influence over police and intelligence activities grows with every new ‘version’ and ‘update’. By setting forth criteria, safeguards, and especially precedents, the ECtHR endeavours to ensure that member states’ surveillance regimes are compatible with the ECHR, particularly Articles 8 and 10, which protect the right to private life and freedom of expression, respectively.178

173 Rättvisa v. Sweden GC (n 7) para 373.

174 ibid Reasons of the Court p 95.

175 Rättvisa v. Sweden 2018 (n 30), para 85.

176 Cases 58170/13, 62322/14 and 24960/15 Big Brother Watch and Others v. United Kingdom [2021] ECtHR Grand Chamber Judgement.

177 ibid.

178 Convention for the Protection of Human Rights and Fundamental Freedoms (European Convention on Human Rights, as amended) (ECHR) arts 8, 10.


The ECtHR’s ruling underscores a proceduralist approach, focusing on the safeguards necessary to make mass surveillance regimes compliant with the ECHR. This approach reflects the Court’s understanding that while states may have legitimate security interests that necessitate the use of mass surveillance, such practices must be executed within a legal framework that prevents abuse and protects fundamental rights. The development of criteria, such as those outlined in the Big Brother Watch case,179 signifies the Court’s proactive role in shaping the legal standards that govern mass surveillance, ensuring they are clear, precise, and subject to independent oversight and review.

Moreover, the ECtHR’s rulings highlight a judicial balancing act. On one hand, the Court recognizes the necessity of mass surveillance in protecting national security and combating serious crimes.180 On the other hand, it insists on the establishment of robust legal safeguards to mitigate the risks such surveillance poses to privacy and freedom of expression.181 This balance reflects the Court’s interest in preserving the integrity of the ECHR while acknowledging the realities of contemporary security challenges. The Court’s proceduralist stance underscores the complexities involved in adjudicating issues of mass surveillance.182 The ECtHR is aware of the varying capabilities and security needs of its member states, as well as the rapid evolution of technology. Its ruling, therefore, aims to provide a flexible yet principled legal framework that accommodates these variations while upholding the core values of the ECHR.183

In legislating through its judgments, the ECtHR plays a crucial role in the ongoing discourse on the right to privacy in the digital age. Its decisions serve not only as legal benchmarks for national surveillance laws but also as a forum for articulating the ethical and legal principles that should guide state conduct in the realm of mass surveillance. Through its nuanced approach, the Court seeks to ensure that the pursuit of security does not come at the expense of the fundamental rights and freedoms that the ECHR seeks to protect.

III. The effects and aftermath of the judgement on Swedish policy and intelligence structure

179 Case 24960/15 Big Brother Watch and Others v. United Kingdom [2021] ECtHR Grand Chamber Judgement.

180 Rättvisa v. Sweden GC (n 7), para 279.

181 ibid 365, 367.

182 The ECtHR's proceduralism focuses on assessing whether states have followed fair and transparent legal processes, prioritising the integrity of decision-making and the rights of individuals within these procedures rather than focusing solely on substantive outcomes or specific rights violations.

183 Rättvisa v. Sweden GC (n 7) para 236.


A. The government on 25 November 2021

The Swedish Government has addressed the European Court’s findings of a violation of Article 8 concerning the nation’s signals intelligence legislation, which lacked proper proportionality and safeguards. To rectify the identified shortcomings, including issues with the destruction of non-personal data, considerations for privacy when sharing intelligence, and the need for effective ex post facto review, Sweden has taken specific steps and proposed a plan of action.184

Compensation of EUR 52,625 was paid to the CFR for legal costs, marking the Government’s acknowledgement of the violation. Furthermore, Sweden initiated legislative reforms to improve the situation. A notable action includes the proposal of a new Act on Personal Data Processing at the National Defence Radio Establishment. This act mandates a thorough assessment before sharing intelligence with foreign entities, ensuring sufficient data protection is provided. It lays out criteria that must be satisfied for any personal data transfer, including the necessity of the transfer for the establishment’s duties, the absence of secrecy restrictions, and adequate protection guarantees by the recipient.185 As of the date of this article, this act has been implemented into Swedish law.186

Additionally, the Government committed to revising the 2008 Signals Intelligence Act to include considerations arising from the Court’s judgment. This includes an inquiry set for 2021–2025 to evaluate and recommend further legislative or procedural changes as needed.187 To ensure transparency and inform relevant bodies, the Government published the judgment and summaries both in English and Swedish on its website and distributed these materials to key authorities and institutions. The Swedish Government planned to provide an updated Action Plan by 25 May 2022, demonstrating ongoing efforts to align its signals intelligence practices with the requirements of the European Convention on Human Rights. This plan was duly submitted.188

B. The CFR on 17 December 2021

184 Communication from Sweden concerning the case of Centrum for rattvisa v. Sweden (Application No. 35252/08) [DH-DD (2021)1287], Secretariat of the Committee of Ministers (25 November 2021).

185 ibid.

186 Lag (2021:1172) om behandling av personuppgifter vid Försvarets radioanstalt (SFS 2021:1172) (Swed).

187 Communication from Sweden concerning the case of Centrum for rattvisa v. Sweden (Application No. 35252/08) [DH-DD (2021)1287], Secretariat of the Committee of Ministers (25 November 2021).

188 ibid.


The document at hand is the communication from the CFR concerning Sweden’s compliance with the judgment regarding its bulk interception surveillance regime.189 The CFR acknowledges the payment of just satisfaction by Sweden but criticizes the Government for its inadequate Action Plan and the lack of concrete measures to address the identified shortcomings.190 The communication emphasized that the Government proposed to appoint an inquiry without specifying a timeline, which Centrum för Rättvisa found insufficient. It contrasted Sweden’s response with the United Kingdom’s proactive approach in a similar case and criticized the expansion of Sweden’s surveillance capabilities without first addressing the Court’s concerns.191 The communication concluded with recommendations for the Committee of Ministers to ensure swift and concrete progress in executing the judgment, including the appointment of an inquiry by a specific deadline, ensuring practices align with Convention principles, and revising the Action Plan with detailed steps and a timeline for compliance.192

In summary, Centrum för Rättvisa called for urgent action by the Swedish authorities to rectify violations identified by the European Court of Human Rights concerning the bulk interception surveillance regime, criticizing the Government’s slow and inadequate response and the expansion of surveillance powers without necessary safeguards.193

C. The government on 27 May 2022 – second Action Plan

The updated Action Plan of 27 May 2022 reflected Sweden’s ongoing efforts to align its signals intelligence legislation with the European Court’s directives, addressing violations of Article 8 of the ECHR.194 This plan not only confirmed the completion of individual measures with the payment of just satisfaction (EUR 52,625) to the applicant but also elaborated on the advancement of general measures previously outlined.195

One significant update in this plan was the progression towards reviewing the 2008 Signals Intelligence Act. While the initial Action Plan highlighted the intention to conduct this review, the updated plan provides specifics about the inquiry’s setup, including the

189 Communication from the CFR (17/12/2021) in the case of Centrum for rattvisa v. Sweden (No. 35252/08) [DH-DD (2022)6], Secretariat of the Committee of Ministers (17 December 2021).

190 ibid.

191 ibid.

192 ibid.

193 ibid.

194 Communication from Sweden concerning the case of Centrum for rattvisa v. Sweden (Application No. 35252/08) [DH-DD (2022)575], Secretariat of the Committee of Ministers (27 May 2022).

195 ibid.


processing of terms of reference within the Swedish Government Offices and the inquiry’s scheduled start later in the year. This review was expected to be comprehensive, taking around two years to complete, including consultation, drafting a government bill, and amending legislation.196

Furthermore, the updated plan confirmed the implementation of the new Act on Personal Data Processing at the National Defence Radio Establishment as of 1 January 2022. This act addresses one of the Court’s main concerns by imposing stringent requirements on the transmission of intelligence material to foreign partners. The Government has provided translations of crucial sections of this act and related ordinances to give a complete understanding of the legislative framework now in place.197

Addressing the Court’s identified shortcomings, the updated plan reported a proactive measure by the National Defence Radio Establishment: the immediate destruction of irrelevant non-personal data collected under the Signals Intelligence Act, effective from 1 July 2021. This decision preceded a legal requirement and aims to mitigate risks to Article 8 rights, with oversight provided by the Foreign Intelligence Inspectorate.198

The updated plan also reiterated the commitment to addressing the remaining shortcomings through the legislative review process. This includes ensuring privacy considerations in intelligence sharing with foreign entities and establishing a robust ex post facto review mechanism, underscoring the independence of public authorities as per Sweden’s constitutional provisions.199 Regarding publication and dissemination efforts, the updated plan maintained the actions taken as outlined in the initial plan, ensuring that the judgment and related summaries are accessible to relevant authorities and the public.200

Conclusively, the updated Action Plan signified Sweden’s commitment to addressing the European Court’s concerns comprehensively, detailing the steps taken since the initial plan and the structured approach towards legislative and procedural reforms. It promised continuous updates and transparency with the Committee of Ministers, with the next update

196 Communication from Sweden concerning the case of Centrum for rattvisa v. Sweden (Application No. 35252/08) [DH-DD (2022)575], Secretariat of the Committee of Ministers (27 May 2022).

197 ibid.

198 ibid.

199 ibid.

200 ibid.


scheduled for 25 November 2022, reflecting a proactive and methodical response to the Court’s directives.201

D. The government on 25 November 2022 – third Action Plan

The third Action Plan submitted by the Swedish Government on 25 November 2022 represented another step in the process of addressing the European Court’s concerns regarding violations of Article 8 of the Convention by Sweden’s signals intelligence legislation.202 This plan followed previous submissions of 25 November 2021 and 27 May 2022, providing updates on measures taken to comply with the Court’s judgment. Both the second and the third Action Plans confirm the payment of just satisfaction (EUR 52,625) to the applicant, noting that this resolves the individual measures required by the Court’s judgment.

In terms of general measures, the inquiry into the Signals Intelligence Act was a significant focus. The second plan had announced that the terms of reference for the inquiry were being processed, with the inquiry expected to start later in the year. The third plan updated that the Government officially appointed an inquiry on 14 July 2022, titled “Översyn av lagen om signalspaning i försvarsunderrättelseverksamhet” (Dir. 2022:120) (Review of the law on signals intelligence in defence intelligence activities), indicating progress in addressing the legislative shortcomings identified by the Court. This inquiry is tasked with ensuring the Signals Intelligence Act is modern and appropriate, and that it considers technical and international developments, security, defence, and the protection of personal integrity.203

Regarding the implementation of new legal provisions, the second plan confirmed the implementation of the Act on Personal Data Processing at the National Defence Radio Establishment as of 1 January 2022, addressing some of the Court’s concerns regarding the transfer of personal data to foreign entities. The third plan reiterated the implementation of this Act and highlighted that further analysis is required to address the remaining shortcomings identified by the Court. The timeline and reporting have been more structured in the third

201 Communication from Sweden concerning the case of Centrum for rattvisa v. Sweden (Application No. 35252/08) [DH-DD (2022)575], Secretariat of the Committee of Ministers (27 May 2022).

202 Communication from Sweden concerning the case of Centrum for rattvisa v. Sweden (Application No. 35252/08) [DH-DD (2022)1314], Secretariat of the Committee of Ministers (25 November 2022).

203 ibid.


plan, providing specific milestones for the inquiry, with a first interim report due by 1 April 2023 and the final report by 1 April 2024.204

This detailed a structured timeline for proposed legislative amendments and consultations, in contrast to the second plan’s estimate of approximately two years for the inquiry to conclude. Both plans mentioned the continued effort to publish and distribute the Court’s judgment and summaries to relevant authorities, courts, and the public, with no significant updates in the approach or actions taken between the second and third plans. While both plans concluded that all necessary individual measures had been taken, the third Action Plan promised to submit updated information in the next Action Plan by 25 May 2023, indicating an ongoing commitment to addressing general measures and legislative shortcomings.205

In essence, the third Action Plan built on the foundation laid by its predecessors by formalizing the inquiry into the Signals Intelligence Act and setting clear deadlines for reporting and recommendations. It acknowledges the implementation of significant legal provisions while recognizing the need for further analysis and action to fully comply with the Court’s directives, marking a continued commitment to legal and procedural reforms.206

IV. Conclusion

Following the ruling in the case Centrum för Rättvisa versus The Kingdom of Sweden by the European Court of Human Rights, significant developments and adjustments have occurred within Swedish policymaking, specifically concerning the practices and regulations of signals intelligence and surveillance, to ensure compliance with human rights standards. The Court identified violations of Article 8 of the Convention, necessitating Sweden to undertake several legislative and procedural adjustments to align its surveillance framework with the European Convention on Human Rights.

In response to the Court’s decision, Sweden initiated comprehensive legislative reforms to enhance privacy protections. This includes the implementation of the Act on Personal Data Processing at the National Defence Radio Establishment, requiring a detailed assessment before sharing intelligence with foreign entities to ensure robust data protection.

204 Communication from Sweden concerning the case of Centrum for rattvisa v. Sweden (Application No. 35252/08) [DH-DD (2022)1314], Secretariat of the Committee of Ministers (25 November 2022).

205 ibid.

206 ibid.


Additionally, the Swedish Government initiated a thorough review of the 2008 Signals Intelligence Act to rectify the shortcomings highlighted by the ECtHR. This review aims to modernise the legislative framework, considering both technological advancements and the need to balance security imperatives with the protection of individual rights.

A notable step towards increased transparency and oversight was the publication and distribution of the ECtHR’s judgment and summaries in both English and Swedish, making these materials accessible to relevant authorities, institutions, and the public. Furthermore, the National Defence Radio Establishment has started the immediate destruction of irrelevant non-personal data collected under the Signals Intelligence Act, a proactive measure to mitigate risks to the rights protected under Article 8 of the Convention.

The Swedish Government’s structured inquiry and reporting process demonstrates a systematic and transparent approach to addressing the ECtHR’s concerns. This process includes specific timelines for the inquiry into the Signals Intelligence Act, aiming for comprehensive legislative amendments that further safeguard privacy and reduce the risk of arbitrary surveillance. The payment of compensation to the applicant for legal costs signifies Sweden’s acknowledgement of the violation and its commitment to rectifying the issues identified by the Court.

Enhanced data protection measures and the legislative review process represent significant policy adjustments in Sweden’s approach to signals intelligence. These measures are designed to ensure careful consideration before transferring intelligence material to foreign partners and to establish effective mechanisms for reviewing and amending the legislative framework governing surveillance practices. Sweden’s commitment to continuous improvement is evident in its plans for further updates to the Action Plan, underscoring an ongoing effort to align its surveillance practices with the principles of the European Convention on Human Rights.

Overall, the developments following Centrum för Rättvisa versus The Kingdom of Sweden underscore Sweden’s efforts to recalibrate its surveillance framework. These adjustments highlight the country’s dedication to creating a balanced framework that respects individual rights while addressing national security concerns, demonstrating an engagement with the European Court’s directives to enhance the legal and procedural foundations of its signals intelligence operations, thereby preventing future violations of Article 8 of the Convention.


Children’s Rights in a Digital Environment

Abstract

This note explores the widespread influence of technology in children’s lives and addresses the complicated connection between digital technology and the rights of the child, drawing on insights from General Comment No. 25 (2021) on Children’s Rights in Relation to the Digital Environment and other legal instruments and sources. This Comment is important as it lays out the steps necessary to protect, respect, and fulfil children’s rights in the digital environment. The piece underlines the growing hazards of cybercrimes and privacy breaches while acknowledging the advantages of digital participation, since individuals under the age of eighteen account for one-third of worldwide Internet use. The note examines the need for effective implementation of the Convention on the Rights of the Child in the world of technology and calls for legislation, regulations, and safety procedures. The piece contends that guaranteeing minors’ rights in the digital realm requires establishing an appropriate balance between advances in technology and the welfare of children, calling for a safe digital environment that prioritizes the well-being of the youth.

* LL.B. Candidate, International and European Law Programme, The Hague University of Applied Sciences.

I. Introduction

Digital technology pervades our surroundings, constituting an integral aspect of daily life. Over the years, technological advancements have reached an unprecedented level, leading to a reliance on technology for various purposes. A notable segment of this dependency involves individuals under the age of 18, who have become digital citizens alongside countless others actively engaging with digital technology. Statistics show that one third of the world’s internet users are children under the age of eighteen, who use the internet for education, entertainment, and communication,1 but who are thereby also exposed to infringements of their rights.2

II. Digital abuses

The internet has become an integral part of our lives, thanks to advances in digital technology. With more people going online than ever before, we have seen a surge in internet activity. However, along with the benefits, there is also a darker side to this digital revolution. Cybercrimes, cybersex trafficking, privacy breaches and more have become all too common in this digital age.3

What is concerning is that many children are not adequately educated about the potential dangers of navigating the web. Without proper guidance, they are vulnerable to targeted attacks and exploitation online. However, by empowering children with the knowledge and skills to navigate the digital world safely, we can help them avoid falling victim to online threats and ensure their well-being in the digital age.

A. Cybercrimes

Regrettably, children, being still unfamiliar with the harsh realities of the world, tend to be gullible and naive. Consequently, they often employ weak passwords, which they may reuse across various platforms, and carelessly divulge them to friends. Such exchanges of sensitive information through online channels inadvertently facilitate cybercrimes for hackers

1 ‘Growing up in a Connected World’ (UNICEF Office of Research – Innocenti, 2019) <https://www.unicef-irc.org/publications/pdf/GKO%20Summary%20Report.pdf> accessed 6 February 2024.

2 ‘Children and Grooming / Online Predators’ (Child Crime Prevention & Safety Center) <https://childsafety.losangelescriminallawyer.pro/children-and-grooming-online-predators.html> accessed 6 February 2024.

3 Ministerie van Justitie en Veiligheid, ‘Forms of Cybercrime’ (Government.nl, 2016) <https://www.government.nl/topics/cybercrime/forms-of-cybercrime> accessed 29 April 2024.

and cybercriminals.4 Exploiting this vulnerability, hackers can engage in activities such as doxing (publicly releasing private information or embarrassing pictures), impersonating children, and perpetrating scams.5 Furthermore, owing to children’s lack of experience with financial matters and their inherent trust in others, they may readily share private details such as addresses, phone numbers, and even credit card information. This poses a significant risk of identity theft and privacy breaches.6

B. Cyber-sex trafficking

Vulnerable individuals are the ones most commonly targeted by cybercriminals.7 In our day and age, minors start using the internet at the age of six, with every third child being exposed to something traumatic or disturbing.8 The digital realm has become an open space in which predators can lurk on children.9 Cyber sexual abuse crimes encompass a non-exhaustive list of activities, including but not limited to online grooming, non-consensual exploitation through sexual pictures and videos, and revenge porn.10 With all these arising issues, legal systems cannot keep pace with the fast-evolving and sophisticated development of cybercrimes.11 That is why regulations need to be set in place.

C. Privacy breaches

As previously stated, children do not understand the danger of sharing private information within the digital realm, leading to disclosures of personal data that could be exploited.12 If the online platform where the child’s data is stored experiences a breach, their data could

4 Sofia Kaufman, ‘Online Gaming Risks: Are Your Children Susceptible?’ (RSS, 10 July 2023) <https://www.aura.com/learn/online-gaming-risks#:~:text=While%20gaming%20online%2C%20your%20kids,could%20lead%20to%20offline%20dangers> accessed 14 April 2024.

5 ibid.

6 ibid, 3.

7 ‘Cyber Sexual Abuse & Human Trafficking: Understanding the Nexus’ (Working with BJA NTTAC, 17 November 2023) <https://bjatta.bja.ojp.gov/media/event/cyber-sexual-abuse-human-trafficking-understanding-nexus> accessed 14 April 2024.

8 ‘Minors at Risk of Cyber Trafficking’ (MARRI) <https://toolboxes.marri-rc.org.mk/tips/minors-at-risk-of-cyber-trafficking/> accessed 14 April 2024.

9 ibid.

10 ‘Cyber Sexual Abuse & Human Trafficking: Understanding the Nexus’ (Working with BJA NTTAC, 17 November 2023) <https://bjatta.bja.ojp.gov/media/event/cyber-sexual-abuse-human-trafficking-understanding-nexus> accessed 14 April 2024.

11 ibid.

12 Rightly, ‘Children Can Be Vulnerable from Misuse of Their Online Data’ (Rightly – Champions of Data, 26 July 2023) <https://right.ly/our-views-and-opinions/keeping-childrens-data-safe/> accessed 14 April 2024.

be leaked to those who will misuse it, causing various privacy concerns.13 Furthermore, data breaches affecting children often stem from preventable factors such as compromised passwords.14 Many users rely on a single, easily guessed password, often based on personal information like birthdays or pet names.15

Additionally, application vulnerabilities pose significant risks, as software can be exploited by cyber attackers.16 Malware, another common threat, is often deployed through social engineering tactics or by exploiting software vulnerabilities, posing a serious risk to target systems.17

III. Scope of the Convention

The Convention on the Rights of the Child defines a child as any human being below the age of eighteen years (Article 1).18 Its primary objective is to institute safeguards and provide special care for children, recognizing their mental and physical immaturity and extending protection to them before and after birth (Preamble).19 In doing so, the Convention takes into consideration the significance of a child’s traditions and cultures, acknowledging them as integral elements of their development, and it pursues this goal through international cooperation (Preamble).20 All States party to the Convention must respect and enforce the rights of the child found within it.21

IV. Digital environment

The Committee on the Rights of the Child has commented (General Comment No. 25 (2021)) on the need for State parties to apply the Convention effectively to protect children’s rights in digital environments. This involves implementing laws, policies, and other measures designed to ensure robust compliance with the Convention and its Optional Protocols (para 7).22 By doing so, infringements of children’s rights could be significantly reduced. The Committee

13 ibid.

14 ‘Data Breach: Examples, Causes, and How to Prevent the next Breach’ (HackerOne) <https://www.hackerone.com/knowledge-center/data-breach-examples-causes-and-how-prevent-next-breach> accessed 14 April 2024.

15 ibid.

16 ibid, 13.

17 ibid, 13.

18 Convention on the Rights of the Child (adopted 20 November 1989, entered into force 2 September 1990) (CRC) art 1.

19 CRC (n 3) Preamble.

20 ibid.

21 ibid.

22 UNCRC ‘General Comment 25 on children’s rights in relation to the digital environment’ (2 March 2021) UN Doc CRC/C/GC/25.

reached its conclusion after consulting with over 700 children from 28 different countries, ensuring that their experiences were directly integrated into the drafting process.23 This inclusive approach aimed to reflect the concerns of these children regarding the digital environment.24

Moreover, the Committee conducted reviews of state parties’ reports on their compliance with children’s rights in the digital sphere, thereby gathering valuable information on existing challenges and potential practices.25 Furthermore, the Committee analysed previous decisions and interpretations by human rights treaty bodies to establish legal precedents and sought input from state experts to ensure a comprehensive understanding of the issues at hand.26

A. Best interests of the child

Article 3 of the Convention provides that in all actions concerning children, their best interests shall be a primary consideration, as this is necessary for their well-being.27 State parties should reinforce this by conforming to the standards.28 Although the digital environment has instituted various safeguards aimed at protecting children, such as age verification mechanisms, it does not effectively shield them from potential harm, as minors can readily falsify their age and input a fictitious birthdate. More restrictive safeguards are needed to protect minors. An example would be to ask for identification to prove one’s age.

Such safeguards, or similar ones, have already been set out in multiple countries. One example is the United Kingdom, where the Information Commissioner published 15 standards that online services should meet in order to protect children and their rights – the Age-Appropriate Design Code (hereinafter ‘the Code’).29 The Code was inspired by the General Data Protection Regulation (GDPR).30 The Code is the first of its kind, but the approach is emerging globally and is already being considered in the United States and Europe.31 The Code will require digital services to automatically provide children with a built-in

23 ibid.

24 ibid.

25 ibid.

26 ibid.

27 CRC (n 3) art 3.

28 ibid.

29 ‘15 Ways You Can Protect Children Online’ (ICO) <https://ico.org.uk/for-organisations/advice-for-small-organisations/whats-new/news/15-ways-you-can-protect-children-online/#> accessed 14 April 2024.

30 ibid.

31 ibid.

baseline of data protection whenever they download an app, a new game or visit any kind of online platform, keeping the privacy settings high by default.32
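To make the Code’s ‘high privacy by default’ standard concrete, the following minimal sketch (in Python) illustrates what such built-in defaults could look like in practice. It is a hypothetical illustration only: the setting names and the function are assumptions made for this example, not terms taken from the Code itself.

from dataclasses import dataclass

@dataclass
class ChildProfileSettings:
    # Hypothetical privacy settings for a child's account. Following the
    # 'high privacy by default' idea, every field defaults to its most
    # protective value; protection never depends on the child opting in.
    profile_public: bool = False             # profile private by default
    geolocation_enabled: bool = False        # location sharing off by default
    personalised_ads: bool = False           # behavioural advertising off by default
    share_data_with_third_parties: bool = False

def settings_for_new_child_account() -> ChildProfileSettings:
    # A new child account receives the protective baseline automatically,
    # without any action required from the child.
    return ChildProfileSettings()

The design point the Code gestures at is that safety should not depend on a child finding and changing settings: the most protective configuration is the one that applies when nothing is done.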

B. Right to life, survival, and development

The internet should not harm children; it should help them grow, develop, and learn.33 Technology is now a significant factor that continuously plays a role in children’s lives. Article 6 of the Convention states that all children have the right to life and that States should ensure child survival and development by preventing unnecessary risks (para 15).34 The digital environment in the 21st century is used as a learning tool, and children’s cognitive, emotional, and social development can be negatively impacted through inappropriate exposure.35 Caregivers, educators, and other individuals involved in the life of the child should receive training on the proper use of digital devices for children and the impact these can have on the crucial development stages of childhood and adolescence (para 15).36

Technological advances have reached a point at which we can no longer abandon their use. The digital tools now used in a child’s life are multi-layered. On the educational front there are interactive learning, e-books, virtual classrooms, and more, providing learning opportunities and educational advancements that engage children in a way the ‘old way’ no longer can.37 We are opening doors to communication and collaboration globally.38 But of course, there are concerns and worries about the internet and being online: cyberbullying, online predators, hackers, and so on. Despite these risks, eliminating such tools would in fact be detrimental to our current generation and those to come.

C. Right to privacy

32 ibid.

33 UNCRC (n 6).

34 CRC (n 3) art 6.

35 UNCRC (n 6).

36 ibid.

37 ‘7 Reasons Why Students Need Technology in the Classroom’ (Explorance) <https://explorance.com/blog/7-reasons-students-need-technology-classroom/#:~:text=Access%20to%20information%20and%20resources,be%20available%20in%20traditional%20textbooks> accessed 14 April 2024.

38 ibid.

Privacy is vital for the safety of a child, and threats may arise from the collection of personal data.39 This data may include sensitive information such as a child’s identity or location (para 68).40 Interference with this right is acceptable only when it is neither arbitrary nor unlawful, serves a legitimate purpose, and is proportionate (para 69).41 States should mandate the incorporation of privacy-by-design principles in digital products and services impacting children (para 70).43 The OECD Recommendation on Children in the Digital Environment reinforces this argument, as the OECD promotes the adoption of measures to provide age-appropriate child safety by design.44 It does so by emphasizing the need to develop technology that safeguards children from accessing inappropriate online content, considering their age and maturity level,45 and by informing stakeholders with clear information as to the trustworthiness, quality, user-friendliness, and privacy-by-design of such technologies.46

Moreover, regular assessments of privacy and data protection legislation should be conducted, with a focus on establishing procedures and practices that deter both intentional violations and inadvertent breaches of children’s privacy (para 70).47 For example, Article 8 of the GDPR sets out safeguards for the child: where information society services are offered directly to a child, processing of the child’s personal data on the basis of consent is lawful only if the child is at least 16 years of age; if the child is younger, consent must be given or authorised by the holder of parental responsibility.48 Said responsibility falls on the controller, who must make reasonable efforts to verify the age of the child or to make sure that such consent has been given.49
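As an illustration of how a service might operationalise this rule, the following is a minimal Python sketch of an Article 8-style consent check. It is a hypothetical example: the function and parameter names are the author’s own, and the 16-year threshold is the Article 8(1) default (Member States may provide for a lower age, though not below 13).

from datetime import date
from typing import Optional

DIGITAL_CONSENT_AGE = 16  # Article 8(1) default; Member States may lower it, not below 13

def age_on(birth_date: date, today: date) -> int:
    # Whole years between birth_date and today.
    years = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1  # birthday not yet reached this year
    return years

def consent_is_lawful(birth_date: date,
                      child_consented: bool,
                      guardian_authorised: bool,
                      today: Optional[date] = None) -> bool:
    # Consent-based processing check for an information society service
    # offered directly to a child (hypothetical Article 8 sketch).
    today = today or date.today()
    if age_on(birth_date, today) >= DIGITAL_CONSENT_AGE:
        return child_consented
    # Below the threshold, consent must be given or authorised by the
    # holder of parental responsibility.
    return guardian_authorised

In practice, the difficult part is the controller’s ‘reasonable efforts’ obligation: verifying the birth date and the guardian’s authority is precisely where self-declared ages fail, as noted above.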

D. Access to information

39 ‘Children’s rights in relation to the digital environment’ <https://docstore.ohchr.org/SelfServices/FilesHandler.ashx?enc=6QkG1d/PPRiCAqhKb7yhsqIkirKQZLK2M58RF/5F0vEG+cAAx34gC78FwvnmZXGFUl9nJBDpKR1dfKekJxW2w9nNryRsgArkTJgKelqeZwK9WXzMkZRZd37nLN1bFc2t> accessed 14 April 2024.

40 ibid.

41 ibid.


43 ibid.

44 Organisation for Economic Co-operation and Development (OECD), 'Recommendation of the Council on Children in the Digital Environment' (Published 16 February 2012, amended 31 May 2021) OECD/LEGAL/0389. Art 5.

45 ibid 5(a).

46 ibid 5(b).

47 ibid.

48 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1, art 8(1).

49 ibid art 8(2).

In tension with what was said previously regarding the right to life, survival, and development, State parties should also give effect to Article 17 of the Convention, which entitles children to exercise their right of access to information (para 50).50 This right should be limited only when legally necessary and in line with the Convention.

The development of age-appropriate digital content for minors is encouraged to ensure diverse access to information. At the same time, the Committee stresses the importance of protecting children from harmful content, such as discriminatory or exploitative information, while promoting guidelines for digital service providers and content moderators (para 51).51 The Digital Services Act (DSA) regulates these online intermediaries and platforms, with the goals of preventing illegal and harmful activities online and the spread of disinformation.52

V. The role of organisations

A. Civil society organisations

Civil Society Organisations (CSOs), also known as Non-Governmental Organisations (NGOs), play a vital role in creating protective environments for children by engaging with communities in proactive conversations about child rights.53 Through workshops, community events, campaigns, and volunteering efforts, CSOs and NGOs raise awareness and empower individuals to contribute to the holistic protection of children.54 By fostering partnerships and collaboration within communities, these organisations promote a culture of child safeguarding and advocate for policies and practices that prioritize the well-being of children.55

B. Child/human rights institutions

In accordance with General Comment No. 2 (2002) regarding the Convention on the Rights of the Child (hereafter the Convention), National Human Rights Institutions (NHRIs)

50 ibid.

51 ibid.

52 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L 277/50, art 28.

53 Arain S, ‘Empowering Futures: The Vital Role of Local NGOs and CSOs in Child Protection in Pakistan’ <https://www.linkedin.com/pulse/empowering-futures-vital-role-local-ngos-csos-child-protection-arain-rfnrf#:~:text=NGOs%20and%20CSOs%20engage%20communities,the%20well%2Dbeing%20of%20children> accessed 14 April 2024.

54 ibid.

55 ibid.

are essential to promote and protect the rights of the child through the implementation of the Convention by the States that have chosen to ratify it.56 Article 4 of the Convention on the Rights of the Child states that parties must “undertake all appropriate legislative, administrative and other measures for the implementation of the rights recognized in the present Convention”.57 The Convention embodies the idea that every child should be recognized, respected, and protected as a rights holder and a human being.58 Since the Convention, many developing countries have passed laws to protect children, and various regional and national programmes have been launched to address issues like child trafficking.59 Recognizing children’s right to protection has raised awareness about their vulnerability and prompted significant efforts to combat neglect and abuse, including addressing potential abuses by corrupt or illegitimate child service providers and criminal organisations.60

VI. Solutions

In order to protect children’s rights in the digital environment, states, as part of their legal responsibility, should safeguard children against harmful content and digital abuse.61 This can be done through various means, such as strategic parenting approaches and policymaking.62 There need to be accessible channels through which children can, if necessary, lodge complaints or reports when they suspect breaches of their rights.63 Children should also be given the tools to decide for themselves how others may collect and use their data, alongside being made aware of digital risks and rights.64

A. Parenting approaches

56 ‘Children’s rights in relation to the digital environment’ <http://docstore.ohchr.org/SelfServices/FilesHandler.ashx?enc=6QkG1d%2FPPRiCAqhKb7yhsiQql8gX5Zxh0cQqSRzx6ZcNR3YdFwaRoLFkDFvNRlVoE9r590QoHaQTQRwonqARWV9Blutv2Nz3ITQ%2BFebW%2BlOKrOPw9z5qNBGnjUDapSbL> accessed 14 April 2024.

57 Convention on the Rights of the Child (adopted 20 November 1989, entered into force 2 September 1990) (CRC) art 4.

58 ‘15 Ways You Can Protect Children Online’ (ICO) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/childrens-information/childrens-code-guidance-and-resources/how-to-use-our-guidance-for-standard-one-best-interests-of-the-child/the-united-nations-convention-on-the-rights-of-the-child/#:~:text=The%20UNCRC%20embodies%20the%20idea,been%20transformed%20in%20many%20areas> accessed 14 April 2024.

59 Sharp J, ‘Thirty Years since the Convention on the Rights of the Child: Children’s Rights Improvements and Problems All over the World’ (Humanium, 19 November 2019) <https://www.humanium.org/en/thirty-years-since-the-convention-on-the-rights-of-the-child-childrens-rights-improvements-and-problems-all-over-the-world/> accessed 14 April 2024.

60 ibid.

61 ‘Briefing: Children’s Rights in the Digital Age’ (Child Rights International Network) <https://home.crin.org/issues/digital-rights/childrens-right-digital-age> accessed 12 February 2024.

62 ibid.

63 ibid.

64 ibid.

UNICEF offers parents various strategies to raise awareness about online dangers with their children. One effective approach for parents involves setting clear rules and having honest discussions about internet safety.65 Children need to understand that everything they share online leaves a digital footprint.66 Parents should emphasize the permanence of online content and the importance of responsible behaviour.67

Additionally, parents should promote values of respect and fairness, teaching children that discriminatory or inappropriate communication is unacceptable.68 Monitoring children’s devices, and ensuring they are kept updated with privacy settings configured, is crucial for protecting their rights.69 Tools like safe search, and covering the camera when not in use, can further enhance children’s online safety.70

B. Policy making

Policy making is essential for safeguarding and protecting children (from intentional and unintentional harm), as it provides guidelines on how to prevent abuse and mitigate potential harm.71 These policies, such as the Child Safeguarding Standards,72 outline procedures for reporting issues and responding appropriately to various situations.73 By informing children and parents, these policies offer valuable guidance on steps to take when faced with such challenges.74 Typically available online and easily accessible, such policies contain comprehensive information on handling relevant situations.75 They include protocols for safe recruitment, data protection, complaint procedures, and reporting mechanisms.76

C. Avenues for complaints and reports

65 ‘How to Keep Your Child Safe Online’ (UNICEF Parenting) <https://www.unicef.org/parenting/child-care/keep-your-child-safe-online> accessed 14 April 2024.

66 ibid.

67 ibid.

68 ibid.

69 ibid.

70 ibid.

71 ‘Justice and consumers – child protection policies: What they are? Why they are so important? When they are necessary?’ European Commission (14 January 2020) <https://ec.europa.eu/newsroom/just/items/666497> accessed 14 April 2024.

72 The International Child Safeguarding Standards <https://commission.europa.eu/system/files/2023-10/KCSCS-Standards-ENG-200218.pdf> accessed 29 April 2024.

73 ‘Justice and consumers – child protection policies: What they are? Why they are so important? When they are necessary?’ European Commission (14 January 2020) <https://ec.europa.eu/newsroom/just/items/666497> accessed 14 April 2024.

74 ibid.

75 ibid.

76 ibid.

Plans are already in place, such as the complaint mechanisms proposed by the Children’s Rights Alliance, to implement improved mechanisms for children to report online abuses. However, the Children’s Rights Alliance – United Voices for Children highlights the necessity of an individual complaint mechanism due to the absence of provisions in the ‘general scheme’ to address harmful online content.77 While the legislation defines such content, it fails to offer a solution for victims exposed to its dangers.78 Currently, the ‘general scheme’ only outlines safety codes to protect children, without establishing a direct avenue for individuals to appeal in case of breaches in digital safety standards.79 This omission falls short of guaranteeing a child’s right to an effective remedy.80

An individual complaint mechanism would bridge this gap. It would allow victims to report harmful content, have the report undergo a preliminary examination, and, if necessary, prompt an investigation that could lead to the issuance of take-down orders if the platform fails to comply, as sketched below.81 A similar mechanism has been installed by National Human Rights Institutions (NHRIs): child-friendly complaint mechanisms put in place to provide children with tools that support child-friendly practices online.82
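The workflow described above can be summarised as a simple state machine. The Python sketch below is purely illustrative: the stage names and decision points reflect the author’s reading of the proposed mechanism, not a specification from the Children’s Rights Alliance paper.

from enum import Enum, auto

class ComplaintStage(Enum):
    RECEIVED = auto()
    PRELIMINARY_EXAMINATION = auto()
    INVESTIGATION = auto()
    TAKE_DOWN_ORDER = auto()
    CLOSED = auto()

def advance(stage: ComplaintStage,
            admissible: bool = False,
            violation_found: bool = False,
            platform_complied: bool = False) -> ComplaintStage:
    # Move a complaint one step through the hypothetical workflow.
    if stage is ComplaintStage.RECEIVED:
        return ComplaintStage.PRELIMINARY_EXAMINATION
    if stage is ComplaintStage.PRELIMINARY_EXAMINATION:
        return ComplaintStage.INVESTIGATION if admissible else ComplaintStage.CLOSED
    if stage is ComplaintStage.INVESTIGATION:
        # A take-down order issues only where a violation is found and the
        # platform has not remedied it voluntarily.
        if violation_found and not platform_complied:
            return ComplaintStage.TAKE_DOWN_ORDER
        return ComplaintStage.CLOSED
    return ComplaintStage.CLOSED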

D. Educational awareness

What children learn in school profoundly influences their daily learning experiences and knowledge acquisition. It is therefore imperative to embed proper education within the curriculum on how to be online safely and use different platforms. By implementing a few lessons within educational systems where children can learn about online risks and how to tackle them, a significant reduction in the number of children who fall victim to online abuses could potentially be seen. Children would benefit greatly from practising caution online and understanding the importance of being selective about their online activities and interactions. This includes being mindful of what they share,

77 ‘Press release: Individual complaints mechanism needed to protect children and young people online’ (Children’s Rights Alliance) <https://childrensrights.ie/wp-content/uploads/2023/05/123-Online-Safety-Individual-Complaints-Mechanism-Concept-Paper.pdf> accessed 14 April 2024.

78 ibid.

79 ibid.

80 ibid.

81 ibid.

82 ‘Child-Friendly Complaint Mechanisms’ (UNICEF) <https://www.unicef.org/eca/sites/unicef.org.eca/files/2019-02/NHRI_ComplaintMechanisms.pdf> accessed 29 April 2024.

avoiding suspicious links and attachments, and recognizing when interactions may be concerning, so that they can raise any concerns they may have with adults.83

Insufficient education about internet safety for children can result in adverse consequences. For instance, exposure to harmful online content like violence, harassment, or age-inappropriate material can lead to psychological effects.84 Children may experience anxiety, diminished self-esteem, and other related challenges.85 Moreover, negative social influences may disrupt children's relationships within the family and among friends, exacerbating communication difficulties and fostering negative behavioural patterns.86

VII. Conclusion

In conclusion, safeguarding children's rights in the digital age necessitates a comprehensive approach that balances the opportunities and risks presented by technology. While digital platforms offer immense potential for learning and growth, they also pose significant threats to children’s privacy, safety, and well-being. Addressing these challenges requires collaborative efforts from governments, civil society organisations, technology companies, and individuals alike.

Effective protection measures must prioritize children's best interests and incorporate privacy-by-design principles into digital products and services. Robust policies and regulations, coupled with parental guidance and educational initiatives, are essential for promoting responsible digital citizenship and mitigating online risks.

By working together to implement proactive measures and foster a culture of digital responsibility, a safer online environment can be established where children can explore, learn, and interact without compromising their rights or safety. By upholding our collective responsibility, the well-being and future prospects of the youngest members of our digital community will be ensured.

83 ‘How to Teach Children about Online Data Privacy’ (Ghostery) <https://www.ghostery.com/blog/how-to-teach-your-children-about-online-data-privacy#:~:text=Practice%20caution%20online%3A%20Children%20should,or%20behavior%20they%20encounter%20online> accessed 14 April 2024.

84 ‘Educate children to be discerning and avoid harmful content on the internet’, Logiscool Vietnam <https://www.logiscool.com/en-vn/blog/education/educate-children-to-be-discerning-harmful-content> accessed 29 April 2024.

85 ibid.

86 ibid.
