Artificial Intelligence and Human Rights: Contemporary and Future Problems Caused by the Rise of AI

By Leesha Curtis, JS Law and Business & Eoin Jackson, SS Law

Introduction

In recent years, artificial intelligence (AI) has developed beyond a mere figment of science fiction and is now a key part of business and society. AI is developing faster than it can be regulated, with legal frameworks becoming obsolete almost as soon as software is updated. The resulting lack of human rights protections has left consumers open to breaches of their privacy rights and even to discrimination. Furthermore, as AI begins to mirror human traits, does it itself become entitled to human rights protections? The unprecedented growth of AI needs to be reconciled with robust human rights frameworks to prevent injustice. While AI has provided us with countless opportunities and more efficient operations, its impact on human rights should not be underestimated.

Contemporary Human Rights Issues with AI

In an increasingly virtual market, AI is synonymous with competitiveness. For many businesses, success is now intertwined with their ability to embed AI in their operations. However, with AI comes data analytics, and with data analytics come privacy concerns. To maximise AI’s profit-making abilities, businesses need to harvest vast amounts of consumer data. Consequently, profit-making has become data-driven, with serious implications for the privacy rights of consumers, as regulators are unable to keep up with the increasing digitalisation of industries.

Data has been described as the “new oil” and is now akin to currency, with consumers divulging their data in exchange for personalised experiences. On a superficial level, this seems mutually beneficial: businesses are positioned to provide better services, boosting sales, while consumers enjoy superior experiences. This can be seen in Netflix and Spotify using data and algorithms to create personalised recommendations for users.

However, at what point does personalisation become an infringement on privacy rights?

The implications of big data analytics became particularly stark in 2016, when they manifested themselves in the political sphere. Facebook’s involvement in the Cambridge Analytica scandal is a prime example of the weaponisation of AI. Here, the data of millions of Facebook users was collected and used for political advertising in the US presidential election and, allegedly, the Brexit referendum. While Facebook was penalised with heavy fines, robust human rights frameworks are a more appropriate means of protecting privacy rights. Such frameworks are needed to ensure businesses view human rights protection as a necessary part of value creation rather than an obstacle to innovation.

The use of monetary penalties for such breaches demonstrates how underdeveloped this area of law is. At present, the closest we have to robust protection is the General Data Protection Regulation (GDPR). The Irish Data Protection Commission recently fined WhatsApp for a lack of transparency under Articles 5(1)(a) and 12-14 of the GDPR. These penalties demonstrate governments’ willingness to engage with these issues. Unfortunately, such penalties are often viewed by companies as the “cost of doing business.” The current AI race acts as a perverse incentive for social media platforms to harvest their users’ data. Such penalties therefore fail to go to the heart of the issue, as big tech companies view them as a mere hurdle on the path to profit maximisation. This was illustrated when Facebook was once again at the centre of controversy several months ago. Whistleblower and former Facebook data scientist Frances Haugen testified that the tech giant’s AI was being used to “amplify misinformation.” This highlights the ineffectiveness of financial penalties and the need for robust human rights protection in this industry.

Privacy rights are becoming difficult to define. The boundaries between what is deemed private and public information are increasingly blurred. Consequently, it is difficult for lawmakers to hold companies to account, or for users to know whether their rights have been breached. Likewise, there is a significant information asymmetry between regulators and big tech companies. At present, lawmakers rely on whistleblowers, like Haugen, to expose issues with AI. The controversy and backlash that surround whistleblowing leave governments and regulatory bodies at an investigative disadvantage.

More concerning still is the threat AI poses to the right to equality and the right to protection from discrimination. Ultimately, as AI is created and trained by humans, it is not immune to human flaws such as prejudice and bias.

This could have serious ramifications in our justice systems. Several jurisdictions have already adopted facial recognition as a tool for crime detection. However, this technology has been shown to misidentify black people at disproportionately high rates, causing them to be wrongly flagged as suspects. Thus, AI has the potential to increase systemic discrimination and widen racial inequalities. This is highly problematic, as AI cannot be regulated or held accountable in the same way that discrimination by humans can. Such a lack of accountability would embed inequality deeper into the fabric of society and irrevocably stack the deck against marginalised people.

Future Human Rights Issues with AI - Should Robots Have Rights?

It is clear that AI poses a significant challenge to existing human rights frameworks. These challenges are likely to be exacerbated as AI comes closer to what could be perceived as human intelligence. In a scenario where AI can think and behave in a manner similar to humans, is it fair to treat it as a mere tool for human achievement? Ethical and moral dilemmas could arise from treating future AI as being without any form of rights or legal personality. For example, an AI that creates a work of art, as is already possible with existing technology, would have no intellectual property rights. Similarly, a hypothetically self-aware AI could not avail of any form of labour rights, such as fair working conditions, leaving it obliged to work whatever hours its human owner wished. There are also questions as to whether we should continue to allow humans to harm AI machines without consequence if those machines were to develop emotional and intellectual self-awareness. A human who is cruel to animals can face civil and criminal penalties, yet the same protections are not available to AI, which may prove morally questionable as it gains greater intelligence.

While some would consider these issues a matter for future regulators, several legislative bodies are already examining whether AI should be encompassed within a rights-based framework. In 2017, for example, the European Parliament proposed the drafting of a set of regulations to govern the use and creation of robots and AI, and debated whether to grant robots “electronic personalities” that would allow them to possess some form of protection equivalent to human rights. This would involve, according to the proposal, a concept of legal personhood similar to that possessed by a company. In other words, the AI would have the capacity to enter contracts, benefit from intellectual property rights and engage in legal action. However, it would not necessarily be protected in the same manner as a human. Indeed, the proposal stresses that the sole task of AI would remain “to serve humanity.”

There is no easy solution as to how AI should be protected. However, recognising AI as having some form of legal personality would make it easier to remedy the rights violations outlined at the beginning of this article. If an AI can both sue and be sued, then it can be held liable for any violation of privacy or anti-discrimination rights, particularly where it is deemed to be self-aware. Thus, as AI becomes more ‘human-like’, it becomes important to consider what legal personhood should look like and what rights, if any, should both protect AI and protect us from its consequences.

Conclusion

AI is one of the greatest technological advances of modern times. However, with this creation comes a responsibility to ensure AI does not generate or exacerbate human rights violations. From privacy rights to discrimination, AI can circumvent our current interpretation of human rights and how they intersect with technology. This makes it difficult to regulate AI in a manner that ensures all of society can benefit from its usage. Similarly, as AI begins to parallel human intelligence and consciousness, questions arise as to whether it is ethical to treat it as merely another tool for human use. Robots possessing human rights may seem a matter for science fiction novels; however, it is becoming clear that interpretations of rights may need to be adjusted to reflect these moral quandaries. In doing so, our approach to animal rights or environmental rights may serve as a useful precedent for our future treatment of AI. Regardless of the approach taken, it is evident that AI will be the next frontier in the human rights agenda.
