Regulatory Sovereignty in India: Indigenizing Competition-Technology Approaches, ISAIL-TR-001

Introduction

Algorithms play an important role in shaping the scope and progress of the interdependent and parallel economies that operate across the global commons. Artificial intelligence ethics is steadily emerging as an important field within international technology law. India, one of the emerging Global South countries, is adopting new competition law and technology law approaches to advance its sovereign imperatives and strategic interests in the realm of global governance.

This report analyses where India requires clear, implementable competition-technology law approaches that act in due proportionality to defend two kinds of sovereign imperatives: algorithmic and informational. Algorithmic sovereignty refers to regulatory considerations in the domain of artificial intelligence ethics, where AI technologies require both judicial and executive scrutiny, supported by effective redressal and dispute resolution mechanisms. Information sovereignty refers to the problem of data law and how India has been, and should continue, adapting and developing nuanced practices of data protection and data-related regulatory frameworks. The approach of this report is to offer optimistic yet practical alternatives to India's current legal and policy framework on algorithmic and information sovereignty. We are not suggesting a complete overhaul; rather, we suggest appropriate policy solutions wherever warranted.

How Algorithms as Agents of Information Impact Rule of Law

Algorithms as agents of information, in the context of this report, must be understood in light of the lack of explainability, transparency and foreseeability by design that artificial intelligence systems generally exhibit. Explainability, together with transparency, stems from the understanding that any algorithmic infrastructure must, by default and by design, be prepared to avoid conflicting circumstances in human environments. The relationship cannot be established directly, because dependence on algorithms varies from sector to sector, and even across domains of law the liability of the owners of such systems must be assessed case by case. Hence, we have adopted a balanced approach in determining the following:

• How are algorithms agents of information in the context of this report?
• How are algorithms capable of impacting the rule of law as a politico-legal phenomenon?
• How are algorithms capable of rendering economic impacts through their effect on law and order and rule-of-law situations?

Algorithms can be agents of information because they process information in the most tangible form available to them and can then democratize that information. The most obvious and apparent example comes from social media algorithms, but the proliferation of algorithmic activity is visible wherever machine learning algorithms are meaningfully involved. Under the SOTP classification adopted by the Indian Society of Artificial Intelligence and Law (Abhivardhan, et al., 2021)1, this involvement means the algorithms enable the AI system or infrastructure to act as a subject (using humans as data subjects), since it is learning and enabling tasks to be achieved more effectively. No single analysis can capture exactly how this plays out, because there are many kinds of algorithmic-data infrastructures, which cannot, of course, be marginalized. There is no doubt, however, that algorithms can act as agents of information.

1. The first mention of the classification is in the 2020 Handbook on AI and International Law.

With respect to the second question, it is important to examine the definitive and deterministic aspects of the rule of law. Rule of law means adherence to the law, as well as to the order created by the law. Algorithms are assumed to be law-agnostic and order-agnostic because they are human-made, not human-conscious. This is an important reason why the context of human rights is invoked to build academic consensus around human-centred AI ethics (Christley, 2020). Human-centric AI as a concept, however, rests on the rights-based approach, which limits it to positive legal hierarchies extended through the expansion of human rights law. That may be a juridical way to impart a positive law (even hard law) approach to achieving goals, but it does not follow that those goals have proved productive enough. There are serious problems with the rights-based approach, owing to its lack of practicality and the sentimental basis of the construct (Douzinas, 2019). The moral component of the rights-based approach should instead rest on embracing the ethical considerations that form an engaged and informed perspective. This affects the credibility of what constitutes the rule of law as well, because the positivist character of the system is either submerged to the point of invisibility, or the construct has over-delegated the responsibilities it should have retained. Roger Brownsword explains this clearly in his proposed trifecta of stages of administrative law and technology governance (Brownsword, 2018):

• Coherentist – Governments enforce their positive legal approaches, primarily by expanding the administrative scope of existing laws to scrutinize emerging technological phenomena. Courts also contribute to the process through their vertical approaches to hard law.
• Regulatory-instrumentalist – Nuances are developed in the regulation of technological phenomena, with different ways and factors through which proper and transparent regulation is expected. For example, data law regulation and the regulation of market competition are two distinct regimes that may apply to the same technological phenomenon.
• Technocratic – Doctrinal integrity is no longer the goal; if the legal subject does not comply with the system, the system resorts to hard-line interventionist approaches. One example Brownsword mentions in his article is immobilizing a car if the driver simply does not wear the seat belt (Brownsword, 2018).

Building on his understanding of regulatory theory in administrative law and technology, it is clearly discernible that the regulation of AI technologies, considering AI as a juridical entity (Abhivardhan, et al., 2021), should be strictly based on doctrinal integrity and effective regulatory frameworks. Coherentist and regulatory-instrumentalist approaches can therefore go hand in hand as they develop over time. In the context of Indian administrative law, this approach of combining doctrinal integrity with nuanced regulatory theory would be of the utmost help.

The third question is answered throughout the sections of the report, whose cardinal basis is enumerated as follows:

• The report examines, from a competition law perspective in the Indian context, how AI technologies are used by companies (especially multinational and foreign companies) to subject markets to digital colonialism, in violation of competition law practices.
• The report also examines Indo-US legal and political ties in the context of their competition policies and approaches.
• The report assesses the limitations of the Indian systems of competition law, technology law and data protection law, especially the I.T. (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, notified by the Government of India in March 2021.
• The report thereby analyses the aspects of corporate governance in which compliance and resource distribution by technology companies (especially AI-reliant and AI-serving companies) can reasonably be achieved to counter digital coloniality and to ensure that the impact of AI technologies does not become counterproductive.
