Artificial Intelligence & Policy in India, Volume 3 (2021)



{ Artificial Intelligence & Policy in India, Volume 3 } (2021)

Abhivardhan, Editor

© Indian Society of Artificial Intelligence and Law, 2021.



Artificial Intelligence and Policy in India (2021)

Year: 2021
Date of Publication: June 30, 2021
ISBN (online): 978-81-947131-8-0
ISBN (paperback): 979-85-116635-7-9
Editor: Abhivardhan

All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher and the authors of the respective manuscripts published as papers, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, write to the publisher, addressed "Attention: Permissions Coordinator," at the address below.

Printed and distributed online by the Indian Society of Artificial Intelligence and Law in the Republic of India.

First edition, Artificial Intelligence and Policy in India, Volume 3, 2021.
Price (Online): 250 INR
Price (Paperback): 10.8 USD (Amazon.com)

Indian Society of Artificial Intelligence and Law, 8/12, Patrika Marg, Civil Lines, Prayagraj, Uttar Pradesh, India – 211001

The publishing rights of the papers published in this book are reserved with the respective authors of the papers and the publisher of the book.

For citation purposes, please use the following reference format: 2021. Artificial Intelligence and Policy in India, Volume 3. Prayagraj: Indian Society of Artificial Intelligence and Law, 2021. You can also cite the book through citethisforme.com.

For online correspondence, please mail us at: editorial@isail.in | executive@isail.in
For physical correspondence, please send us letters at: 8/12, Patrika Marg, Civil Lines, Allahabad, Uttar Pradesh, India - 211001


Artificial Intelligence and Policy in India, Volume 3


Preface

The Strategic & Civilized AI Initiative is a policy research initiative started by the Indian Society of Artificial Intelligence and Law in June 2021, replacing the erstwhile Civilized AI Project and the Indian Strategy on AI & Law Programme. The purpose of the renewed initiative is to research the specific and critical roles of private and public actors in international AI governance. The programme covers the cultural, educational, and policy issues of Indian interest, as well as those of the Indo-Pacific region, with respect to AI. This book presents the works produced in the research initiative and its predecessors since January 2021, which encompass preliminary and some advanced analysis of recent developments in the arena of AI policy and governance. The book also includes the initial publications of the Civilized AI Project, which was started in October 2020. The Programme addresses AI and law governance across the following updated frontiers:

1. AI and International Law & Affairs
2. AI and IPR
3. AI Education
4. AI and Civilizational Studies
5. AI and Ecology
6. AI and Information Warfare

I extend my gratitude to Ateka Hasan and Dev Tejnani, the Programme Coordinators, for their inputs and suggestions for the policy research initiative and this book.

Abhivardhan
Director, The Strategic & Civilized AI Initiative



Table of Contents

Reports
1. Assessing the Indexes on Determining the Relationship and Trajectories of AI Ethics and Intellectual Property Law
2. AI Education in India: First Analytical Report [2021]
3. Responsible AI in India: First Analytical Report [2021]
4. Preliminary Review Report on Explainable Artificial Intelligence & its Opaque Characteristics

Discussion Papers
5. AI and Trademark Law: Is AI akin to a Personal Shopper? (Pankhuri Bhatnagar)
6. Can Artificial Intelligence save the remnants of Dying Cultures? (Niharika Ravi)
7. Implementation of Artificial Intelligence in Banking in India: A Critical Review (Mohit Agarwal)
8. Artificial Intelligence in Indian Healthcare: Possibilities and Roadblocks (Karan Ahluwalia)
9. The Effectiveness of International Space Law Treaties in the Context of AI: A Critical Review (Nalin Malhotra)
10. Social Media Algorithms and their Impact on Financial Markets & their Regulation (Hemang Arora)
11. The Emerging Trends of Chinese Companies' Usage of AI & the Ethical Trends (Nisarg Bhardwaj)
12. Artificial Intelligence and Criminology: Prospects and Problems (Shiwang Utkarsh)

Policy Briefs
13. Explainable AI in India: Policy Brief [2021] (Karan Ahluwalia & Nalin Malhotra)
14. European Union's Legislative Proposal on AI Governance: Policy Review (Aathira Pillai & Rahul Dhingra)



Research Team
The Strategic & Civilized AI Programme

Executive Team
Abhivardhan, Director of the Initiative
Aditi Sharma, Chief Research Officer, Indian Society of Artificial Intelligence & Law
Ateka Hasan, Programme Coordinator
Dev Tejnani, Programme Coordinator


{Reports}



1

Assessing the Indexes on Determining the Relationship and Trajectories of AI Ethics and Intellectual Property Law

Rahul Dhingra & Aathira Pillai
Research Interns, Indian Society of Artificial Intelligence and Law
research@isail.in

Synopsis. Artificial intelligence (AI) is increasingly driving key technological and business advances. It is used in a wide range of sectors and has an impact on practically every area of production. AI's progress is being fueled by the availability of vast amounts of training data and breakthroughs in affordable high processing power. AI and intellectual property (IP) collide in a variety of ways. Although no explicit definition of artificial intelligence has been settled, AI is commonly understood as the simulation of human intelligence processes by machines, particularly computer systems. Expert systems, natural language processing (NLP), speech recognition, and machine vision are examples of AI applications. Learning, reasoning, and self-correction are the three cognitive processes on which AI programming focuses. IP should be used as a regulatory structure to encourage invention and innovation through the use of market forces. Justifying IP rights for AI as a tool, and for AI-generated output, requires a complex study of the outputs, productions, or applications generated by, or with the assistance of, AI systems, tools, or techniques that are susceptible of IP protection, with a focus on the works promoted and created by AI-based systems. This report delves into the AI prospects of intellectual property law, the efficiency of the various artificial intelligence indices that have been released, and the relevance of AI indexes and their coherence with contemporary AI and intellectual property law.

Introduction

AI has gained traction since its beginnings in the early 1950s, and it has branched out into three distinct domains. The first is the "symbol processing hypothesis" developed by Alan Turing, the "founding father" of AI. Robotics was the second branch that drew a lot of attention. The so-called "learning approach" has been the subject of the third line of research (Computing Machinery and Intelligence, 1959). The last branch, also known as machine learning (hereinafter ML), grew to such a large size and widespread usage penetration in the mid-2000s that it is now widely regarded as a general purpose technology (hereinafter GPT) and is referred to as the most important AI technology (Intellectual Property Justification for AI).

According to the labour theory, persons are entitled to possess property rights based on the labour they put in to obtain the subject matter, i.e., they are entitled to earn the "fruits of their own labour" (Clearing the Rubbish: Locke, the Waste Proviso, and the Moral Justification of Intellectual Property, 2009). According to the personality concept, creating something and making it available to the broad public is an expression of personality, which is thought to be based on a person's interaction with external objects (Property and Personhood, 1982). Giving someone a reward for contributing to society's enrichment is fair, according to the reward theory. Its origins in the patent sector can be traced back to John Stuart Mill, who was also a supporter of Jeremy Bentham (Principles of Political Economy, 1870/1909).

These three theories share a bottom line: the most prominent rationale is that lawmakers, legislators, and scholars should maximize social welfare in totality while shaping intellectual property rights. AI-induced losses in conventional creative (copyright) or R&D-related (patent) fields could pose a threat to society. This problem, however, cannot be handled through IP legislation, because IP's primary mission is to promote innovation rather than to preserve tradition or to achieve broad social policy goals. Intellectual property rights therefore have to be scrutinized in the light of specific situations. In the context of artificial intelligence, creation and innovation operate under unequal conditions; there may be radically distinct needs for the necessary investments, life cycles, and amortization possibilities for each AI tool and output.

The AI Index is a comprehensive annual analysis of AI's present state. It covers autonomous systems, artificial intelligence research and development, the AI market, public perception of AI, and, most importantly, technical performance.

Prospects of AI in Intellectual Property Law: An Overview

The era of intelligent machines began with the invention of computers. Not only were these machines capable of automation, but they were also adept at Intelligent Automation (IA). Technology has advanced at a frenetic pace over the last three decades, and this is evident in the progress of computer hardware. Faster hardware leads to more complicated software, which results in complex interdependent systems. Patent rules were enacted at a time when this technology did not exist, and technology has since advanced at a much quicker rate than the rules have developed. Current patent rules treat AI software inventions primarily as



computer-implemented logical algorithms. While it is true that algorithms are patentable, there is limited guidance on how to deal with intuitive inventions. With today's computer programmes, the underlying programme does not change; instead, the operator simply adds more data to the programme to deal with new situations. An AI system's learning experience will allow it not only to expand the program's database, but also to rewrite the core programme itself.

Data-privacy and data-ownership problems are another important consideration. Data is accessed and transported numerous times across jurisdictions in a global ecosystem involving various stakeholders. This is especially true when it comes to personal information. Artificial intelligence as a service (AIaaS), which provides access to sophisticated cloud-based AI capabilities, is now being offered by a growing number of large IT corporations and start-ups. Users can produce a wide range of AI outputs with AIaaS without needing to create AI technologies in-house. In terms of IP law, the usage of AIaaS necessitates a shift in the legal analysis' focus from the AI technology's developer to the user (company or individual) who engages the AI system to generate specific output.

AI advancements, particularly in the field of machine learning, have paved the way for a slew of new corporate applications. AI has been pushed as a way to speed up drug development because of its ability to discover hidden patterns in vast data sets and automate many predictions. The three main international treaties in the field of copyright (the Berne Convention, the WCT, and TRIPS) require contracting states – currently 179 countries, including all EU Member States – to provide international protection to eligible authors of "works" in accordance with the treaties' minimum standards. The Berne Convention, however, imposes obligations exclusively on member states.

In the execution stage, the AI system has taken over much of the human author's job; nevertheless, this does not mean that the user is completely passive. The user's role is critical, especially in supervised DL systems, in constantly evaluating the output of the process and providing feedback to the AI system. The output of a largely autonomous AI system could qualify as a work protected under copyright law if the process was launched and conceived by a human being and the AI-assisted output was then redacted in a cohesive way. That is, copyright protection could be achieved simply through human interaction throughout the conceptualization and transcription stages.

Various enablers are being developed across organisations to support IP management activities, depending on their needs. Virtual assistants use NLP approaches for prior art search, refining search tactics by offering associated terms from related themes or concepts, as well as search results based on semantic similarities.

Challenges in using Artificial Intelligence for Intellectual Property Management

Naming the inventor is merely a formal requirement that a human being be named as inventor. Artificial intelligence systems do not have the legal status of a person (i.e. legal personality), and they cannot be named inventors. There is no



agreed-upon procedure for dealing with this. A large volume of reliable data is critical for improving the accuracy and dependability of an AI system, and the role of collaboration and multilateralism is crucial; on a practical level, having free access to data is critical. AI frequently operates in a "black box" that is unfathomable to humans. Because of the distinct approach that machines take to reducing errors as far as possible, it becomes very difficult for an applicant to meet the disclosure requirement: an applicant who cannot explain how the innovation works cannot enable a technological solution to be developed from the specifications.

It is a huge problem to build AI capabilities across IP offices. Although AI has been around for a while, it has only now emerged as a viable technological option. The number of people with the necessary training and understanding in this industry is inadequate, making in-house AI capability difficult to build, especially in the face of competition from better-resourced, higher-paying private firms. Micro IP agencies are bound to experience particular difficulties. AI relies on data and algorithms, and smaller offices, by definition, have access to fewer sets of data. This creates a volumetric constraint, which favours the design and implementation of AI applications in larger offices; it is less efficient in micro offices, where application volumes are still manageable. Open access to data linked with IP registrations for patents, trademarks, and designs is a widely acknowledged norm in the IP field, and it will benefit smaller IP offices, which can, in theory, access these data. Overcoming these obstacles will necessitate a stronger focus on integration and synchronization (Artificial Intelligence's Role in the Field of Intellectual Property, 2020).

Many software engineers confront challenges when third-party companies demand relevant data sets in order to establish themselves in the industry or obtain appropriate rights over a solution. Acquiring intellectual property rights over comprehensive technologies ensures that ownership and licensing become paramount, eliminating the need for third-party involvement: software companies pay for a license to obtain data sets, and intellectual property rights ensure that tech companies work with collaborators to protect their software initiatives.

Another challenge of using AI for intellectual property management is legislation. For patented artificial intelligence discoveries to be recognized legally, IP laws must be regularly updated. The IP industry has seen significant changes, and the complexities that new innovations and their proprietors face alter the prospects of the sector and necessitate policy changes so that rightful owners can protect their inventions. If there are gaps between AI and IP, there will be no harmony between AI inventions and IP laws. There is an ongoing need to build forums that deal solely with AI and IP problems.

There is also the challenge of client data: clients who need certain training datasets to work in harmony with the vendor's program, adjusted to the clients' business services, are provided with efficient learning-database authorization by the vendor. Challenges occur when an error happens in the cyber-security system of the client's application code provided by the vendor, which frequently raises issues of ownership or patents. If the clients seek to resell the technology to another provider, they will face additional legal problems.

Contract disputes are a further challenge. Previously, AI employed specialised computer hardware that reproduced the activities of a human mind, but today software visual modules are applied, leading to an increase in the utilisation of processors. As a result, IP-related issues emerged beyond the realm where technologies other than the aforementioned pieces were employed. Corporate deals confront difficulties when there is no section in the contracts covering the newest evolving software, including ownership and license issues. All critical provisions, including new IP development software or indemnity connected to third-party authorisation, must be included in contracts (Artificial Intelligence and Intellectual Property Rights, 2020).

Is Intellectual Property Law Equipped for the Age of Artificial Intelligence?

The notion that artificial intelligence will modify the law is now banal and self-evident. This in itself describes the incapability of present-day IP law to deal with artificial intelligence. The distinct challenges discussed in the previous section will continue to be a source of consternation for multinational enterprises, administrations, and lawmakers alike, and may shape the destiny of the next generation of AI developments. There are a slew of other factors to consider when it comes to patenting AI technologies and their suitability. Should AI technology be designated as the creator of a patent under intellectual property law? Or should the law require that only people be inventors, in order to eliminate uncertainty in the ownership of the discovery? Should the law regulate how inventions are viewed if legal provisions hold that the latter is the more appropriate regulation? These are important questions that are yet to be answered. Though IP law is not yet equipped to deal with AI, a continuous discussion on these issues will enhance and challenge our present knowledge of intellectual property law as we prepare for the disruptions AI inventions may bring to present IP rules.

Analysis of AI Indices

AI indices make an annual endeavour to assess the most important trends affecting the AI industry, innovative research, and AI's social impact.



Artificial Intelligence Index Report by Stanford University on Human-Centered Artificial Intelligence

The AI Index Report keeps track of, collects, distils, and visualises artificial-intelligence-related data. Its purpose is to give unbiased, extensively validated, and globally sourced data to policymakers, academics, executives, the media, and the public at large, to enable them to form opinions regarding the challenging subject of AI (Artificial Intelligence Index Report, 2021).

Highlights of the Report

1. Research and Development - From 2019 to 2020, the number of AI journal papers increased by 34.5 percent, a substantially greater rate than from 2018 to 2019 (19.6 percent). Academic institutions produce the majority of peer-reviewed AI publications in every major country and region; however, distinct European Union member states are the second most important originators (17.2 percent). China (20.7 percent) passed the United States (19.8 percent) in terms of the highest share of AI journal citations in 2020 for the first time, while the European Union continued to lose total share. The number of AI conference papers climbed fourfold between 2000 and 2019, albeit the trend has slowed in the last ten years, with the number of publications in 2019 just 1.09 times greater than in 2010.

2. AI Patents - In the last two decades, the total number of AI patents published around the world has continuously increased, rising from 21,806 in 2000 to more than 4.5 times that, or 101,876, in 2019. Only 8% of the dataset in 2020 includes a nation or regional association, indicating that the AI patent data is incomplete. Between 2015 and 2020, the number of papers in Robotics and Machine Learning in computer science increased by 11 times and 10 times, respectively, among the six topics of research connected to AI on arXiv. In 2020, Machine Learning (cs.LG) and Computer Vision (cs.CV) dominated the total number of publications, with 32.0 percent and 31.7 percent, respectively.

3. In 2019, global private AI investment totaled more than $70 billion, with $37 billion invested in startups, $34 billion in mergers and acquisitions, $5 billion in IPOs, and $2 billion in minority stakes. Autonomous vehicles received the most funding ($7 billion) in the previous year, followed by medication and cancer research, facial recognition, video content, fraud detection, and finance.

4. The report also highlights examples of AI systems performing at human levels, such as DeepMind's AlphaStar defeating a human in StarCraft II and deep-learning identification of diabetic retinopathy in eye scans.

5. The AI Index Report notes that the Indian plan focuses on both economic



growth and ways to use AI to enhance social inclusion, as well as study of crucial AI-related concerns including ethics, bias, and privacy. In 2019, the Ministry of Electronics and Information Technology proposed establishing a national AI initiative with INR 400 crore in funding (USD 54 million). In late 2019, the Indian government launched a committee to advocate for a well-organized AI policy and to define the precise functions of government organisations in order to further India's AI agenda, with funding of INR 7,000 crore (USD 949 million at December 2020 conversion rates).

6. The industrial strategy of the United Kingdom emphasises a strong alliance between business, academic institutions, and government, and identifies five underpinnings for a productive industrial strategy: becoming the world's most innovative economy, generating employment and higher earning potential, infrastructure improvements, and so forth. The Select Committee on AI in the United Kingdom published an annual report on the country's development from 2017 to 2019. In November 2020, the administration announced a significant boost in defence spending of $1 billion.

7. The American AI Initiative emphasises the need for the federal government to invest in AI research and development, lower barriers to federal resources, and ensure technical standards for the safe creation, testing, and deployment of AI technology. The White House also highlights the development of an AI-ready workforce and demonstrates a commitment to working with international partners while supporting AI leadership.

Methodology

1. Each large topic area in AI Policy and National Strategy is organized around a set of underlying keywords that explain the publication's content: Health & Biological Sciences, Physical Sciences, International Affairs & International Security, Energy & Environment, Social & Behavioral Sciences, Ethics, Public Administration, and other topics that dominated AI discourse in 2019-2020.

2. To gauge how extensively Ethics in AI is discussed, ethics-related words are searched for in the titles of papers in flagship AI, machine learning, and robotics conferences and journals. The classic and trending keyword sets were created by curating terms from the most often referenced AI textbook, Russell and Norvig [2012], as well as the keywords that appeared most frequently in paper titles through time in those venues.

3. In respect of AI and the economy, the McKinsey survey was conducted online from June 9, 2020, to June 19, 2020, and received responses from 2,395 people from all over the world, representing a wide range of geographies, industries, firm sizes, functional specialties, and tenures. Of those who responded, 1,151 claimed their companies had implemented AI in at least one function, and they were



questioned about it. The data are weighted by each respondent country's contribution to global GDP, to account for disparities in response rates.
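The GDP-weighting step just described can be sketched in Python as follows; the country labels, GDP shares, and response values here are hypothetical illustrations, not figures from the survey.

```python
# Sketch of weighting survey responses by each respondent country's share of
# global GDP, as described for the McKinsey survey. All figures are hypothetical.

def gdp_weighted_share(responses, gdp_share):
    """responses: {country: list of 0/1 answers (1 = company has adopted AI)}.
    gdp_share: {country: that country's share of global GDP}.
    Returns the adoption rate, weighted by GDP share and renormalized over
    the countries actually covered by the survey."""
    weighted = 0.0
    total_weight = 0.0
    for country, answers in responses.items():
        rate = sum(answers) / len(answers)   # unweighted rate within one country
        weighted += gdp_share[country] * rate
        total_weight += gdp_share[country]
    return weighted / total_weight

responses = {"A": [1, 1, 0, 1], "B": [0, 1, 0, 0]}   # hypothetical answers
gdp_share = {"A": 0.24, "B": 0.06}                    # hypothetical GDP shares
print(round(gdp_weighted_share(responses, gdp_share), 3))   # 0.65
```

Dividing by the covered weight rather than by 1 keeps the figure interpretable as a weighted average even when the surveyed countries do not account for all of global GDP.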

Government Artificial Intelligence Readiness Index by Oxford Insights

With the help of the International Development Research Centre (IDRC), Oxford Insights improved its approach and expanded its scope from the previous group of OECD members to include all UN countries in the 2019 Government AI Readiness Index. The Index assigns a score to 194 nations and territories based on their readiness to deploy AI in public service delivery. The overall score is made up of 11 input variables divided into four categories: governance; infrastructure and data; skills and education; and government and public services. The Index is a quantitative toolkit that may be used to assess a government's readiness to employ AI. It takes a wide range of information, including desk research on AI strategy, Crunchbase statistics on AI firms, and UN indices, and distils it into a single figure. This makes it easier to make worldwide comparisons and track a government's progress (Government Artificial Intelligence Readiness Index, 2019).

Highlights of the Report

1. Singapore tops the list for AI preparedness, with Western European nations, Canada, Australia, New Zealand, and four more Asian economies rounding out the top 20. There are no Latin American or African countries in the top 20, and China holds the lowest position within it, at 20th.

2. The Government AI Readiness Index takes a global look at AI and offers results that are compatible with other technology indicators in Africa. The prospects for AI in Africa are optimistic, as institutional research centers and informal developer groups are both showing interest in the field.

3. The biggest firms and governments that are willing to invest extensively will reap the most rewards from AI. While the focus has so far been on investment in AI adoption, these governments will need to do much more to prepare their societies both to benefit from AI growth and to avoid the possible disruptions it may cause. Strong cross-border academic collaborations between China, Singapore, and Australia would aid the region's progress in AI research, both basic and application-driven.

4. The low number of AI companies in Australia compared to other nations hinders the country's position, since many Australian entrepreneurs travel overseas for good prospects and funding. New Zealand's ongoing excellent performance in the Index is contingent on the country's government developing an artificial intelligence strategy and action plan.


Methodology


1. The hypotheses devised hold that, for a government to be ready to use AI in public service delivery, a well-coordinated national AI strategy is a solid indicator of effective AI-focused government. Data is the foundation of artificial intelligence systems. A government that is AI-ready will demonstrate both strong political will and the competence to push for innovation, as evaluated by efficacy proxies and the degree of innovation already in place through digital public services.

2. To evaluate a government's AI-related ambition, regulations, and ethical frameworks, all of which are essential prerequisites for pervasive AI implementation in public service delivery, a structure of high-level 'clusters' containing descriptive statistics or proxies for measuring government AI readiness was developed (Government Artificial Intelligence Readiness Index, 2019).

3. To make the scores for each country comparable, the data sets for each indicator were normalized between zero and one. The Crunchbase database was combed for AI startups; because startup counts are heavily skewed towards a few countries, these scores were put on a logarithmic scale (base 10) before being normalized, to give a more accurate picture of the relative intensity of private-sector capacity in each country. The final scores for government AI readiness were obtained by adding the figures for each indicator together, with each indicator given the same weight.
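The normalization, log-scaling, and equal-weighting steps described above can be sketched as follows; the indicator names and country values are hypothetical, and the degenerate case where all countries share one value is ignored for brevity.

```python
import math

def min_max_normalize(values):
    """Rescale a {country: value} mapping into the 0-1 range."""
    lo, hi = min(values.values()), max(values.values())
    return {c: (v - lo) / (hi - lo) for c, v in values.items()}

def readiness_scores(indicators, log_scaled=()):
    """indicators: {indicator name: {country: raw value}}.
    log_scaled: names of count-like indicators (e.g. AI startup counts) put on
    a log10 scale before normalization, as the Index does for Crunchbase data.
    Every indicator carries the same weight, as in the Index."""
    per_indicator = {}
    for name, values in indicators.items():
        if name in log_scaled:
            values = {c: math.log10(v + 1) for c, v in values.items()}  # +1 avoids log(0)
        per_indicator[name] = min_max_normalize(values)
    countries = next(iter(indicators.values()))
    n = len(indicators)
    return {c: sum(per_indicator[name][c] for name in indicators) / n
            for c in countries}

# Hypothetical data: one 0-1 proxy and one raw startup count per country.
indicators = {
    "digital_services": {"X": 0.9, "Y": 0.5, "Z": 0.1},
    "ai_startups":      {"X": 999, "Y": 99,  "Z": 9},
}
print(readiness_scores(indicators, log_scaled=("ai_startups",)))
```

The log scale matters because startup counts span orders of magnitude: without it, one country with thousands of startups would flatten every other country's private-sector score toward zero.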

The Global AI Index (GAII) by Tortoise Media

The GAII was the first index to rate countries based on artificial intelligence capability, specifically by measuring investment, innovation, and implementation levels. Tortoise has worked to broaden the index's coverage and analyse the global landscape in areas such as talent, infrastructure, operating environment, R&D, commercial initiatives, and government policy (Global AI Index, 2019).

Highlights

1. As per the index by Tortoise Intelligence, over 10,000 artificial intelligence (AI) companies have been formed since 2015, attracting $37 billion in private funding, and thousands of additional software developers have been drafted onto AI projects internationally in the last three years as demand for the technology grows.

2. Tortoise Intelligence analysis indicates that after the Canadian government released the first national AI policy in 2017, at least 30 more countries have done the same. In just four years, the number of AI businesses has doubled, with around 20,000 currently working on everything from self-driving cars to illness-detection systems. A total of $26 billion was invested in AI companies last year, up from $7 billion the year before.



3. Britain is ranked third attributable to a thriving AI talent pool and a strong academic reputation. DeepMind, a business created in 2010 that was purchased by Google four years later for $500 million, is one of the most successful AI businesses to emerge from this country. 4. Countries employ AI in a variety of ways. Russia and Israel are two countries that are concentrating AI research on military applications. Japan, on the other hand, is heavily reliant on technology to deal with its ageing population. 5. According to the Index, the United States is the unchallenged leader in AI development. As a result of the quality of its research, skill, and private investment, the western giant scored nearly twice as high as China, which came in second. 6. According to GAAI Index, China is the fastest-growing AI country, having surpassed the United Kingdom on criteria ranging from code contributions to research publications in the last two years. China filed 85 percent of all facial recognition patents last year. Methodology 1. To define and explain an underlying, multifaceted idea, the Global AI Index employs a number of interconnected measurements. This notion is referred to as ‘capacity for artificial intelligence' in The Global AI Index, and it is defined as a collection of interconnected factors that fall under the areas of innovation, implementation, and investment. The three major pillars of the Global AI Index are investment, innovation, and implementation. Commercial ventures and government strategy are two sub-pillars of the investment pillar. Research and development are two sub-pillars of the innovation pillar. Finally, there are three sub-pillars in the implementation pillar: talent, infrastructure, and operating environment. 2. The Global AI Index is organized around the concept of capacity, which is defined as the amount of whatever a system can hold or produce. 
It is a good way to think about the interaction between the various relevant components that exist within a country. Artificial intelligence capacity refers both to the breadth and depth of adoption (a quantitative factor) and to improvements in a country's ability to operate and sustain AI systems in a productive, safe, and equitable manner (a qualitative factor).
3. Only one of the sources used for The Global AI Index is proprietary; the great majority are publicly available and open source. The Crunchbase API was used to gather data for the 'Commercial Ventures' sub-pillar.
4. The weighted normalized sum of a country's sub-pillar scores determines its total score. The score for each sub-pillar is, in turn, the normalized weighted sum of all the indicators inside it. Rather than comparing all individual indicators, we might compare indicators inside a certain sub-pillar, such as talent.
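The weighted-normalised-sum scoring described above can be sketched in a few lines of code. The sub-pillar names echo the Index, but the weights and raw values below are illustrative assumptions, not Tortoise's actual data:

```python
# Illustrative sketch of the scoring scheme described above: each indicator is
# min-max normalized across countries, indicators combine into sub-pillar
# scores, and the weighted sum of sub-pillar scores gives the total.
# All weights and raw values here are made-up assumptions, not Index data.

def normalize(values):
    """Min-max normalize a {country: raw_value} mapping to the 0..1 range."""
    lo, hi = min(values.values()), max(values.values())
    return {c: (v - lo) / (hi - lo) if hi > lo else 0.0 for c, v in values.items()}

def weighted_sum(scores, weights):
    """Combine {name: {country: score}} with {name: weight} into country totals."""
    countries = next(iter(scores.values())).keys()
    return {c: sum(weights[n] * scores[n][c] for n in scores) for c in countries}

# Two hypothetical indicators inside the 'talent' sub-pillar.
indicators = {
    "ai_graduates": normalize({"A": 1200, "B": 300}),
    "github_contributors": normalize({"A": 40, "B": 90}),
}
talent = weighted_sum(indicators, {"ai_graduates": 0.5, "github_contributors": 0.5})

# Sub-pillars then roll up into the total score the same way.
sub_pillars = {"talent": talent, "infrastructure": {"A": 0.8, "B": 0.6}}
total = weighted_sum(sub_pillars, {"talent": 0.6, "infrastructure": 0.4})
print(total)  # A: 0.6*0.5 + 0.4*0.8 = 0.62; B: 0.6*0.5 + 0.4*0.6 = 0.54
```

Because every indicator is normalised to 0..1 before weighting, no single raw metric (however large its units) can dominate a country's total.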

US Chamber International IP Index

According to the International IP Index, it is more vital than ever to be able to quantify how well equipped economies are to produce and protect innovation and creativity. The Index compares the IP framework of 53 economies worldwide using 50 distinct indicators. India was ranked 40th out of the 53 economies, with recognized areas of weakness including strict registration requirements, which act as a barrier to licensing and technology transfer, and patentability restrictions outside international standards, which provide only a limited framework for the protection of biopharmaceutical IP rights. The report also observed that India's overall score dropped slightly, from 38.46 percent (19.23 out of 50) in the eighth edition to 38.40 percent (19.20 out of 50) in the ninth edition, reflecting a small drop in the score for indicator 32.

Methodology

Baseline values, metrics, and models are used in the Index. These are based on national and international best practices in protection, enforcement mechanisms (both de jure and de facto), and/or model pieces of primary and secondary legislation. Where international law or treaties do not provide adequate baselines, the baselines and values employed are based on what rights holders consider an appropriate environment and degree of protection. The total score of the Index runs from a minimum of 0 to a maximum of 50, with each indicator scoring between 0 and 1. There are three ways to score indicators: binary, numerical, and mixed. A binary indicator is assigned the value 0 if the IP component does not exist in a given economy and 1 if it does. The Index assigns scores based on both qualitative and quantitative data.
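As a rough illustration of the binary and numerical scoring just described: the indicator names, baselines, and values below are invented for the sketch, not taken from the Index itself.

```python
# Hypothetical sketch of the Index's indicator scoring described above:
# binary indicators score 0 or 1, numerical indicators map a measured value
# into the 0..1 range, and an economy's total is the sum over all 50
# indicators (so totals run from 0 to 50). Indicator examples are made up.

def binary_score(ip_component_exists: bool) -> float:
    """Binary indicator: 1 if the IP component exists in the economy, else 0."""
    return 1.0 if ip_component_exists else 0.0

def numerical_score(value: float, worst: float, best: float) -> float:
    """Scale a measured value into 0..1 between a worst and a best baseline."""
    clipped = max(min(value, best), worst)
    return (clipped - worst) / (best - worst)

indicators = [
    binary_score(True),                       # e.g. "patent-term restoration exists"
    binary_score(False),                      # e.g. "trade-secret protection exists"
    numerical_score(12.0, worst=0, best=20),  # e.g. a term-of-protection measure
]
total = sum(indicators)  # over all 50 indicators this would range 0..50
print(round(total, 2))   # 1.0 + 0.0 + 0.6 = 1.6
```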
The evidence behind these scores is drawn from a variety of sources in order to present as full a picture of an economy's IP environment as feasible; all of the sources used are freely available and open to the public.

Critical Evaluation of the AI Indices Methodology

Illegal actions are notoriously difficult to measure and quantify accurately. Estimates must, by necessity, be based on proxies such as physical seizures and questionnaires. This is evident in estimates of online piracy. Rates of piracy and counterfeiting are either specific to one or a handful of economies, or are worldwide and do not provide data at the level of individual economies. The result is a relative shortage of studies that quantify and compare levels of piracy and counterfeiting across a sample of economies large enough to make comparisons empirically reliable.



Thus, measuring counterfeiting and piracy with the indicators and baselines used by these organisations cannot be expected to yield reliable measures of conformity with intellectual property rights. Finding high-quality datasets that cover as many nations as feasible is a difficult task. A comprehensive search for better indicators or proxies to capture the parameters being measured is underway; in the absence of a better option, the organisations were forced to rely on less comprehensive but high-quality databases. Transparent and predictable IP rights have created the legal and economic foundation for an unprecedented number of very successful collaborations between government, industry, academia, and non-governmental groups, which have mushroomed globally. There is a chance that indices like these will spark a worldwide race for AI. The higher ranks are dominated by countries from the Global North, highlighting the danger that countries with a history of sponsoring scientific and technical research and development will solidify their global dominance. When it comes to intellectual property rights, one concept that users are unwilling to accept is the idea of a technology with the ability to think creatively. AI also has a number of other technical capabilities with immediate ethical implications. Delving into the methodology of the AI indices, internally developed software and machine-learning technologies, trained with indirect algorithms, were used to create advanced hierarchical classification models for analysing unstructured data with predictive models. Reviewing the global intellectual property system, with its prevalent first-to-file versus first-to-use debate, unique indicators for facilitating and promoting licensing and technology transfer would not be sufficient to trace the national intellectual property framework governing a particular country.
Antiquated machines functioned under rigid and austere parameters; even when they reached disparate conclusions from similar or slightly different facts, it was feasible to forecast all possible future outputs and judgments, provided the computer's original coding and its information intake were designed according to the intention of the users. As the importance of intellectual property rights and artificial intelligence grows, figures for cyber theft and terrorism are rising sharply. For rights holders operating in worldwide marketplaces, enforcement against IP theft remains a concern, and the indices do not stress indicators covering the custom actions taken by the legislative machinery of particular governments, or the lack of transparency in measures seeking to detain and prevent such occurrences with stringent legal sanctions. The strength of the assertions these results support is questionable, given that both the Index and intellectual property itself are unique and constantly evolving instruments for assessing the state of the international IP environment, with national schemes constantly amended and revised in accordance with the dynamics of each country and the contribution of geographic diversity. In light of the COVID-19 pandemic, many technology-based companies and IP-intensive industries are feeling the impact of the pandemic's ramifications, whereas certain pharmaceutical companies are prospering in drug and essential-medicine supplies while still facing repercussions and hindrances in compulsorily licensing and patenting their creations; automatically derived statistics and indicators would therefore be skewed and distorted.

Suggestions on Evolving an AI Ethics Index

Artificial intelligence has already sparked a revolution. It has the potential to improve people's lives. However, it would be naive to dismiss concerns about a lack of AI ethics and risk assessment. Because of the public's overwhelming faith in AI, unethical use of AI can result in horrifying consequences. Many complex operations, including hiring, medicinal chemistry, and decision-making, are already being automated by the technology. Artificial intelligence has proven so dependable that judges and attorneys are utilising it to establish law and order. With AI taking over at breakneck speed, we must be prepared to cope with the ethical issues it poses. (From Inclusion To Influence: How To Build An Ethical AI Organization, 2021) Asking modern organisations to stop adopting artificial intelligence is not an option. Transparency, governance, and access to remedy are just a few of the instruments that will help hold algorithms accountable and therefore make their use ethical. This necessitates a thorough and comprehensive annual evaluation of AI ethics. Some suggestions for evolving an efficient AI ethics index are listed below:

1. A quantitative depiction of algorithmic progress, affecting the strength of national IP environments and several aspects of economic activity, including R&D spending, innovation, technology creation, and creativity, can be improved by combining intelligent data protection and data analysis with privacy-preserving solutions that conceal the identities of those who divulge the statistics.
2. An innovation competence model can be devised beyond the factual control of parameters, optimising measures of productivity that presage correlations in creative output, and extrapolating persisting trends in intellectual property by assessing algorithmic efficiency through lean accumulation of data rather than through surveillance that manipulates the outcomes.
3. AI models can be trained to broadly eliminate liability for negative aggregated results, take account of the substantial IP provisions concluded in several international trade agreements, support a predictable and transparent rule of law, and incorporate industrial policy by considering indicators of the average rates of top-performing economies, with efficiency benchmarks defining a clear set of directives for accountability.
4. The long-standing issue of counterfeiting should be detected and included while developing an interconnected measure to eliminate anomalies by detecting deviations from the existing model, covering shifting sales, renewal periods of creation in economies, unpublished sensitive business information, and so on.

Conclusions

Artificial intelligence (AI) is widely recognised as one of the most significant technologies of our time. Recent advances in AI are generating commercially important applications across a wide range of industries and professions. Machine learning, image and audio recognition, language processing, and data analytics are examples of technological innovations that have enabled computers to match, if not outperform, human abilities in specific fields. The consequences of AI for intellectual property rights, on the other hand, are unfavourable. AI-induced losses in traditional creative (copyright) or R&D-related (patent) areas may constitute a threat to society. Data and algorithms create various fundamental IP-related challenges, such as how to create intellectual rights in a constantly changing algorithm. Globally, demand for intellectual property rights continues to outpace economic growth rates. The IP system, as we know it, is not going out of style. However, new difficulties are emerging which may necessitate an extra layer of IP rather than the replacement of the present system. The incapacity of current intellectual property laws to deal with AI-induced losses in the copyright and patent areas calls for a close examination of intellectual property rights. It also calls for a rigorous annual assessment of AI's current state. That is where the AI index comes into the equation, covering autonomous systems, AI research and development, the AI industry, public perception of AI, and, most crucially, technical performance. In the next few years, AI systems will become incredibly valuable in IP administration. AI solutions for day-to-day IP management activities are already proving useful to IP strategists. However, as artificial intelligence advances, IP portfolios will involve ever larger databases, and it will become harder for strategists to bridge the gap between technology and security. Over time, the IP sector has acknowledged these issues and updated its strategy to respond to AI discoveries in order to find a position within this system. Despite the challenges, it will be fascinating to see how intellectual property laws evolve from here to incorporate artificial intelligence.


References

1. Artificial Intelligence and Intellectual Property. Artificial Intelligence and IP [online]. [Accessed 10 June 2021]. Available from: https://www.wipo.int/about-ip/en/artificial_intelligence/
2. CHINA SECURITIES INDEX CO. LTD. Methodology of CSI Artificial Intelligence Index. CSI Artificial Intelligence Index [online]. December 2020. [Accessed 10 June 2021]. Available from: http://www.csindex.com.cn/uploads/indices/detail/files/en/433_930713_Index_Methodology_en.pdf
3. AI and Intellectual Property Rights. Home [online]. [Accessed 10 June 2021]. Available from: https://indiaai.gov.in/ai-standards/ai-and-intellectual-property-rights
4. TRIPATHI, Swapnil and GHATAK, Chandni. Artificial Intelligence and Intellectual Property Law. Christ University Law Journal. 2018. Vol. 7, no. 1, p. 83–98. DOI 10.12728/culj.12.5
5. Rating the Rankings: A Method for Measuring the Effectiveness of Head-to-Head Ranking Systems. NYC Data Science Academy [online]. [Accessed 10 June 2021]. Available from: https://nycdatascience.com/blog/student-works/rating-rankings-method-evaluating-effectiveness-head-head-ranking-systems/
6. ZHANG, Daniel, MISHRA, Saurabh, BRYNJOLFSSON, Erik, ETCHEMENDY, John, GANGULI, Deep, GROSZ, Barbara, LYONS, Terah, MANYIKA, James, NIEBLES, Juan Carlos, SELLITTO, Michael, SHOHAM, Yoav, CLARK, Jack and PERRAULT, Raymond. The AI Index 2021 Annual Report. AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA. March 2021. Available from: https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report_Master.pdf
7. BENSON, Kenneth and WEINBERG, Neil. INSEAD (2020): The Global Talent Competitiveness Index 2020: Global Talent in the Age of Artificial Intelligence, Fontainebleau, France. FISHER, Michael and STEELE, Hope (eds.). INSEAD [online]. 2020. [Accessed 10 June 2021]. Available from: https://www.insead.edu/sites/default/files/assets/dept/globalindices/docs/GTCI-2020-report.pdf
8. U.S. Chamber of Commerce Global Innovation Policy Center. International IP Index 2021, Full Report [online]. [Accessed 10 June 2021]. Available from: https://www.theglobalipcenter.com/wp-content/uploads/2021/03/GIPC_IPIndex2021_FullReport_v3.pdf
9. MOLTZAU, Alex. The Global Artificial Intelligence Indexes 2018–2019. Medium [online]. 4 December 2019. [Accessed 10 June 2021]. Available from: https://towardsdatascience.com/the-global-artificial-intelligence-indexes-2018-2019-1b0d0dce5f60
10. Government AI Readiness Index 2019. Oxford Insights [online]. [Accessed 10 June 2021]. Available from: https://www.oxfordinsights.com/ai-readiness2019
11. MOUSAVIZADEH, Alexandra. The Global AI Index. Tortoise Media [online]. 2020. [Accessed 10 June 2021]. Available from: https://www.tortoisemedia.com/wp-content/uploads/sites/3/2020/12/Global-AI-Index-Methodology-201203.pdf
12. MOHANTY, Santosh. Understanding the Dynamics of Artificial Intelligence in Intellectual Property. Tata Consultancy Services [online]. [Accessed 10 June 2021]. Available from: https://www.tcs.com/content/dam/tcs/pdf/discover-tcs/about-us/press-release/understanding-dynamics-artificial-intelligence-intellectual-property.pdf
13. HARTMANN, Christian, ALLAN, Jacqueline E. M., HUGENHOLTZ, P. Bernt, QUINTAIS, João P. and GERVAIS, Daniel. Trends and Developments in Artificial Intelligence: Challenges to the Intellectual Property Rights Framework. IViR [online]. [Accessed 10 June 2021]. Available from: https://www.ivir.nl/publicaties/download/Trends_and_Developments_in_Artificial_Intelligence-1.pdf
14. JOHNSON, Khari. AI Index 2019 assesses global AI research, investment, and impact. VentureBeat [online]. 11 December 2019. [Accessed 10 June 2021]. Available from: https://venturebeat.com/2019/12/11/ai-index-2019-assesses-global-ai-research-investment-and-impact/
15. SUKHADEVE, Ashish. Council Post: From Inclusion To Influence: How To Build An Ethical AI Organization. Forbes [online]. 7 June 2021. [Accessed 10 June 2021]. Available from: https://www.forbes.com/sites/forbesbusinesscouncil/2021/06/07/from-inclusion-to-influence-how-to-build-an-ethical-ai-organization/?sh=1ee72caa1659
16. Artificial Intelligence's Role in the Field of Intellectual Property. Analytics Insight [online]. 22 February 2021. [Accessed 10 June 2021]. Available from: https://www.analyticsinsight.net/artificial-intelligences-role-in-the-field-of-intellectual-property/



2 AI Education in India: First Analytical Report

Gyanda Kakar¹ & Hemang Arora²

¹ Research Intern, Indian Society of Artificial Intelligence and Law
² Research Intern, Indian Society of Artificial Intelligence and Law
research@isail.in

Synopsis. Artificial intelligence (AI) is increasingly driving key technological and business advances. It is used in a wide range of sectors and has an impact on practically every area of production. AI's progress is being fueled by the availability of vast amounts of training data and breakthroughs in affordable high processing power. AI and intellectual property (IP) collide in a variety of ways. Artificial intelligence, though it has no settled explicit definition, is commonly understood as the simulation of human intelligence processes by machines, particularly computer systems. Expert systems, natural language processing (NLP), speech recognition, and machine vision are examples of AI applications. Learning, reasoning, and self-correction are the three cognitive processes on which AI programming focuses. IP should be used as a regulatory structure to encourage invention and innovation through the use of market forces. The justification of IP rights for AI as a tool, and for AI-generated output, rests on a complex study of the outputs, productions, or applications generated by, or with the assistance of, AI systems, tools, or techniques that are susceptible of IP protection, with a focus on works promoted and created by AI-based systems. This report delves into the AI prospects of intellectual property law, the efficiency of the various artificial intelligence indices released by different organisations, and the relevance of AI indexes and their coherence with contemporary AI and intellectual property law.

Introduction

Youth is the largest and most promising part of every country's population. As the country with the world's largest youth population, India has the benefit of becoming a global leader compared to other nations, provided we efficiently harness youth talent. (Arora, 2020) Quality education is the only way to do this. With the increasing scope of artificial intelligence in India, it is time to introduce AI into education, to benefit young people and make them AI-ready for the future. Since the education sector is linked with complex, informationally controlled business environments, recent developments in technology and the growing rate of adoption of AI technologies make it important for issues concerning implementation within the education sector to be recognized and evaluated. (Ketamo) AI technology in India has the capacity to automate both complex and mundane activities, which in turn can lead to optimal use of human capital, allowing people to focus on areas where their overall productivity is greatly enhanced. (Arora, 2020) As mentioned at the outset, youth is the biggest part of any country's population, and quality education plays an important role in helping a country achieve a better future. With a growing number of artificial intelligence services in India, the education sector must update its strategies by taking into account the effect of AI on education in India and how it can help young minds today become tomorrow's capable leaders and innovators. (Arora, 2020) The recently launched SDG Index 2019–2020 by NITI Aayog assigned India a composite score of 58 under the SDG on Quality Education, with only 12 states/UTs scoring more than 64. The government's actual education expenditure is below 3 per cent of GDP, while primary-school pupil-teacher ratios are below 24:1, considerably lower than in comparable countries such as Brazil and China. It will also not be possible to meet the demand for teachers given an ever-growing population and declining resources. (Arora, 2020) In the academic world, artificial intelligence has already taken a long step forward, transforming conventional methods of information transmission into a complex system of simulation and Augmented Reality (AR) instruments. Interactive research materials comprising text and media files can easily be exchanged between interest groups and used more efficiently with the aid of intelligent devices. (Stempedia, 2020) In education, the benefits of using artificial intelligence are vast and complex.
For instance, anything that can efficiently perform tasks that usually rely on human intelligence, such as a computer program, can be considered helpful here. Based on the state-of-the-art research in this field, we identify nine areas in which AI approaches can add value for learning and teaching. (Dharmadikari, 2020) In this report, we argue that the function and effect of artificial intelligence in educational and learning contexts is increasing. In academia, the context, at once intense (multi-cultural) and asynchronous, is becoming more powerful and individual. The intersection of three fields, namely data, computing, and education, has dramatically affected the essence of education: what is learned, when it is taught, and how. India is facing major challenges such as (DIPT, 2021):

1. Lack of Critical Thinking Skills and increased emphasis on rote-learning
2. Outdated Pedagogy
3. Lack of Vocational Training
4. Unequal access to educational opportunities
5. Lack of teaching resources and Individual Attention¹

Objectives

Following are the objectives for this report:
▪ How has education been, or how should it be, democratized in India through AI under its SOTP classification?
▪ How are human privity and autonomy preserved from the interference of AI?
▪ How is sensitive data/information pseudonymized and kept in presence?
▪ How should pedagogy standards be translated into AI?
▪ How should the cultural aspects of technology be interpreted to ensure non-interventionist ed-tech ethics?

How is sensitive data/information pseudonymized and kept in presence?

In India, there currently exists no particular legislation on the protection of personal data like the GDPR or the Data Protection Directive. (Talwar Thakore & Associates, 2020) However, the Information Technology Act, 2000 includes Sections 43A and 72A, which provide a right to compensation in case of improper disclosure of personal information. The term 'pseudonymization' refers to a process that requires the party collecting data to process it in such a way that the personal information is no longer attributable to a specific natural person. The only way in which this pseudonymized data can be linked back to a specific data source is by combining it with other pieces of information, which are protected individually and held separately. The Committee of Experts under the chairmanship of Justice B.N. Srikrishna recommended the technique of 'pseudonymization' under the term 'de-identification' in the Personal Data Protection Bill, 2020. (Justice B.N. Srikrishna, 2018) It also specifies that this de-identified data shall be treated as personal information and thus subjected to the same standards.

The National Education Policy, 2020 has emphasized the use of AI in education. (Mehra, 2020) But, as the authors of this report went through many AI-EdTech start-ups, especially those supported or awarded by the Central or State governments under various programs promoting AI education, it was hard to find any organisation that explicitly explains how the collected data will be processed. Moreover, only a few websites explicitly mentioned that data is collected and that it won't be shared with third parties, and they had no mention of how it will be used by the parent company or what procedure is followed to keep it safe. The objective behind pseudonymization is to ensure that organisations processing personal data implement sufficient measures to keep this information safe. The question to be answered is how data is pseudonymized in the AI-EdTech sector in India; but, as mentioned before, unlike jurisdictions such as the EU, there exists no specific legislation like the GDPR that mandates this process in India. (Shubham Singh, 2020) Thus, no mention of this term could be found on the websites of most of these AI-EdTech start-ups. We need to look into what personal data such organisations can collect and whether they use pseudonymization to secure subjects' personal data. (Deloitte, 2019) We will look into different organisations in the field and analyse the information collected by them.

However, before looking into the practices followed by several players in the AI-EdTech sector, it is essential to understand what data can be collected. Although this could be listed according to the practice followed by each firm, it varies a lot from firm to firm, so it is better to list all the ways in which information can be collected. It starts with basics like assessing conceptual clarity based on the responses of each student and then progressing forward to strengthen weak concepts. It can also be as advanced as tracking emotions through facial tracking, so as to gauge a student's interest, understanding, clarity, and so on. Thus, it can vary exponentially and be based on several parameters like solve-rate, attendance, etc.

¹ Online, D. (2020). How Artificial Intelligence is transforming the education system. Retrieved 15 March 2021, from https://www.dqindia.com/artificial-intelligence-transforming-education-system/; https://dipp.gov.in/whats-new/report-task-force-artificial-intelligence

Now, looking into a few AI-EdTech organisations in India:

1.
Analytixlabs (AnalytixLabs) – this organisation is a start-up founded by several MIT and ex-McKinsey graduates which offers courses using big data to support switching fields. For example, if a law graduate at the age of 30 wishes to switch to a job in the field of big-data analysis, they can choose a course and, after completing it, be well versed in that particular field. The firm is based in Gurugram, Haryana, and its services are currently used by a large number of individuals and organisations. The privacy section on its website says that all personal data collected is safe and secure, subject to best practices in the field. Section 6 of its privacy policy states: 'To protect your personal information, we take reasonable precautions and follow industry best practices to make sure it is not inappropriately lost, misused, accessed, disclosed, altered or destroyed.' However, it should be noted that there is no mention of how, and through which processes, this is ensured.

2. Deepgrade (Smartail) – a firm in India offering services in the field of AI education. To figure out whether it uses, or plans to use, pseudonymization for data processing, its FAQ section needs to be examined. Under the privacy sub-section, merely two questions related to privacy have been answered: first, whether personal data is safe with Deepgrade, and second, whether data is shared with any third party. The website states that it uses AWS services, which means that all data stored is under S3 encryption and is thus very secure; with regard to the latter, it states that no personal information is given to third parties. The S3 storage offered by Amazon in AWS, although GDPR-compliant (the GDPR mandates pseudonymization), is limited to jurisdictions where the GDPR is in force. (2020) Thus, in line with the practice followed by most firms in India collecting personal information for artificial intelligence, no particular method has been mentioned stating how this information will be kept safe.

3. Embibe (Embibe) – among the most subscribed AI-EdTech services in India, it has also won several awards, such as the 'Amazon Award for the best AI for the education sector'. Although data privacy is mentioned in the terms and conditions of the services offered, they only state that all data shall remain secure and subject to the laws applicable in India. Considering that no legislation like the GDPR is yet applicable in India, it is very difficult to know whether practices like pseudonymization are followed by the firm, absent an explicit requirement under present laws.

Thus, it becomes very clear that, unless and until proper rules and regulations regarding pseudonymization are implemented in India, it will be a rare occasion to see firms follow such practices. Moreover, it can also be seen that, as of now, such organisations are doing very little to focus on data privacy, which unfortunately means that most consumers have other priorities while choosing the courses offered.
It also becomes clear that a Personal Data Protection Bill needs to be put in place immediately, as this information, if leaked into the wrong hands, can be misused for several purposes, violating the right to privacy enshrined in the Constitution of India.
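The pseudonymization process discussed in this section can be sketched as follows. This is a minimal illustration using keyed hashing; the field names, key handling, and record below are hypothetical, not drawn from any of the firms reviewed:

```python
import hmac
import hashlib

# Hypothetical sketch of pseudonymization as described above: direct
# identifiers are replaced with keyed, non-reversible tokens, and the key
# that would re-link them is held separately from the pseudonymized records.
SECRET_KEY = b"held-separately-by-the-data-controller"  # assumption: stored apart from the data

def pseudonymize(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256) token."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Asha Verma", "email": "asha@example.com", "solve_rate": 0.82}

pseudonymized_record = {
    "student_id": pseudonymize(record["email"]),  # stable token in place of the email
    "solve_rate": record["solve_rate"],           # analytic fields are retained
}

# The same identifier always maps to the same token, so records can still be
# linked for analysis without exposing who the student is; re-identification
# requires the separately held key plus the original identifier.
assert pseudonymize(record["email"]) == pseudonymized_record["student_id"]
```

Note that this mirrors the Srikrishna Committee's point that de-identified data is still personal data: anyone holding the key (or the original identifiers) can re-link the tokens, so the key must be protected to the same standard.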

How are human privity and autonomy preserved from the interference of AI?

Many look to AI-powered tools to meet the need to scale high-quality education, and with good reason. An increase in educational content from online courses, increased access to digital devices, and the contemporary revival of AI seem to provide the pieces required to offer personalized learning at scale. However, technology has a poor track record of addressing social problems without causing unintended damage. (The Impact of Artificial Intelligence on Learning, Teaching, and Education, 2018) For decades, the holy grail of AI in education has been the design of an autonomous tutor: an algorithm that can monitor students' progress, understand what they know and what motivates them, and provide an optimal, adaptive learning experience. With access to an autonomous tutor, students could learn from home anywhere in the world. But the autonomous tutors of 2020 look very different from this ideal. Auto-tutored education typically presents students with problems that are easy for the algorithm to interpret, rather than ones chosen for the learner's joy. (Garcia, 2019) Current algorithms are incapable of such understanding and are far from generating long-term learning advantages, concentrating instead on short-term gains. The technological challenges are immense: developing the perfect auto-tutor may be as difficult as achieving true AI. Students attend school for many other reasons as well, including growth in socio-emotional skills, human mentoring, and human culture. Displacing these institutions would cost us all of this, whatever the shortcomings of human teachers and existing classrooms. Many of us recall learning from teachers whose mentoring and teaching went well beyond the subject. (Chris Piech, 2020) In addition, isolation is increasing, with younger generations lonelier than older generations. One study showed a correlation between depression in teenagers and screen time, compared with young people who spent time on off-screen activities such as in-person social interaction, sports, or homework. Reduced screen time can lead to significant gains in empathy. (Chris Piech, 2020) UNESCO's mandate inherently calls for a human-centred approach to AI. It aims to shift the discussion to include AI's role in addressing existing disparities in access to information, research, and the diversity of cultural expressions, and to ensure AI does not widen the technological gaps within and between countries. The promise of 'AI for all', particularly in terms of innovation and awareness, must be that each individual can benefit from the technological revolution under way and access its fruits.
(UNESCO, 2020) Since UNESCO regards the creation of socio-emotional competences enabling peaceful and sustainable societies as a reorientation goal of the education sector, pushing children towards screens can undermine those goals. Meanwhile, AI's disruption of the classroom could spread to homes and communities. Regardless of how much "humanity" such technologies exhibit, authority figures such as teachers and parents may not easily adapt to a curriculum delivered entirely on a digital device. In traditionally poor communities, where some see the greatest potential effect of AI technology, this resistance may be harshest, as families who do not want children spending time on screens may resist shifting mentored schoolwork to AI tutors. (UNESCO, 2019) We must also recognize the likelihood of newly effective and inspiring educational instruments being used by malicious actors to teach violent subjects. Much as the growth of Facebook empowered both destructive and democratic organization, newly effective teaching tools could help terrorists develop training in destructive actions. In addition, the objective of building empathy into AI tutors requires processing deep personal data on learners' emotional and psychological circumstances. Could authoritarian regimes use psychological data gathered on schoolchildren for repression or the accumulation of power? (Chris Piech, 2020)
While researchers champion the ability of AI-enabled educational resources to democratize education globally, they must also examine how these instruments can sustain or increase inequality. The training data for current AI algorithms comes disproportionately from privileged groups with access to digital resources. When machine-learning algorithms train on such datasets, for example one in which white students from the US are overrepresented, the result can be biased against groups from other backgrounds and therefore be ineffective, or even discriminatory, when used with a different group. (Chris Piech, 2020) When AI targets young learners, who cannot yet consent to the collection of their personal data, and learners in greater need, who may not understand the risks of sharing personal data or communicating online with anonymous strangers, the challenge is how to use vast quantities of personal data for personalized learning while preserving privacy and personal preferences. Platforms connecting educators around the world have no utopian way of moderating online experiences. (Chris Piech, 2020)
• Facilitate more human learning interactions: Instead of replacing teachers, AI could be built to help educators and educational systems by automating routine activities and curating inspiring problems. Teachers and tools can work together, with teachers filtering valuable suggestions from AIs and tools helping teachers with grading and student tracking.
• Inspiring problems: AI may also contribute to the development and dissemination of locally relevant problems. This synthesis could create a rich teaching and learning environment, supported rather than replaced by technology, through social and emotional interactions. Inspiring, open-ended work offers opportunities for discovery and innovation. However, existing methodologies require datasets covering large numbers of students before accurate, AI-led feedback on open-ended work becomes possible; AI should assist teachers in providing feedback on such work.
• Risk detection for child safety: To ensure that online learning spaces are secure for all learners, particularly children and those in vulnerable contexts, a good amount of energy should be put into developing content-moderation tools.
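To make the content-moderation idea concrete, the following is a deliberately minimal sketch, not a description of any real platform's tooling: a rule-based flagger that routes risky chat messages on a children's learning platform to human review. All pattern names and phrases here are invented for illustration; production systems combine trained classifiers with human escalation workflows.

```python
# Illustrative, assumed sketch of a risk flagger for a learning platform.
import re

# Hypothetical risk lexicons; a deployed system would use trained models.
RISK_PATTERNS = {
    "personal_info_request": re.compile(r"\b(home address|phone number|what school)\b", re.I),
    "off_platform_contact": re.compile(r"\b(whatsapp|telegram|meet me)\b", re.I),
}

def flag_message(text: str) -> list[str]:
    """Return the list of risk categories a chat message triggers."""
    return [name for name, pat in RISK_PATTERNS.items() if pat.search(text)]

def moderate(messages: list[str]) -> list[tuple[str, list[str]]]:
    """Pair each flagged message with its categories for human review."""
    flagged = []
    for m in messages:
        cats = flag_message(m)
        if cats:
            flagged.append((m, cats))
    return flagged
```

The design choice worth noting is that the tool only surfaces candidates; the decision stays with a human moderator, which matters for exactly the child-safety contexts discussed above.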

How should pedagogy standards be translated into AI? Pedagogy is the study of approaches to education, including educational objectives and the methods to achieve them. It is the art, science or profession of teaching. The field relies heavily on educational psychology, which includes science-based theories of learning, and to a certain degree on educational theory, which considers the goals and meaning of education from a philosophical point of view. (Peel, 1998)
Five principles of effective pedagogy
Teachers apply pedagogical techniques in the classroom. Delivering successful pedagogy typically depends on the particular subject to be taught, on identifying the varied needs of the students, and on adapting to conditions in and around the classroom. (Nisai Learning, 2019) The five principles of pedagogy are joint productive activity (JPA), language and literacy development (LD), meaning making (MM), complex thinking (CT) and instructional conversation (IC). (Nisai Learning, 2019) These standards are based on practical principles that have proved effective over several decades in teaching and learning environments with both majority and at-risk minority students. For each standard, indicators are defined that show the action components of the standard and their teaching and learning functions. (West Leederville Primary School) To support the universality argument for such standards, illustrations and examples representing the standards and their measures can be seen in a variety of majority and at-risk minority classroom settings. The aim is for standards-based reforms to reflect their own premise that pedagogy should be fundamental to the achievement of all student learning. Effective pedagogy has become an increasingly important way of improving classrooms, achieving national education objectives and making sure all students succeed. Yet the new reform movement repeats the mistake of previous decades, failing to analyze how standards statements address the central teaching question: how students come to understand what they are supposed to know. Reports show that the concepts of pedagogy, and their relation to the philosophy of education and learning, are often lacking and rarely modelled across the spectrum of teacher preparation, from pre-service to in-service.
(Arora, 2020) Statistics show that one-quarter of teachers newly recruited in 1991 felt unprepared to provide successful instruction in isolated rural and urban schools serving at-risk minorities. Teachers report that their professional development is usually consistent with changed objectives, but that their opportunities for nuanced skill-building are hindered by the excessive number of topics and the confounding range of trainers. National policy surveys and case studies reveal that school-based reform efforts aimed at improving education tend to be desultory rather than systematic or focused. (Preetipadma, 2020) Teaching and pedagogy mean, more than ever, that teachers assist students in the ongoing social activities of the classroom through their engagement. For instance, oral language development undergirds not only lessons but all exchanges between teachers and their students. Pedagogy also ensures that teachers learn about students' homes and neighbourhoods and draw on these local funds of knowledge for academic learning. Today, pedagogy applies research principles and findings that show promise for reaching all students, for example learning groups, language development, directed engagement, emergent literacy, funds of knowledge, cultural compatibility and instructional conversations. (S. Amershi, 2005)
In education there is increasing emphasis on evidence-based practice (EBP), which requires that teachers be able to autonomously produce evidence of their best practices in situ. Such contextualized evidence is seen as key to informing education. One of the main problems of EBP is the lack of methods that let teachers provide evidence of their practice at a low level of detail, in a way that is inspectable and reproducible by others. (AI in Education as a methodology for enabling educational evidence-based practice, 2016) To contribute to education, a learning aid must provide simple and definite pedagogical advantages over conventional methods, and it is critical that it supports the different skills and learning styles found across the broad range of students in a classroom. (Arora, 2020) It should also increase student awareness of the domain objective; in the AI domain, this encompasses both the mapping of abstract information to graphical representations and the different AI algorithms built on those maps. On individual differences, theory indicates that visualization tools affect learners differently according to their varied skills and learning styles. Different students may show different degrees of preference for a visualization method, or may experience different levels of improvement. In addition, knowledge of a subject changes over time, and at a different rate for each student. A tool should therefore take individual learning speed into account, encouraging beginners while continuing to support learning as the learner's knowledge grows. (AI in Education as a methodology for enabling educational evidence-based practice, 2016) Much of the work on visualization tools has aimed to demonstrate effectiveness by measuring learning gains. However, findings from these studies remain mixed, in contrast to educators' intuition that visualization of algorithms is pedagogically advantageous.
Alternatively, preliminary findings on other factors, such as a tool's ability to stimulate student motivation and thereby indirectly boost learning results, seem encouraging. (AI in Education as a methodology for enabling educational evidence-based practice, 2016) Actively involving students in the study process is one way to encourage learning. Within visualization software, this can be accomplished by promoting interaction between the student and the tool. Active participation not only improves motivation but also enables students to consciously build awareness and new insight into the pedagogical impact of a visualized method. Many educators understand the potential advantage of using visualizations for
classroom presentations, but using visualizations for coursework such as assignments or individual explorations can lead to higher levels of involvement. In such activities, students participate actively by answering questions about the visuals or the underlying principles, modifying algorithm inputs, evaluating the resulting behaviour changes, or creating new visualizations. Because these activities demand more effort from students, they should be more engaging than passive activities such as watching visualizations in class. Used across all these learning activities, visualization software can improve the educational benefits of the tools. AI methods, used as a means of generating and computerizing teaching and learning knowledge, give teachers the resources to collect information systematically, in detail and incrementally, in a form that can be shared and examined by others. This analytical view of AI's contribution to educational affairs offers an important perspective on the role AI has played in education. Knowledge representation (KR) is central to AI and, perhaps, to any scientific endeavour, for at its most basic (and most general) it is an instrument for conceptual explanation of, and reasoning about, the world we live in. Scientific theories are essentially ways of describing knowledge of the universe, albeit at various levels. In AI, a knowledge representation is by definition a theory of intelligence, or more specifically of intelligent reasoning. Knowledge elicitation (KE) is an inseparable complement to knowledge representation, focusing on how that knowledge about the world is obtained. KE can be carried out alone or collaboratively, as questioner or respondent, and the process can be formal or informal, structured or unstructured. In AIEd, different types of KE instruments have been adopted, produced and tested.
For example, questionnaires and interviews have been borrowed directly from the social sciences, whereas other approaches have been developed within AIEd itself and have grown in power and applicability. By applying AI as a methodology to promote evidence-based educational practice, the partnership between AIEd and education can be strengthened. AI provides educators with basic tools to produce evidence of their practice that can be inspected and replicated by the larger educational community. AI methods of knowledge elicitation and representation can enable practitioners to participate in computational design thinking, giving them independence in identifying, designing and inspecting their real-world practices at a low level of representation. AI brings education into the cognitive age. New levels of personalization are transforming the educational environment and learning itself. Cognitive tools now allow educators to understand and address the styles and preferences of individual students across all levels of learning ability. The outcome is a holistic pathway over the lifetime of each learner.
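The idea of knowledge representation in education can be illustrated with a toy example. The concept graph and function below are assumptions made purely for illustration, not a representation used by any system named in this chapter: prerequisites between concepts are encoded explicitly, and a simple inference determines which concepts a learner is ready to study next.

```python
# A minimal, assumed example of knowledge representation in education:
# a prerequisite graph over concepts, plus one inference over it.

# Hypothetical concept graph: concept -> set of prerequisite concepts.
PREREQS = {
    "counting": set(),
    "addition": {"counting"},
    "multiplication": {"addition"},
    "fractions": {"multiplication"},
}

def ready_to_learn(mastered: set) -> set:
    """Concepts not yet mastered whose prerequisites are all mastered."""
    return {c for c, pre in PREREQS.items()
            if c not in mastered and pre <= mastered}
```

Because the representation is explicit, a teacher or researcher can inspect and reproduce exactly why the system recommended a concept, which is the inspectability that the EBP discussion above calls for.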
How should the cultural aspects of technology be interpreted to ensure non-interventionist ed-tech ethics? The functioning of a democratic society depends on education. Every great political thinker has underlined the need for, and value of, a quality public education system, for such a system ensures that the people of a society are prepared to make decisions in a democratic political system that favour society as a whole rather than themselves. A quality public education system should ensure the development of key competences, such as critical thinking, judgment and citizenship, so that children and adults grow into, and contribute to, a democratic society. Young people at this age are especially malleable, and educational material can really set them on a particular track. We already see AI instruments that ingest a great deal of data on career paths and suggest what students should learn, so AI is informing young people's decisions. For cases like this, the ethical effects are immediate and quickly felt. We place a great deal of trust in the skills of technology designers, who often have a limited understanding of pedagogy or of how universities and classrooms work, and we trust them to create tools that are parachuted into educational settings. It is crucial to understand current and historical attempts to use edtech to package and automate education. This understanding will give administrators and practitioners insight into the types of policies and activities that edtech supports. Such technologies are seldom neutral, and understanding their biases, particularly their preference for packaging and automated delivery of education, can lead us to ask important questions about their objectives and consequences. Edtech programmes should promote an appreciation of their own ideological, social, political and economic contexts.
Although edtech initiatives must keep people at the centre of the design, creation, evaluation, implementation and management of edtech and related educational practices, we cannot forget that this technology is not impartial, politically or ideologically. Organizations should cooperate in the development of edtech. In order to carry
out independent testing, assessment or criticism of products based on their experience and expertise, researchers may, for example, collaborate with edtech companies. People working in edtech companies should, in turn, act to close the gap between their practice and current edtech research.
Outcome Analysis
In societies less accustomed to the Internet, each site faces different forms of child abuse, and exploitation is likely to be intensified. To fulfil the promise of online education for the least developed countries, researchers must take responsibility for protecting potential users of online tools against malicious actors who can exploit even the most sophisticated users. Initiatives like the ATL AI Base Module of the Atal Innovation Mission, the planet code, the launch of INDIAai as the centrepiece for all AI in India and beyond, and CBSE's efforts to introduce AI in schools are some of the many commendable steps towards AI education in India and preparation of Indian youth for the future. Artificial intelligence will also play a large part in achieving the country's 2030 targets under the United Nations Sustainable Development Goals, one of which is to increase the supply of qualified teachers significantly. AI offers advanced techniques that make learning simple, engaging and productive for students and teachers alike. The growing complementarity between human and artificial intelligence lets society make the most of the educational system without incurring unnecessary service costs. In this era of personalization, everything is tailored to personal taste, be it Netflix recommendations or Facebook advertising. AI empowers educators, right from the start of a semester, to meet the unique needs of every student, giving instructors more time to help students improve their cognitive skills.
(Dharmadikari, 2020) AI in education can boost teachers' effectiveness through various applications, such as automating mundane, repetitive routine work like attendance-taking, automating grading, and personalizing learning journeys based on knowledge, skill and comprehension. AI has enabled the automation of grading and document processing; teachers and administrators no longer need to perform these activities manually. In the coming years, AI will make registration and admission processes more efficient. At a time when many educational institutions are stretched beyond capacity, AI solutions can take over some services. It is therefore advisable, in the interests of improving educational quality and minimizing errors in administrative paperwork and study records, that organizations use solutions assisted by the latest technologies.
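To show what even the simplest automated grading might look like, here is a deliberately naive sketch, assumed for illustration only and far cruder than the NLP models such platforms actually use: a short answer is scored by word overlap with a model answer, and low-confidence scores are routed back to the teacher.

```python
# Naive, assumed sketch of automated short-answer grading by word overlap.
import re

def tokens(text: str) -> set:
    """Lowercase word set, ignoring punctuation and case."""
    return set(re.findall(r"[a-z]+", text.lower()))

def grade(student_answer: str, model_answer: str) -> float:
    """Jaccard similarity between answer word sets, in [0, 1]."""
    a, b = tokens(student_answer), tokens(model_answer)
    return len(a & b) / len(a | b) if a | b else 0.0

def band(score: float) -> str:
    """Map a similarity score to a coarse band; thresholds are assumptions."""
    if score < 0.3:
        return "review"   # too uncertain: send to the teacher
    return "partial" if score < 0.7 else "full"
```

The "review" band reflects the point made throughout this section: automation should relieve teachers of routine marking while keeping ambiguous cases in human hands.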
Case Studies
IIT BOMBAY: AI-PROCTORED 'INVIGILATOR' AND PRIVITY CONCERNS
A proposal at IIT Bombay to build an AI (artificial intelligence) model for automated proctoring of students raised concerns among several faculty members, who claimed the institute had given the nod to the use of captured student videos without the students' consent. Currently, IIT-B faculty monitor remote proctored tests, which require students to sit in full view of the camera. AI proctoring would automatically identify instances of cheating during examinations. When students received a long set of instructions on how to proceed, nobody told them they were signing up to allow their videos to be post-processed. At the time the email to students was drafted, there was already a plan to work on this kind of initiative. Those undertaking the project assured the Ethics Committee that the purpose of the study would be defeated if consent were sought. (Goradia, 2020) Possible solutions are offered by Tharun Komari. According to Woldeab and Brothen's study, students with high test anxiety perform worse in a proctored online setting. Students and colleges thus face a dilemma between data protection and cheating prevention. The development of this tool showed that whatever solution is introduced must not unintentionally harm students by violating their right to privacy, yet must be effective enough to prevent students from outsmarting it. During testing, his model prevented students from visiting social media sites, though some students fooled the system using their mobile phones. The model nevertheless achieved high accuracy and was, overall, more efficient and privacy-friendly. With the growth of online education there is no perfect way to prevent cheating online, and the use of AI is just one approach. (Komari) Perhaps the main lesson of this project is that a behavioural approach is the best way to avoid surveillance tactics becoming widespread in online learning.
Universities should also reassess how they measure students' knowledge.
ATL AI STEP-UP MODEL
The Base Module was designed specifically for students younger than 12 with no background in AI, to spark their curiosity about AI and enable them to contribute to the innovation ecosystem. The Step-Up Module was planned and released to involve young people throughout the country, promoting inclusive learning and the development of integrated AI technologies among the young. These modules will allow youth to engage meaningfully with AI-based technologies and enhance learning through digital literacy, coding and computational thinking. Competencies such as logical reasoning, critical thinking and problem-solving will be among the most significant skills for success in the coming decades. (Dharmaraj, 2020)
Real-time text-to-speech and text translation systems
In accordance with the Draft National Education Policy 2019, real-time text-to-speech and text translation systems can be used to disseminate knowledge seamlessly in regional languages, which fosters learning in the mother tongue. DIKSHA, the digital knowledge-sharing infrastructure set up by MHRD, and e-PATHSHALA (initiatives under the Sarva Shiksha Abhiyan) can incorporate these translation systems. For example, if an e-PATHSHALA textbook is available only in Hindi, text translation services can make it accessible in other regional languages. Such systems can eliminate the language barrier and enable interoperability of teachers across states, helping meet demand better than ever. (Arora, 2020)
Automated Grading
With the draft 2019 National Education Policy prioritizing online learning, automated grading could be used for large-scale tests on platforms such as DIKSHA, e-PATHSHALA and SWAYAM, for subjective as well as objective questions, using techniques such as natural language processing. Automated content development is another area where AI can contribute: given broad sources of internet knowledge, NLP can apply automatic text summarization to create crisp content for publication on these e-learning sites. A uniform curriculum built with ML-based methods would be consistent with nationally established learning outcomes (MHRD has developed the Performance Grading Index (PGI), a 70-indicator matrix for states and UTs) and would critically assist in determining indicators such as the percentage of students achieving a minimum level of competence. (Arora, 2020)
VISHAKHAPATNAM, ANDHRA PRADESH, 17 DISTRICTS: FINDING OUT THE LOOPHOLE
The government of Andhra Pradesh conducted an experiment in 17 districts.
A machine-learning framework collected and analyzed student data across different dimensions, such as academic performance, reasons for dropping out of school, teachers' qualities and skills, culture, gender and so on. The application found predictive trends, including which students were likely to drop out. The state government obtained a list of thousands of students likely to leave school in the 2018-19 academic year. Such studies demonstrate that AI works as a catalyst to streamline the education system and enable organizations to make informed decisions. (Dharmadikari, 2020)
SUPERVISED CLASSIFICATION MODELS TO REDUCE DROP-OUT RATES
With personalized input from AI programs, we can curb drop-out rates across India, which stand at around 4% at the primary level but rise to as much as 20% in higher education.
As individual AI tutors gather data points at every moment of a child's educational journey, classification ML models can be used to forecast the likelihood of children dropping out, so that proper remedial mechanisms can be introduced. Such intervention would contribute to higher-education enrolment and help ensure that a substantial proportion of adults reach education mandates in line with the SDGs' objectives. (Arora, 2020) (Stempedia, 2020)
PATTERN DETECTION TO INCREASE INCLUSIVENESS
The use of AI can positively impact the goals of reducing gender differences in education and integrating people with disabilities: near-real-time speech-to-text in language systems not only lets visually impaired users participate more fully but also allows mute people to be heard, making education more inclusive. In support of inclusive education for children with special needs under MHRD's Samagra Shiksha programme, children with disorders that result in speech impairment could benefit from machine-learning models integrated into e-learning websites that detect speech patterns, augment the speech by correcting mispronunciations or broken words, and output the result in audio or text format. Moreover, some educational institutions in the present system may, consciously or otherwise, harbour implicit prejudice, for instance selecting more students under policies that deny equal opportunity to certain indigenous groups. Selection processes, as with employment criteria, can be monitored by AI to help ensure the process is fair, a precursor to inclusive education.
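The kind of fairness monitoring of selection processes mentioned above can be sketched minimally as follows. This is an assumed illustration, not any institution's actual audit procedure: selection rates are computed per group and groups falling below a common "four-fifths" heuristic threshold are flagged for human investigation.

```python
# Assumed sketch of disparate-impact monitoring for a selection process.
from collections import defaultdict

def selection_rates(records):
    """records: list of (group, selected) pairs; returns rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in records:
        totals[group] += 1
        selected[group] += chosen
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(records, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the best-served group's rate (the common 80% heuristic)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]
```

A flag here proves nothing by itself; it is a trigger for examining whether a policy or model is denying equal opportunity, which is the monitoring role envisaged for AI in the paragraph above.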
(Arora, 2020)
CHATBOT
In a country such as India, chatbots can be integrated into digital education infrastructure or made accessible via IVRS frameworks. Trained on curriculum subjects, they can answer a good percentage of student questions instantly, reducing the workload of teachers, who can then concentrate on more creative activities. The number of internet users in rural India was projected to exceed 290 million by late 2019, so mobile penetration is not an obstacle here. (Arora, 2020)
AI IN ADAPTIVE LEARNING
AI is a vital element in adaptive learning thanks to intelligent tutoring programs such as Mastery Learning and Carnegie Learning. (Li, 2020) Carnegie Learning provides individualized tuition and real-time feedback to post-secondary students through cognitive science and AI technologies. Mastery Learning promotes the efficacy of individualized instruction in schools by organizing curricula around a student's progress and combining timely, tailored feedback with immediate opportunities for corrective practice and improvement activities. (Dharmadikari, 2020)
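Mastery-style adaptive sequencing of the kind just described can be sketched in a few lines. The actual algorithms used by the named products are proprietary, so everything below, the update rule, threshold and skill names, is an assumption for illustration: a skill is practised until its estimated mastery crosses a threshold, after which the learner moves on.

```python
# Toy, assumed sketch of mastery-based adaptive sequencing.

MASTERY_THRESHOLD = 0.95  # assumed cut-off for "mastered"

def update_mastery(p, correct, learn_rate=0.3):
    """Crude mastery update: move the estimate toward 1 after a correct
    answer and toward 0 after an incorrect one."""
    target = 1.0 if correct else 0.0
    return p + learn_rate * (target - p)

def next_skill(mastery, curriculum):
    """Return the first skill in curriculum order not yet mastered,
    or None when the curriculum is complete."""
    for skill in curriculum:
        if mastery.get(skill, 0.0) < MASTERY_THRESHOLD:
            return skill
    return None
```

The key design property, timely feedback and immediate opportunity for corrective practice, is captured by re-serving the same skill until the estimate crosses the threshold rather than advancing on a fixed schedule.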
PICTOBLOX AI
PictoBlox AI, India's first immersive AI training platform, is a complete project-based learning environment with artificial intelligence and machine-learning software integrated into its graphical programming interface. (Stempedia, 2020)
BIOMETRIC AUTHENTICATION: MONITORING EDUCATION
AI will take over teachers' routine administrative tasks. For example, biometric authentication of students can be implemented and incorporated into UDISE+ (Unified District Information System for Education), one of the largest school education management information systems. Biometric attendance data can also serve as a proxy for district, block or state inclusiveness, and can easily be monitored to support national indicators such as youth and adult participation rates and the proportion of men and women enrolled in higher education and technical and professional training. An example is Presentation Translator, a free PowerPoint plug-in that subtitles what the instructor is saying in real time. In addition, Azure Cognitive Services lets students hear or read what is said in their own native language, thanks to AI speech recognition and translation. These tools offer opportunities to students who cannot attend school due to illness, who need to study at a different pace, or who do not understand the language of instruction. (Arora, 2020)
PERSONALIZATION OF CONTENT
Once such material is hosted on these e-learning sites, individual review and suggestion become possible at wide scale. At present, no individual attention can be given to each student. However, AI-driven development and grading of content will ensure that children have customized study pathways, by recognizing each student's pain points and making suggestions accordingly. Essentially, AI-powered education infrastructure can offer a personalized tutor to every student in India.
Cram101, which uses AI to break down textbook content into comprehensive study guides with chapter summaries, flashcards and practice tests, is one example of such a mechanism. Netex Learning is another useful AI interface: it allows educators to design digital curricula and materials across devices, incorporate rich media such as audio and video, and run online self- or teacher-led assessment. JustTheFacts101 has a similar design: it highlights and creates text-specific summaries, which are then archived and made available in a digital collection on Amazon. Several companies, including Content Technologies Inc. and Carnegie Learning, are creating intelligent instructional design and interactive tools that use AI to provide learning, testing and feedback to students from pre-K onwards. (Arora, 2020) EduGorilla is another company that uses AI to analyse big data in the Indian education industry. It analyses data from 600,000 schools and more than 70,000 coaching centres to provide top-quality results for students, acting as a one-stop shop for education in India. (Velayanikal, 2020)
Inferences Delivered
The education sector in India has opened many avenues for AI involvement and will continue to do so. Without this superpower, often called the new electricity of the 21st century, it could be a tough route for the country to meet the quality-education goals under the SDGs. A bottom-up approach is required: the SDGs must be localized at the grass-roots level. To achieve the UN objectives within the decade, it is necessary to encourage progress towards them, and following up the UN indicators on a real-time basis would be an important step. (Arora, 2020) The 'Transforming Aspirational Districts' programme has demonstrated how monitoring district objectives can foster healthy competition between districts, enabling each district to achieve its mandates. Once AI-powered systems are started and operationalized, they can only improve as more and more data become available. (Kant, 2019) One of the goals of the Quality Education agenda is to increase the nation's supply of trained teachers significantly by 2030.
Although the wide supply gap that exists cannot be filled outright, teaching can be made more effective using AI applications. The existence of risks need not eliminate our hope; rather, it should push us towards a mature target. Many have drawn a vision for a better system of education. In the next few years, AI-powered technology will bring several improvements to the work of educators and to educational best practices. Today's students will live and work in a world in which AI is a reality, and it is necessary for our educational institutions to introduce and use the technology. Although significant improvements to the system may take years, artificial intelligence has the ability to change how we think about education dramatically. In the meantime, AI will help meet gaps in learning and teaching: as AI-powered educational solutions grow, they allow schools and teachers to achieve more. This means that India, with its significantly increasing youth population, can harness the immense potential of AI in education to achieve its goal of being a 'superpower', or a 5-trillion-dollar economy. (Maskey, 2020)



References
1. Porayska-Pomsta, K. 2016. AI in Education as a methodology for enabling educational evidence-based practice.
2. Arora, Abhijay. 2020. Publications: Niti Aayog. Niti Aayog Website. [Online] January 14, 2020. [Cited: March 15, 2021.] http://niti.gov.in/indian-educationsector-ripe-disruption-artificial-intelligence.
3. Piech, Chris and Einstein, Lisa. 2020. Scientific American, Observations. [Online] February 26, 2020. [Cited: March 13, 2021.] https://blogs.scientificamerican.com/observations/a-vision-of-ai-for-joyfuleducation/.
4. Deloitte. 2019. Unlocking the Potential of India's Data Economy: Practices, Privacy and Governance.
5. Dharmadikari, Swapnil. 2020. ePravesh Blog, Guest Post. [Online] 2020. [Cited: March 15, 2021.] https://www.blog.epravesh.com/artificial-intelligence-aiin-indian-classrooms-a-need-of-the-hour/.
6. Dharmaraj, Samaya. 2020. OpenGov. [Online] August 17, 2020. https://opengovasia.com/india-launches-ai-step-up-modules-to-studentsnationwide/.
7. DIPT. 2021. Report of Task Force on Artificial Intelligence. Department for Promotion of Industry and Internal Trade, MoCI, GoI.
8. Smartail. FAQs. [Online] [Cited: March 15, 2020.] https://smartail.ai/faqs/.
9. Garcia, Elaine. 2019. OpenAccess Government, Technology News. [Online] August 9, 2019. [Cited: March 15, 2021.] https://www.openaccessgovernment.org/artificial-intelligence-ai-ineducation/66346/.
10. Goradia, Abha. 2020. Indian Express, Cities, Mumbai. [Online] November 2, 2020. [Cited: March 15, 2021.] https://indianexpress.com/article/cities/mumbai/iit-bombay-proposal-on-aibased-proctoring-raises-concern-among-faculty-7048382/.
11. Amazon. 2020. How can I confirm that my AWS infrastructure is GDPR-compliant? [Online] September 18, 2020. [Cited: March 14, 2021.] https://aws.amazon.com/premiumsupport/knowledge-center/gdpr-compliance/.
12. Justice B.N. Srikrishna. 2018. A Free and Fair Digital Economy: Protecting Privacy, Empowering Indians.
13. Kant, Amitabh. 2019. Hindustan Times. [Online] September 19, 2019. [Cited: March 15, 2021.] https://www.hindustantimes.com/opinion/the-aspirationaldistricts-programme-is-transformative/story-zl34THWuh2sjpEnbVL7LwK.html.
14. Ketamo, Harri. UNESCO MGIEP, Opinion Piece. [Online] [Cited: March 15, 2021.] https://mgiep.unesco.org/article/dreams-and-reality-how-ai-will-changeeducation.
15. Komari, Tharun. Rewire Mag, Data & Tech. [Online] https://rewire.ie.edu/ai-spot-cheating-breaching-student-privacy/.
16. Li, Derek Haoyang. 2020. Forbes Insights. [Online] March 26, 2020. https://www.forbes.com/sites/insightsibmai/2020/03/26/how-ai-can-realize-the-promise-of-adaptive-education/?sh=4de819ad12b3.
17. Maskey, Sameer. 2020. Forbes. [Online] March 3, 2020. https://www.forbes.com/sites/forbestechcouncil/2020/03/03/ais-potential-ineducation/?sh=cd8ac8b52016.
18. Mehra, Samiksha. 2020. How India is integrating AI in the New Education Policy? IndiaAI. [Online] MeitY, NeGD, NASSCOM, August 3, 2020. [Cited: March 15, 2021.] https://indiaai.gov.in/article/how-india-is-integrating-ai-in-the-neweducation-policy.
19. Nisai Learning. 2019. [Online] Medium, June 28, 2019. [Cited: March 15, 2021.] https://medium.com/@NisaiLearning/5-principles-foreffective-pedagogy-8d8ba1130430.
20. Peel, Adwin A. 1998. Britannica: Pedagogy. [Online] 1998. [Cited: March 15, 2021.] https://www.britannica.com/science/pedagogy.
21. Preetipadma. 2020. Analytics Insight. [Online] June 9, 2020. https://www.analyticsinsight.net/india-trails-behind-adopting-ai-education/.
22. AnalytixLabs. Privacy Policy. [Online] [Cited: March 15, 2021.] https://www.analytixlabs.co.in/privacy.
23. Trend Micro. Pseudonymization. [Online] https://www.trendmicro.com/vinfo/us/security/definition/pseudonymization.
24. Amershi, S., Arksey, N., Carenini, G., Conati, C., Mackworth, A., Maclaren, H. and Poole, D. 2005. Designing CIspace: Pedagogy and Usability in a Learning.
25. Singh, Shubham. 2020. India's AI Mission Gets Go-Ahead: Will It Change The Future Of Education? Inc42. [Online] December 26, 2020. [Cited: March 13, 2021.] https://inc42.com/buzz/indias-ai-mission-gets-go-ahead-will-it-changeeducation-future/.
26. Stempedia. 2020. Stempedia Blog. [Online] July 8, 2020. [Cited: March 15, 2021.] https://thestempedia.com/blog/ai-education-scopeartificial-intelligenceindia/.
27. Talwar Thakore & Associates. 2020. Data Protected – India. Linklaters. [Online] March 2020. https://www.linklaters.com/en/insights/data-protected/dataprotected---india.
28. Embibe. Terms of Service. [Online] [Cited: March 13, 2021.] https://www.embibe.com/tos.
29. Tuomi, I., Cabrera Giraldez, M., Vuorikari, R. and Punie, Y. 2018. The Impact of Artificial Intelligence on Learning, Teaching, and Education. Publications Office of the European Union. ISSN 1831-9424 (online).
30. UNESCO. 2019. [Online] 2019. [Cited: March 15, 2021.] https://en.unesco.org/news/how-can-artificial-intelligence-enhance-education.
31. UNESCO. 2020. [Online] 2020. https://en.unesco.org/artificialintelligence/education.
32. Velayanikal, Malavika. 2020. LiveMint, Start-ups. [Online] June 9, 2020. https://www.livemint.com/companies/start-ups/entrepreneurs-experimentas-edtech-catches-on-in-india-11577626241123.html.
33. West Leederville Primary School. Pedagogical Approaches. [Online] https://wlps.wa.edu.au/pages/pedadogical-approaches/.
34. Thales. What is Pseudonymisation? [Online] [Cited: March 14, 2021.] https://cpl.thalesgroup.com/faq/data-protection-security-regulations/whatpseudonymisation.



3. Responsible AI in India: First Analytical Report

Ishan Puranik, Aathira Pillai & Rahul Dhingra
Research Interns, Indian Society of Artificial Intelligence and Law
research@isail.in

Synopsis. This analytical report is one of the key analytical reports on India and the impact of AI technologies on India Inc. in general. The following considerations are established in the report:
o Risk assessment of the impact of incorrect predictions must be indigenized to Indian standards and, where reasonable, systems must be designed with human-in-the-loop review processes;
o Standards on bias evaluation must be developed, such that the bargaining power of Indian companies and governments prevails;
o The hierarchies of Fairness, Responsibility, Reproducibility, Transparency and Accountability, both vertical and horizontal, must be thoroughly assessed so that the bargaining power of Indian entities and the State prevails;
o The dynamics of automation of services must be thoroughly assessed, and human privity and autonomy must be respected in the context of India's respect for fundamental rights and civil, individual and creative liberties;
o The hierarchies of privacy by design and by default, and the pseudonymization of data, must be assessed and evaluated in terms of their generality, indifference and diverse approaches;
o AI services and products being procured (and democratized and localized) in India must be scrutinized on a case-by-application basis (we call it a Species approach);
o AI's status as an electronic legal personality or a limited juristic entity has to be assessed on the basis of trends and data evaluation; this report will help us further inspect the attributions that work in India.
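The pseudonymization mentioned in the synopsis can be made concrete with a minimal sketch. This is not any specific vendor's or regulator's scheme; it simply illustrates one common technique, a keyed HMAC pseudonym, where the key name and record fields are hypothetical:

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym.

    Unlike a plain hash, the keyed HMAC means the mapping cannot be rebuilt
    by anyone who lacks the key, yet the same input always yields the same
    pseudonym, so records remain linkable for analysis. Truncating to 16 hex
    characters is an illustrative choice, not a standard.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# Hypothetical student record pseudonymized before leaving the data controller.
KEY = b"demo-key-held-separately"  # in practice: a managed secret, never stored with the data
record = {"student_id": "IN-2021-00042", "score": 87}
safe_record = {"student_id": pseudonymize(record["student_id"], KEY), "score": record["score"]}
```

The design point is that pseudonymization, unlike anonymization, is reversible by the key holder and therefore still counts as personal-data processing under most privacy regimes.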

Introduction
India is one of the world's fastest-growing economies and holds an influential position globally. This has given India the potential to make a big contribution to reinforcing the AI revolution. India has a history of breathing new life into struggling and slow-moving industries by bringing in innovative technology solutions in a secure and reliable manner, optimized for the needs of the



Indian market, the Indian people and the Indian economy. India has also set up a strong and supportive innovation infrastructure for research and development in the technology sector. Technological innovations and revolutions help humans be more efficient and productive. These innovations come from the creation of new knowledge and the better application of existing knowledge. While humans are extremely good at research, a number of research tasks are time- and resource-consuming, such as the selection of certain variables from a group of options. Such tasks can be delegated to machines (predictive AI algorithms); this will shorten the time between now and the next innovation, and it can also reduce the time spent on administrative tasks across the business chain. The acceptance and adoption of AI across the business chain, including start-ups, the private sector, public-sector organizations and government bodies, will unleash this potential by generating a virtuous demand-and-supply cycle, helping these bodies make informed decisions, prepare for crises and choose the best route for expansion. The government must act as a catalyst in the process of incorporating AI into the economy by encouraging collaboration, providing infrastructure, stimulating innovation through research and development, and providing protection and safeguards to those parties that are 'taking the AI leap'.

India's AI Habitat
AI systems have risen in popularity over the last few years owing to their enormous potential to unlock economic value and aid in the mitigation of social difficulties. This has led to the widespread adoption of artificial intelligence across many different contexts and use cases in multiple domains. A paradigm change of this magnitude in the progress of AI inspires both optimism and alarm.
AI has the potential to eradicate illness and poverty throughout the world, improve productivity and boost economies, but it also has the potential to take jobs away from hardworking citizens and impoverish thousands. Although the precise form of the changes that AI will bring to many industries is as yet unknown, considerable disruption to the workforce is almost certain. (Chakrabarti, Sanyal 2020)

India ranks third in the world in terms of providing high-quality research in the realm of AI (computed as the number of citable documents in peer-reviewed journals). India has developed research in the realms of unsupervised learning, reinforcement learning, explainable AI, causal modelling and blockchain. As of August 2020, the Indian AI market is worth $6.4 billion. The majority of AI market share and size comes from the multinational corporations (MNCs) in the Information Technology (IT) category (41.1% market share), the technology (software and hardware) category (23.3% market share) and the electronics category. AI is also used in the banking, financial services and insurance industry, which holds a 9.6% market share of AI services. (Thomas 2020) There are around 91,000 AI professionals working in India, with a typical income of Rs. 14.7 lakhs; the highest earners, in Mumbai, earn Rs. 16.7 lakhs.

Creation and Basis of AI Eco-Systems
An eco-system is a group of independently functioning but inter-dependent bodies that aid each other in the growth and development of the community as a whole. In a jungle eco-system, plants provide food to the herbivores, which become food for the carnivores, which in turn fertilize the soil, and so on. Similarly, an AI eco-system is a set of independent bodies that, when working smoothly, give impetus to the growth of India's AI infrastructure and power. As per a report by Accenture, the five pillars of an AI eco-system are:
1. Universities (for research and development)
2. Major corporations (for investment, large-scale adoption, and lobbying for policy)
3. Start-ups (for local-level optimization, also known as entrepreneurial dynamism)
4. Policy makers (to manage fears and incentivize further adoption and innovation)
5. Multi-stakeholder collaborations (to generate a consensus about where we want AI to take us)
The relative importance of these five pillars varies from market to market, owing to factors such as the maturity of individual businesses and the region's political culture. (Trapasso, Vujanic, Accenture 2010)

India's AI eco-system is healthy, but can do much better.

Universities: In 2016, the country produced 2.6 million STEM (Science, Technology, Engineering and Mathematics) professionals through its robust engineering and technology universities. Although this qualifies them for the core work required for AI R&D (i.e., there is a large number of knowledge makers), levels of practical competency and employability have remained poor.
Further, there is a lack of research-based opportunities within the country that utilize professionals' skills and reward them properly for their efforts (i.e., there is a low number of knowledge takers). The mismatch between knowledge makers and knowledge takers results in problems such as brain drain and the sub-optimal use of the human capital that is generated every year.

Major corporations: Leading Indian banks have rolled out, or are pilot testing, AI-powered conversational chatbots for their websites and/or mobile applications. Tata Motors has collaborated with Microsoft to leverage the latter's connected-vehicle technology, which uses AI, advanced machine learning and the Internet of Things to enhance the driving experience.

Start-ups: In 2016, India placed third among G20 nations in terms of the number of AI start-ups, with a compound annual growth rate of 86 percent since 2011, which is greater than the worldwide average.

Policy makers: In June 2020, the Indian government announced www.ai.gov.in, a dedicated artificial intelligence (AI) platform jointly created by the Ministry of Electronics and IT (MeitY) and the national IT trade organization NASSCOM (the National Association of Software and Service Companies). It is being marketed in India as a one-stop shop for AI breakthroughs.

Multi-stakeholder collaborations: The Ministry of Electronics and Information Technology has recently established a "policy group" in collaboration with NASSCOM to develop a regulatory framework and road map for new technologies such as AI, blockchain and big-data analytics. Furthermore, the Ministry of Commerce and Industry has established an "AI task force" to investigate the potential for using AI for development in various industrial and service areas.

Given the health of, and the promising developments in, India's AI realm over the past couple of years, there is reason to believe that this segment will see significant improvement and boom in the coming years. AI is already being put to good use in India:
- An AI-based flood-forecasting model that was adopted in Bihar is now being expanded to span the entire country.
- SigTuple, an Indian start-up, designed a data-driven intelligence platform for healthcare management that can analyse blood slides and generate an entire pathology report without requiring a pathologist.
- Gnani.ai is one of the new well-known companies in India taking a shot at Indic NLP; it creates speech-analysis and assistant products for Indian dialects and other languages.
A country full of excited and enthusiastic youth, India's AI journey is still in its nascent stages.
The Stanford AI Vibrancy Index places India sixth out of 26 countries across 22 indicators spanning the categories of R&D, economy and inclusivity. (Rekha M. Menon, Pradeep Roy 2021)

The Important Conversation
However, as AI grows, it is time that we initiate an important conversation about how AI can impact society.
- In 2018, Amazon faced public criticism for using an algorithm for hiring decisions, which resulted in loss of opportunity for many candidates who had the word "Women's" on their resumes (women's college, women's baseball team, etc.). (Dastin 2018)
- In 2016, ProPublica audited an algorithm used to assess the risk of recidivism of convicts, and found that the algorithm was consistently marking African American convicts as at a higher risk of recidivism than convicts of lighter skin tones. (Mattu 2016)
- In 2020, the United Kingdom's Office of Qualifications and Examinations Regulation marked A-Level students using a grade-generation algorithm that estimated each student's final grade based on past performance. The system was designed to combat grade inflation, but did not consider several factors that may have contributed to students' lower grades in the past. (Nast 2020)
These issues are just some of the countless 'bugs' that AI systems are susceptible to (ImmuniWeb 2020). In her TED Talk, researcher Janelle Shane speaks about how the danger of an AI is not that it will get the assignment wrong, but that it will get the assignment right without serving the purpose it was designed for. For example, a self-driving car's AI is given the assignment of preventing crashes, so it is trained to avoid the back of a truck, and it does that perfectly; but if a truck appears sideways in front of the car, the AI may assume it to be a billboard and end up ramming into it. (Janelle Shane 2019)

Algorithms and artificial intelligence do have the ability to make life better for people. However, because AI is still a modern phenomenon, it is prone to errors. The bulk of these failures aren't catastrophic, and almost every limitation can be solved; but many projects fail tragically, leaving the organisation with a significant investment in development and little to show for it. Algorithmic predictions and results might fail owing to a variety of variables; however, most of the time the cause of such failure can be identified and improved upon. The most common reasons why prediction models fail, and their potential consequences, are listed below:
• Incorrect or inadequate data: The most important element of intelligence is data.
Machine learning (deep learning) techniques are used to create predictive models from data sets. Generally, a machine-learning assignment necessitates the collection of massive datasets in order to construct an acceptable model, and the data has to be a good representation of the real scenario.
• Incorrect application area: It is possible that the problem gathered for an operation is too complex, or that the outcome needs to be considerably more exact than any algorithm can deliver. The application of AI approaches to education, law and other businesses, for example, may be too problematic. Automation may even become a problem rather than a solution in the hospitality business, because people require more than just being served.
• Poor technology: It is often challenging to determine whether a particular breakdown is the result of poor technology, because it is practically impossible to go through the code and inspect every area to establish where the gap occurred (e.g., that it was not in the data). Of course, we can reasonably presume that the design was competent in the cases given here, as the corporations named can pay for the smartest of engineers.

Thus, there is a need for an AI that understands the assignment and performs it in a way that does not end up causing more harm (as seen in the examples from 2016, 2018 and 2020): an AI that follows certain ethical guidelines while meeting its mission properly. Such an AI is called a Responsible AI. (Bhawalkar 2021)

Responsible AI
Responsible AI is a framework and set of principles that holds AI applications responsible for the decisions they make, just as humans are; i.e., it is a set of rules and regulations focused on holding AI accountable for the decisions it makes. It is a process and philosophy of designing, developing and deploying AI with the goal of empowering people and organizations while also having a fair influence on consumers and society. A responsible AI will allow businesses and governments to achieve their goals and improve their productivity without negatively impacting society, and it is essential for the development and maintenance of public trust and confidence in AI. This philosophy arose after the discovery of the impact that misuse, abuse, poor design or negative unintended consequences of an AI could have on society; these must be pointed out and rectified. While most developers agree on a set of ethical values, there is still debate on how these values should best be implemented: how they should be converted into actionable, tangible choices that affect everyday decisions, and how proof of success can be offered to demonstrate that those standards are not being violated. In order to guide the development of a responsible AI, we have organized our analysis and suggestions according to certain core values and principles. These are as follows:
- Fairness
- Responsibility and Accountability
- Privacy and governance
- Transparency and Reproducibility
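One way the accountability and human-oversight ideas above show up in practice is a confidence gate around a model's output: low-confidence predictions are withheld and routed to a human reviewer, and every decision is logged for later audit. The following is a minimal sketch under those assumptions; the function name, threshold and record fields are illustrative, not drawn from any particular framework:

```python
from datetime import datetime, timezone

def review_gate(prediction: str, confidence: float, threshold: float = 0.8):
    """Issue an automated decision only above a confidence threshold.

    Returns (decision, needs_human_review, audit_record). When confidence is
    below the threshold, decision is None and a human must review the case.
    """
    needs_review = confidence < threshold
    audit_record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prediction": prediction,
        "confidence": confidence,
        "routed_to_human": needs_review,
    }
    # In a real deployment the audit record would go to append-only storage,
    # so that every automated decision can be traced and contested.
    decision = None if needs_review else prediction
    return decision, needs_review, audit_record

# A 62%-confident loan decision is withheld and escalated to a person.
decision, needs_review, record = review_gate("approve_loan", confidence=0.62)
```

The threshold itself is a policy choice, not a technical one: setting it is exactly the kind of standard the report argues should be indigenized to the Indian context.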

Principles relevant to designing Responsible AI Models

Principle of Accuracy
Accuracy here refers to the trueness of a prediction made by an AI model. 'Prediction' in the context of AI means obtaining an output on the basis of historical data that is fed to the AI model; the output is not necessarily related to the future. It is important to note that an AI is only as good as the data used for its training, and thus it is important to accept that bad data can often lead to bad results and bad predictions. Following is a table of AI uses and their corresponding risks. (Prediction in AI no date; Tom Bigham et al. 2021; Branscombe 2015)


Area | AI used to predict or suggest… | Risk of inaccurate prediction
Business Activities | Demand for a certain product during a unit of time | Over- or under-production resulting in loss
Business Activities | Customer churn | Wasting resources or scaring customers off
Business Activities | Hiring, firing or promotion | Arbitrary restriction and limitation of opportunity for some groups
Business Activities | Behaviour of a financial asset | Loss or market manipulation
Healthcare | Possible diagnosis or health complications | Wrong treatment or medicine, causing more health complications
Healthcare | Risk of suicide | Triggering, anxiety-inducing, or loss of opportunity
Healthcare | Cost of treatment | Discouraging or giving unreal expectations of treatments
On the Internet | Relevant/similar search results | Unwanted but only remotely connected suggestions
On the Internet | Relevant/similar content suggestions | Radicalisation, pushing false information
On the Internet | Flagging for illegal activities | Restricting innocent users' outreach and ability to leverage the platform
On the Internet | Flagging as dangerous or disruptive for the rest of the community | Marking users of a potential ideology or belief system as a threat, even though no such behaviour is displayed
On the Internet | Flagging as infringing on copyright or another proprietary right | Incorrect flagging of content as infringing, causing loss to the developer or creator
On the Internet | Flagging a user for suicide risk | Triggering, anxiety-inducing, or loss of opportunity
Governance and policy | Voter opinion and popular issues | Disproportionate representation of opinions and issues, resulting in majoritarianism, minority suppression or disregard
Governance and policy | Possible results of an election | Incorrect prediction
Governance and policy | Allocation of resources | Over- or under-allocation to certain areas and communities
Personal Security and Privacy | Analysing account behaviour and flagging suspicious behaviour | Restricting users' outreach and ability to leverage the platform
Personal Security and Privacy | Analysing transactions and flagging suspicious spending behaviour | Restricting users' outreach and ability to access their own accounts, etc.

Further, as AI has gained steam, leading software vendors have moved beyond traditional software development to provide more comprehensive products and services that effectively automate corporate intelligence and predictive-analytics activities. (iTechLaw 2019)

What exactly is Predictive Analytics?
Predictive analytics employs artificial intelligence to forecast outcomes based on data. Artificial intelligence drives the forecasting model in predictive-analytics platforms and applications. Machine learning, an AI technique, detects patterns in large datasets; it can then apply what it has learned to forecast future trends, generally by incorporating correlation-analysis tools into the forecasting model. Predictive analytics has numerous applications in business management, many of which are concerned with forecasting future outcomes and/or behaviour. It can predict everything from customer attrition to maintenance work, and can identify suspicious transactions. These predictions have the potential to save a company's life or generate enormous commercial value.

Predictive Analytics and Artificial Intelligence
The sheer volume of data makes it almost impossible for humans to glean insights from it. A forecasting model powered by artificial intelligence can extract enormous insight from the data you already have. AI-powered predictive analytics can tell you what is going right and what is going wrong in your business, predict which prospects will convert into paying consumers, uncover insights about your competition, and predict what your intended audience wants to consume. To help businesses thrive, predictive-analytics tools are already available from vendors such as Adobe Analytics, Google Analytics, Helixa and others. (Mike Kaput 2021)

Considering that predictive analysis is the backbone of modern business, any incorrect or erroneous prediction can cost a company a fortune. As a result, it is critical for AI developers to verify that the results obtained using automated technologies are accurate and trustworthy. Given below are some suggestions to improve the overall accuracy of AI predictions.
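The verification step just described can be sketched as a held-out evaluation: fit a model on part of the historical data and check its error on data it never saw before trusting it in production. This toy example (all numbers hypothetical, and a simple least-squares trend standing in for a real forecasting model) shows the shape of such a check:

```python
def fit_trend(history):
    """Ordinary least-squares slope and intercept for y over t = 0..n-1."""
    n = len(history)
    mean_t = (n - 1) / 2
    mean_y = sum(history) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in enumerate(history))
    var = sum((t - mean_t) ** 2 for t in range(n))
    slope = cov / var
    return slope, mean_y - slope * mean_t

def mean_absolute_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical weekly demand (units). Train on the first six weeks,
# hold out the last two weeks to verify the model before deployment.
demand = [100, 104, 109, 111, 115, 121, 124, 130]
train, holdout = demand[:6], demand[6:]
slope, intercept = fit_trend(train)
forecast = [slope * t + intercept for t in range(6, 8)]
mae = mean_absolute_error(holdout, forecast)
# Promote the model only if the held-out error is within tolerance.
acceptable = mae < 5.0
```

Real predictive-analytics pipelines use far richer models, but the discipline is the same: no prediction should drive business decisions until its error has been measured on data the model was not trained on.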



Suggestions to improve accuracy and reduce the risk of incorrect predictions

1. Human-in-the-Loop Designs (HITL Designs)
HITL means that a human being's judgement and interaction are required before a process continues to its next step. In machine learning, HITL refers to the stages of model development or operation at which a person must inspect, validate or change some part of the process in order to train and deploy a model into production, or to assess the alignment of the output with important metrics (key performance indicators). For example, an engineer verifies a prediction before moving it to the next stage of development; or a human structures or tidies incoming data before feeding it to the bot; or a moderator screens the AI's results before the process is considered complete. (CloudFactory 2017) In many cases, HITL processes are outsourced to other companies so that the developers can focus on development; this can cause potential mismanagement of data, the embedding of biases not intended by the developers, and so on. Human-in-the-loop processes should be included in prediction models to mitigate the risk of error, and also to ensure accountability and increase the responsibility of the AI models that are deployed.

While human-in-the-loop designs mitigate the margin of error, there are conditions under which the inclusion of humans in the process should be guided by a set of principles that make the prediction more relevant in the Indian context. The following checklist can be used to determine when an existing process should be indigenized according to Indian standards:
✓ Was the model not designed for use in the Indian subcontinent?
✓ Was the model not given access to adequate, correct and good-quality data related to the Indian subcontinent?
✓ Does the model relate to any political, economic, social, environmental or legal issue in India?
If any of the above statements stand true the deployment of Indigenized Standards should be considered. Principle of Fairness Fairness refers to quality of an AI to make decisions that are not manipulated by sensitive characteristics of the data that has no bearing on the goal of the algorithm. Fairness should be prioritized in AI development, this would include dealing with algorithms and data bias from the start to ensure fairness and nondiscrimination.4 4

It must be noted here that there are multiple perceptions of what fairness means; researchers in the AI community have identified more than 21 different definitions of fairness. While some perceive fairness as the ability of an algorithm to ignore all biases, there are some perceptions that brand a fair AI as one which empowers historically oppressed or disempowered communities. The question of 'What is fair?' continues to evolve and develop as developers create new strategies. For the purposes of this paper, fairness is considered to be the absence of bias in the functioning of an AI.

Theoretically, AI as an objective problem-solving machine should not display unfairness or discrimination of any sort, because an AI does not go through the social training and experiences that humans do, which result in the development of biases. AI should therefore be the ultimate tool for making unbiased decisions. However, this is not true, because of biases embedded in the data that an AI model is trained upon (Sambasivan et al. 2021).

Embedded Bias

'Bias' refers to the tendency to prefer one point of view, frame of reference or type of information over equally valid alternatives. An everyday example of bias is choosing a piece of black furniture over a piece of white furniture because the individual making the decision likes that colour. Although bias in such a situation is inconsequential, bias can creep into more important areas, such as hiring decisions or policy decisions. For example, a Human Resources employee tasked with hiring a new member for a company may believe that individuals of one gender are more effective than those of another; even if two candidates have equal experience, education and qualifications for the position in question, this bias could directly cost one of them the opportunity. (What Do We Do About the Biases in AI? no date)

Many biases are latent, i.e., individuals may carry them without ever noticing their presence, and may even pass these biases on to the work that they do. For example, if you search for 'Professional Haircut' on Google Images you will be greeted by many brooding, muscular, white males in suits; search for 'Unprofessional Haircut' and you will be greeted by men and women of multiple skin tones (mostly non-white) sporting hairstyles that are largely genetically or culturally determined.

Although Google came under fire for this in 2016, the fact of the matter is that these results came from articles, blog posts, tweets and photos that internet users uploaded. Well-intentioned articles written for students going to their first interviews suggested cutting your hair shorter if you are male, or straightening it if you are female; as a result, the algorithm perceived all other types of haircuts and hairstyles as unprofessional. (Jake Silberg, James Manyika 2019)

Similarly, bias can be embedded in the data that is fed into an AI. For example, if an organisation starts using an AI for hiring or promotion decisions, it may feed the AI with data on past hires and promotions. If the past hiring and promotion decisions were made by a biased party, then the data provided to the AI is itself biased, and the AI's output will in all probability be biased too. The success of an AI is often measured by the similarity between the AI's decisions and the decisions of a human performing the same



job; a high rate of similarity to a highly biased human being is far from a success in the context of fairness.

Since the primary source of biased decisions made by an AI is the biased, or bias-bearing, data fed to it, the obvious solution is to not feed biased data to the AI in the first place. However, that is an extremely ambitious task: to have a truly unbiased AI we would need an extremely unbiased human to review the data being fed to it, and such a person is extremely rare to find. The closest we can get to making data less biased is to set certain standards that must be adhered to in order to minimise the bias displayed by the AI model. Additionally, data that is mislabelled, not labelled properly, or labelled in a manner that is unreadable by the AI system can lead to biased decisions as well. There is therefore a need to set up certain standards for the quality of data fed into an AI, in order to create better bias evaluation systems. (Eleanor Bird et al. 2020)

Suggestions for Standards on AI Bias Evaluation

1. Enabling a Minimally Biased Machine Learning Ecosystem: This can be achieved by setting up a system of checks and balances, from data collection and input to data processing and output, that together contribute to the reduction of biased decisions. One strategy would be to establish uniform and informed consent based on personal connections and confidence in data workers, as well as clarity about potential downstream applications. Data distortions are common: infrastructure problems and technology-usage patterns can cause datasets to be inaccurate representations of people and phenomena, and data incompleteness arises when models favour a small group of individuals in a society because of financial and social limitations.
Data has to be designed, and algorithms trained, to make model results fair, while accounting for the disparities in literacy, economics, ethnicity, class and infrastructure in India that obstruct access to such fair outcomes.

2. Initiating Fact-based Decisions: As AI exposes more about human decision-making, leaders may evaluate whether the proxies employed in the past were appropriate, and how AI can help by revealing long-standing biases that may have gone unreported. Aside from giving definitions and employing statistical approaches, it is critical to examine when and how human judgment is required. Machine learning algorithms ignore variables that do not properly predict outcomes (in the data available to them); humans, on the other hand, may lie about, or even be unaware of, the reasons that led to their decisions. We must use AI ethically to improve conventional human decision-making in a variety of ways, and fact-based decision-making should become embedded in the company's DNA. AI can find all the needles in all the haystacks of data it is trained on, but it is up to business experts (people) to select which of the outputs are relevant to the change the company is attempting to implement.



3. Setting up Standards of Data that are Relevant for Use in the Indian Context: Due to the data-intensive nature of AI, the more quality data that is fed to it, the better. The keyword here, however, is 'quality'. Mislabelled, unrepresentative, incomplete or untidy data can become a cause of bias and bugs in the AI model. It is thus suggested that the following questions be asked before the deployment of an AI:

✓ Is this dataset representative of different variables within the population?
✓ Is this dataset generated by an unbiased individual or group?
✓ If yes, are there adequate safeguards for the inevitable exceptions?
✓ Does the model attach importance to variables that are not relevant to the problem at hand?
✓ Is this dataset optimised and formatted for readability by the AI program?
✓ Is this model created to take decisions itself, or to help humans take decisions?

Principle of Transparency, Responsibility and Accountability

Responsibility refers to the condition of being answerable for the consequences of one's actions. In the context of AI, this could take the form of being held liable for any harm caused to another person or community by the actions of an AI. Accountability, meanwhile, refers to the ability to hold the creators, designers, developers or deployers of an AI model responsible for the effects of their AI system. Governments evaluating the possibility of 'accountability gaps' in existing legal and regulatory frameworks that apply to AI systems should take a balanced approach that promotes innovation while minimising the danger of substantial individual or societal harm. Because transparency allows for better examination of an AI system, it is frequently addressed in discussions about AI accountability. Accountability, however, does not always improve as a result of more transparency: in the absence of robust procedures, principles, and frameworks, transparency alone will not provide better accountability.
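Returning to the pre-deployment questions and the bias-evaluation standards suggested above, several of them can be turned into automated checks. The following is a minimal, hedged sketch, not a complete evaluation suite: the field names, toy data and the choice of a demographic-parity-style comparison are illustrative assumptions, not requirements drawn from any standard discussed in this paper.

```python
# Hedged sketch: automating two of the pre-deployment data questions above.
# Field names, thresholds and all data values are hypothetical illustrations.

def group_shares(rows, group_key):
    """Share of the dataset held by each group (a representativeness check)."""
    counts = {}
    for r in rows:
        counts[r[group_key]] = counts.get(r[group_key], 0) + 1
    return {g: c / len(rows) for g, c in counts.items()}

def selection_rates(rows, group_key, label_key):
    """Positive-decision rate per group (a demographic-parity-style check)."""
    rates = {}
    for g in {r[group_key] for r in rows}:
        grp = [r[label_key] for r in rows if r[group_key] == g]
        rates[g] = sum(grp) / len(grp)
    return rates

# Toy hiring data: 1 = hired, 0 = rejected; "A"/"B" are two demographic groups.
rows = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]

shares = group_shares(rows, "group")              # both groups hold half the data
rates = selection_rates(rows, "group", "hired")   # but their hiring rates differ
gap = abs(rates["A"] - rates["B"])                # a large gap flags embedded bias
print(shares, rates, gap)
```

Here the dataset is representative (both groups are equally present), yet the gap between the groups' selection rates is large, which is exactly the kind of signal a bias evaluation standard would require teams to investigate before deployment.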
Artificial intelligence is portable across borders: it is created and used in many countries, in ways that cut across international and cultural boundaries, and the distribution and mobility of digital assets are challenging to control. Deploying accountability in AI is a difficult task, both because of the 'black box' nature of AI, which makes it an inscrutable conundrum to figure out how or why an AI makes the judgments it does, and because of the complexity of building an 'unbiased' AI. The importance of increasing diversity cannot be overstated: one of the things that must happen, in my opinion, is for individuals who are impacted by AI technologies to take a larger part in the development and regulation of such technologies. The first stage is to establish a cause of action, and an opaque AI system combined with a huge number of interrelated elements underlying



individual choices makes the attribution of errors and assignment of liabilities challenging, which undermines accountability (Ciarán Daly 2017). There are trade-offs between explainability and the accuracy of a system; there will be times when having explainable AI is so critical that we are ready to tolerate some compromise in the system's accuracy. A related hazard is data poisoning, which occurs when attackers cause AI models to be trained on mislabelled data. Developers and consumers of AI should be given clear instructions: steps to be taken include establishing baseline standards for AI developers, as well as certifications for auditing and testing to ensure transparency and ethical responsibility. A 2016 US report on AI, automation, and the economy emphasises the need to ensure that the potential advantages of AI are distributed fairly and to as many people as possible.

The dominance of large companies, which are driving not just the development and deployment of AI but also the debate over its regulation, may limit the impact of emerging technologies like AI on human rights, democracy, and the rule of law. The infrastructures through which public debate takes place are controlled by tech firms: citizens, particularly the younger generation, are increasingly turning to sites like Facebook and Google as their primary, if not exclusive, source of political information. Machine learning helps us to extract information from data and uncover new patterns, and it has the ability to transform seemingly harmless data into sensitive, personal information. AI therefore has significant implications for democracy, as well as for people's rights to privacy and dignity. Models and algorithms are required to allow AI systems to reason about and explain actions based on accountability standards, but deep-learning algorithms currently lack the ability to relate decisions to inputs, making it impossible to describe their actions in meaningful ways.
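One concrete engineering pattern behind such accountability standards is a decision audit trail: recording each automated decision together with its inputs and model version, so that errors can later be attributed. The following is a hedged sketch under stated assumptions; the function names, the toy loan model and the threshold are all hypothetical, not drawn from any real system.

```python
# Hedged sketch: a minimal decision audit trail. Relating each automated
# decision to its inputs, model version and timestamp is one way to make
# later attribution of errors possible. All names here are hypothetical.
import json
import time

def audited_decision(model_fn, model_version, inputs, log):
    """Run a model and append an auditable record of the decision."""
    output = model_fn(inputs)
    log.append({
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    })
    return output

# A stand-in "model": approve a loan if income exceeds a (hypothetical) threshold.
def toy_model(inputs):
    return "approve" if inputs["income"] > 50000 else "reject"

audit_log = []
decision = audited_decision(toy_model, "v1.2", {"income": 64000}, audit_log)

# The log can later be serialised for regulators or internal review.
print(decision, json.dumps(audit_log[0]["inputs"]))
```

Such a trail does not make an opaque model explainable, but it does supply the "cause of action" raw material discussed above: who deployed which model version, on what inputs, with what result.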
To provide accountability in AI systems, both the role of guiding action (by developing beliefs and making judgments) and the function of explanation (by placing decisions in a broader context and classifying them in terms of social values and norms) are required.

Responsible AI and Legal Personality

AI is a complicated and rapidly growing area that, in the perspective of the uninitiated, may appear impossible to govern. Furthermore, the application of AI can amplify existing organisational risks, alter how they appear, or even introduce new risks into the organisation. Given the complexity and speed of AI solutions, the difficulty in controlling AI is less about dealing with totally new forms of risk and more about existing risks being harder to identify in an efficient and timely manner, or presenting themselves in novel ways. The first stage is to establish a cause of action, and an opaque AI system combined with a huge number of interrelated elements underlying individual choices makes the attribution of errors and assignment of liabilities challenging. There are no rules on how many legal rights and duties something must have to be called a legal person, but the most common requirements are the right to own property and the ability to sue and be sued. The legal personhood of a human being is generally accepted as a



natural phenomenon. An examination of political and theoretical approaches to the challenge of recognising AI as a legal person reveals that there is at present no consensus on this issue. As suggested in the opening, it would be appropriate to approach the subject of legal personhood from the standpoint of widely held reasons against it. In terms of actual protection of legal interests, social recognition is equally essential for non-typical legal persons (everything except humans and corporations). (Dremliuga, Kuznetcov, Mamychev, 2019)

It may appear obvious that a machine could never be a real human. Slaves and women, however, were not recognised as complete persons for long stretches of history. Temple idols, on the other hand, have had the status of legal persons in India for decades; they have even exercised their 'human' rights to pursue legal disputes through the trustees or governing boards of the temples where they are worshipped. In India, the status of AI as a legal entity is still up for debate: there appears to be no precise method or rule for determining whether or not AI should be granted legal personhood. As a result, the major finding of this essay is the lack of a conclusion; however, we discovered places where policymakers may find solutions in another way.

The first 'robot homicide' was recorded on July 4, 1981. At a Kawasaki Heavy Industries facility, an engineer named Kenji Urada was performing routine maintenance on a robot. Urada had not entirely turned the robot off, and as he approached a restricted section of the manufacturing line, the robot recognised him as an impediment and flung him against a neighbouring machine with its powerful hydraulic arm, killing him instantly. (Whymant 2014) Artificial intelligence entities, the argument goes, must be regarded as legal entities in order for them to be held accountable in the same way that businesses are.
An analogy can be drawn from the reasoning for giving companies legal personality, which was to limit the liability resting on an individual's shoulders, thereby motivating people to engage in economic activity through corporations. The question of whether an AI may be granted legal personhood boils down to whether it can be given legal rights and responsibilities. The legal fiction devised for corporations serves as a model for providing legal personality to artificial intelligence. However, there is a distinction to be made between corporations and artificial intelligence: corporations are ostensibly independent but responsible to their stakeholders, whereas an AI might be genuinely autonomous. Legal personhood for AI should therefore only come with obligations. Though this may appear appealing on the surface, there would be some apparent issues if the duties were meant to solve accountability deficiencies.

Principle of Privacy and Governance

AI systems rely on enormous quantities of training data, and using an individual's personal data raises significant privacy problems. As per the unenacted Personal Data Protection Bill of India, 'personal data' refers to data about or relating to a natural person who is directly or indirectly identifiable, having



regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, or any combination of such features, or any combination of such features with any other information. For example, consider this statement: 'My name is Ishan Puranik; I live in Thane, Maharashtra, India.' This statement provides the author's name and location, making it extremely easy to identify him; it would thus be considered a piece of personal data. Other examples of personal data include address, education, interests, and so on. Due to a lack of sufficient privacy measures, technology may be able to completely capture and analyse an individual's personal life without their consent or awareness, causing substantial harm to the individual's interests by disregarding their data-usage choices.

Punjab's law enforcement agencies deploy the Punjab Artificial Intelligence System, which employs a 'smart policing' strategy by digitising criminal records and facilitating criminal searches, utilising technologies such as face recognition to forecast and detect criminal behaviour. Following a New York Times article, Clearview AI, a start-up located in the United States, garnered worldwide notice: the firm developed AI models to recognise persons from over 3 billion pictures collected from social media sites and then sold the technology to law enforcement. Several civil rights organisations in the United States have expressed concerns, notably about acknowledged flaws in face recognition technology and the prospect of widespread monitoring for malevolent purposes. The firm was also sent 'cease-and-desist' letters by the social media platforms from which the data was scraped, for breaking their terms and conditions of data usage.
(Accenture, 2021)

Privacy by Design and Default

Privacy by design and default means nothing more than 'data protection through technology design': the thought behind it is that data protection in data-processing procedures is best adhered to when it is already integrated into the technology at the time it is created. This could take the form of making sensitive data unreadable to humans but readable to the technology involved. (Spiekermann 2012) For example, consider this statement: 'My name is Raghav Saini; I live in Rampur, Uttar Pradesh, India, and I am interested in technology, marketing and law.' From this simple statement, the author's name, location and interests can be elicited. Although a simple statement, this is a gold mine of information for ad companies: they now know what kinds of ads the author is more likely to engage with, boosting their utility to their clients. Now imagine if, instead of name, location and interests, the data comprised name, sexuality, religion, financial status, address and political leaning. In the wrong hands that data could potentially put



the author in danger; and even if danger is not the issue, most of that information is of a highly personal nature that the author may not be comfortable sharing. If a privacy-by-design-and-default system is applied, then much of this data is protected from human eyes.

Pseudonymisation of Data

Pseudonymisation means the replacement of stored data with some sort of artificial identifier. In the Personal Data Protection Bill this idea is referred to as 'anonymisation', defined as: 'in relation to personal data, ... the irreversible process of transforming or converting personal data to a form in which a data principal cannot be identified, meeting the standards specified by the Authority'. (Personal Data Protection Bill 2018) For example, the statement given above could look like this after pseudonymisation: 'My name is GvrhaIiisa; I live in Prmua, TtruraHerapsd, Dinia, and I am interested in Aljouvsvnf, ThyrlapunhukShd.' The sensitive data is now protected by a code that only the machine processing it can read.

Generality, Indifference and Diverse Approaches

Both technology firms and governments promote AI as a solution for complicated issues such as hate speech, violent extremism, and online disinformation. Given machine learning's poor ability to grasp tone and context, this is a hazardous development. AI has a significant influence on freedom of speech, given the growing dependence on these systems for online content regulation and the rising usage of AI applications in everyday life, ranging from smart assistants to autocorrect technology on mobile devices. Under Indian constitutional law, freedom of speech and expression is a basic right: it has been cited by the Supreme Court of India as an essential component of democracy, and the Court has also determined that this freedom encompasses the right to know.
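The pseudonymisation example given above can be implemented in practice with keyed hashing. The sketch below is a hedged illustration only: the field names, key and choice of HMAC-SHA256 are assumptions, not the standard specified by the Personal Data Protection Bill. Note also that keyed hashing is pseudonymisation rather than the Bill's irreversible anonymisation, since whoever holds the key could re-link records.

```python
# Hedged sketch of pseudonymisation: replacing identifying fields with
# artificial identifiers that humans cannot read but that the processing
# system can still use to link records consistently. Field names, the key
# and the HMAC-SHA256 choice are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-this-key-securely"  # hypothetical key

def pseudonymise(value: str) -> str:
    """Replace a personal-data value with a stable artificial identifier."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

record = {"name": "Raghav Saini", "city": "Rampur", "interest": "Marketing"}
safe_record = {k: pseudonymise(v) for k, v in record.items()}

# The same input always maps to the same token, so the machine can still
# group and match records, but the token reveals nothing readable.
assert pseudonymise("Raghav Saini") == safe_record["name"]
print(safe_record)
```

Because the mapping is stable, analytics and ad-matching of the kind described above can still operate on the tokens, while a human reading the stored record learns nothing about the person.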
One might think that if intrusive face recognition technology is inaccurate, the likelihood of one's privacy being compromised is minimal. However, in the case of applications currently in use by law enforcement in India, inaccuracy exacerbates the problem: in addition to privacy concerns, these technologies can lead to false arrests and to people from disproportionately vulnerable and marginalised communities being forced to prove their innocence. One recent instance in which privacy and AI concerns have been sharply debated is the Air India data breach, in which hackers accessed the personal details of 4.5 million customers, causing alarm. Customers who enrolled between August 2011 and late February 2021 were affected by the incident, which was confirmed two months after SITA's Passenger Service System (PSS)



was compromised, according to Air India. Customers' names, dates of birth, contact information, passport information, frequent-flyer data, and credit card information were all compromised.

The right to privacy was unanimously recognised as a basic right under the Indian Constitution by the Supreme Court of India in August 2017. This landmark decision acknowledged informational privacy as a component of this basic right, as well as the threats posed by computers' ability to infer information and analyse data in new and sophisticated ways. The ruling also emphasised the country's urgent need for a 'strong data protection system', highlighting the close relationship between data protection and autonomy. In AI systems, corporate governance is critical for developing and enforcing policies, processes, and standards. Chief ethics and compliance officers have a critical role in identifying ethical concerns, managing such risks, and assuring compliance with standards. Governance structures and processes should be developed to oversee and monitor the organisation's operations. Currently, there is no comprehensive AI regulation in India. The proposed Personal Data Protection Bill (2019) (PDP), which is meant as a complete law detailing the different privacy safeguards that AI solutions must comply with, comes the closest. It also includes limitations on data processing, security precautions to protect against data breaches, and specific protections for vulnerable users such as minors.

Principle of Reproducibility and Transparency

In 1999, Eric S. Raymond published a seminal book titled 'The Cathedral and the Bazaar' that fiercely advocated for open-source software. One of the key tenets of open-source software is that the source code of any software program be made available to the public at large to study, modify and build upon. Open-source software is the epitome of transparency in matters of technology.
In the realm of AI, however, making something transparent is not as easy as just releasing its source code (Haibe-Kains et al. 2020): for machine learning algorithms, the data used to train an algorithm is as important as the algorithm itself. This is a recurring problem within the machine learning community. Many companies, independent researchers and students working in AI create new knowledge, identify problems and develop new methods, and release this knowledge in the form of publications in scholarly journals, magazines, blog posts, and so on, but for many reasons they do not release the code being explained. As a result, other companies, researchers and students are unable to utilise the fruits of this research and build upon it through improvement, modification or addition. In some cases the code is released but the data is not, which likewise leads to a lack of reproducibility. In The Cathedral and the Bazaar, the author describes 'Linus's Law', which holds that the more people who work on or study a particular piece of code, the more likely it is that the code will improve in quality and functioning (Raymond 1999). In the context of AI, the more an algorithm and its



data is studied, the more likely it is that embedded biases and problems can be pointed out, addressed and fixed. This was visible in the UK, when a student performance estimation algorithm was made transparent and took in suggestions from the rest of the community, which allowed the algorithm to improve in quality and serve its goals better. (Dickson 2021)

Suggestions for the improvement of transparency and reproducibility

In certain cases, the data that is used, or the code itself, may be proprietary in nature and thus cannot be shared with the public at large. In such cases transparency cannot be enforced, which makes it difficult to scrutinise, validate and improve algorithms, so there should be a system through which the validation and reproducibility of an AI model can be confirmed. This can be done in the following manner:

- Setting up an institution dedicated to the verification of the transparency and reproducibility of code.
- Inculcating the values of transparency and reproducibility in AI education.
- Creating standards for publishers of AI research (such as journals, colleges and universities) that require certain qualities in a piece of research before publication; exceptions to these standards can be decided on and granted by the institution mentioned in the first point.
- Creating a robust sharing platform for AI research in India (like Shodhganga).

What Is the Future of Responsible Artificial Intelligence?

In an optimal desired future, both technology and human characteristics are integrated. An ideal setting is one where the business uses a data-driven approach to handling moral issues, recognising progressive innovators, qualitative work, and harder-to-measure breakthroughs that repay over longer time frames.
For products and technology, this may imply extending the time range within which performance targets are set, while keeping in mind that professional ethics may necessitate a broader or longer period of time. People would be accountable for recognising the diverse skills required to support truly responsible AI deployment and for encouraging creative behaviour. A desirable corporate culture is one where there is no fear of retaliation or punishment for disclosing ethical concerns from within, where there are clear pathways for concerns to be raised, and where departments can pull in persons with particular expertise as necessary. Management must identify beliefs and ethics, communicate that activities must be consistent with these standards, and offer resources to sustain those procedures.

Suggestions for a Responsible AI

Human-in-the-Loop Processes

In order to solve some of the major issues, many developers deploy humans at



different stages of the AI lifecycle (i.e., the process from conception to deployment). This is known as a human-in-the-loop (HITL) approach: a human being's judgement and interaction are required before the process continues to the next step. HITL in machine learning refers to the stages of the model development process that require a person to inspect, validate or change some part of the process in order to train and deploy a model into production. For example, an engineer verifies a prediction before moving it to the next stage of development, or a human structures or tidies data that is received before feeding it into the bot. From start to end, humans have a role to play in the AI lifecycle, from development, where quality control is required, to the integration of the finished product with the organisation requesting the services. AI is a machine that can fail, and thus humans must be ready to mitigate risk and control damage. (A comical example: in the show Silicon Valley, an AI program deletes a large amount of content from a video-sharing website because the files did not match the required standards.) Examples of human-in-the-loop usage include:

• Collecting data, and building and refining the datasets
• Annotating collected and refined data
• Fixing failures or mistakes when they occur

Encouraging Research

The construction or growth of well-funded centres of excellence, which would act as drivers of research and development and harness synergies with the business sector, is crucial to the execution of national AI policies. The availability of highly educated individuals, world-class educational institutes, and an illustrious list of top-notch IT firms dominating the global IT scene are all required building blocks for India to establish a robust AI research and development ecosystem. An examination of India's capabilities in core AI research, however, presents a bleak picture.
According to the Global AI Talent Report 2018, which analysed LinkedIn, India has just 386 PhD-educated AI researchers out of a total of 22,000 globally, and is placed 10th. The perspectives of stakeholders on which application challenges to focus on will be critical in ensuring the research's practical usefulness. While the figures indicate a modest but promising basis, a coordinated effort is required to develop a complete, research-focused AI strategy for India in order to achieve a Responsible India. The focus of AI research should be on monitoring the impact of consumer-level AI technologies through social indices and recommending changes for better market penetration, as well as on studying the financial viability of the AI technologies developed, so that they cater to the target consumer base, while proposing improved pricing models for pan-India reach.

Coordinating the Drivers of Change from Within and Outside the Organisation

External groups, whether intellectual institutions or industrial research organisations, are a vital source of quality standards and recommendations, but these must be adjusted and utilised in ways that are appropriate for each firm and that meet the difficulties it is most prone to incur. Administration standards should explicitly demonstrate why they exist and leave little space for interpretation, which can otherwise lead to the thorny issue of 'exceptions to the rule'. This helps ensure that teams across functions, including leadership, business analytics, and law, are aware of the prerequisites and benefits of each standard.

Adequate Financing and Investment in Research and Development

National AI strategies indicate that governments are making considerable financial investments in AI research and development. In the race for AI development, most strategy papers emphasise the need to safeguard national objectives. To accomplish this, a national strategy for AI research and development is required, as well as the identification of nodal entities to facilitate the process and the building of institutional capacity to conduct cutting-edge research. Beyond base financing by the Government, the National Strategy report observed that the ICTAIs should seek additional funding from charitable and private sources, preferably through an equity-sharing arrangement; additional funds may also be granted to ICTAIs based on their performance. Data that is clean, correct, and properly curated is critical for training algorithms. Importantly, having a large amount of data does not guarantee superior outcomes: quantity of data must be predicated on data accuracy and careful compilation. Organisations that go through this process also build the mechanisms necessary to demonstrate the long-term value of Responsible AI by expanding it across the company, allowing them to take the crucial step of putting principles into practice.
Successful corporations and organisations should use AI models, methods, and platforms that are designed to be trustworthy, fair, and transparent. Machine learning practitioners trained in proven qualitative and quantitative approaches to analysing possible hazards will be better positioned to establish cross-domain consensus on mitigation methods, such as recognising biases specific to the Indian situation, like caste, class and ethnicity. Access to data that is trustworthy, accurate, and relevant is a major issue in AI; as a result, acquiring high-quality health data from other countries and applying what has been learned in India might be a viable option. One method to enable access is to use robust open data sets, which are especially crucial for tiny start-ups building prototypes. Despite the fact that India is a data-rich country with a National Data Sharing and Accessibility Policy, it lacks strong and comprehensive open data sets across industries and areas.

Regulation of Certain Values More than Others



Each AI development and deployment provides an advantage to society as well as to the party responsible for providing it to society.[5] Thus, when assessing a hierarchy in the context of responsible AI, we should rank the importance of these values on a scale from most impactful on society to least impactful; those that are most impactful should be most heavily regulated. Following this logic, the values of fairness, responsibility and accountability should be held to higher standards than transparency and reproducibility.

- Unfair AI can propagate and intensify human biases, and so fairness should be held to extremely high standards and occupy the highest position in the vertical hierarchy.
- It must be acknowledged that not all developers and designers will have altruistic motives, and there should therefore be a penalty for failing to uphold the standards created to ensure fairness; the standards of responsibility and accountability should thus be at positions two and three respectively, because accountability is essentially a framework to enforce responsibility.
- Finally, for India to become the 'AI garage of the world' that it aims to be, it is important to emphasise research, and research will greatly benefit from transparency and reproducibility.

Finally, the concept of quantifiable value should be broadened in order to shift from a short-term to a longer-term outlook while acknowledging the relevance of what is hard to describe. If enterprises are to truly create value-driven tech, their organisational structures must develop to achieve these ambitious goals.

Conclusions

As the potential and perils of artificial intelligence are expected to affect nations, businesses, and social groups, authorities must be vigilant in not just leveraging artificial intelligence for productivity expansion but also putting in place rules and regulations to protect individuals from the dangers it poses. Insurance, credit, recruiting, and even what news one reads on social media are all constantly influenced by algorithms. When a search engine like Google prioritises one search result over another, it is because an algorithm determined that one page is more valuable than the other. When your favourite OTT

5. In the theory of intellectual property rights, there is an ever-developing answer to the question of who should receive the benefits of protection: the creators or society at large. If creators are given too much protection, society's ability to make proper use of the creation is reduced; for example, if a revolutionary invention is patented and then sold at an incredibly high price, society does not benefit. On the other hand, if society is given too much protection, creators lack the incentive to continue producing original works. It is therefore important to balance the two properly.



platform suggests one movie over another, an algorithm based on your viewing history made the recommendation. Artificial intelligence is already driving our cars, yet this is just the beginning: AI is still in its early stages of emergence. As a result, it is critical to concentrate on rules and standards that remain flexible as new opportunities and difficulties arise, especially given that the technology is multi-purpose in character. It is also critical that governments work together at several levels (administration, research, social order, and business) to build legal regimes that meet the difficulties and hazards brought by artificial intelligence. Given the scope of dangers that the technology can pose to a nation's security and integrity, the consequences of conflicting rules across nations and of operating in silos could be massive. Any legislation should be formed after a discussion of its elements, such as how severe standards ought to be. Since there are contradictory meanings, what should be the definition of fair treatment? How should issues of security be resolved? When establishing standards, it is also critical to consider the potential cost of not adopting an AI solution when one is available, as well as the levels of comparative safety performance at which AI solutions should replace conventional ones. Artificial systems may make errors, but so do humans, and in certain situations an AI-based solution may be better than one that does not use it, even if it is not fail-proof.

References
1. ACCENTURE, 2021. Responsible AI Principles to Practice [online]. Research Report. Accenture. [Accessed 26 June 2021]. Retrieved from: https://www.accenture.com/us-en/insights/artificial-intelligence/responsible-ai-principles-practice
2. BENKLER, Yochai, 2019. Don't let industry write the rules for AI. Nature. 1 May 2019. Vol. 569, no. 7755, p. 161. DOI 10.1038/d41586-019-01413-1.
3. BHAWALKAR, Rudraksh, 2021. The Rise of Responsible AI. Analytics India Magazine [online]. 5 February 2021. [Accessed 26 June 2021]. Retrieved from: https://analyticsindiamag.com/the-rise-of-responsible-ai/
4. BRANSCOMBE, Mary, 2015. Artificial intelligence can go wrong – but how will we know? CIO [online]. 26 October 2015. [Accessed 20 June 2021]. Retrieved from: https://www.cio.com/article/2997514/artificial-intelligence-can-go-wrong-but-how-will-we-know.html
5. CHAKRABARTI, Rajesh and SANYAL, Kaushiki, 2020. Towards a 'Responsible AI': Can India Take the Lead? South Asia Economic Journal. 1 March 2020. Vol. 21, no. 1, p. 158–177. DOI 10.1177/1391561420908728.
6. DALY, Ciarán, 2017. Bringing Accountability, Responsibility, and Transparency to AI. AI Business [online]. 14 November 2017. [Accessed 26 June 2021]. Retrieved from: https://aibusiness.com/document.asp?doc_id=760460
7. CLOUDFACTORY, 2017. Human in the Loop: Accelerating the AI Lifecycle. CloudFactory [online]. 2017. [Accessed 24 June 2021]. Retrieved from: https://www.cloudfactory.com/human-in-the-loop
8. DASTIN, Jeffrey, 2018. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters [online]. 10 October 2018. [Accessed 24 June 2021]. Retrieved from: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
9. DICKSON, Ben, 2021. One researcher's mission to encourage reproducibility in machine learning. TechTalks [online]. 1 March 2021. [Accessed 20 June 2021]. Retrieved from: https://bdtechtalks.com/2021/03/01/papers-without-code-machine-learning-reproducibility/
10. DREMLIUGA, Roman, KUZNETCOV, Pavel and MAMYCHEV, Alexey, 2019. Criteria for Recognition of AI as a Legal Person. Journal of Politics and Law. 20 August 2019. Vol. 12, no. 3, p. 105. DOI 10.5539/jpl.v12n3p105.
11. BIRD, Eleanor, FOX-SKELLY, Jasmin, JENNER, Nicola, LARBEY, Ruth, WINFIELD, Alan and WEITKAMP, Emma, 2020. The ethics of artificial intelligence: Issues and initiatives. Study, PE 634.452. European Parliamentary Research Service, Panel for the Future of Science and Technology.
12. HAIBE-KAINS, Benjamin, ADAM, George Alexandru, HOSNY, Ahmed, KHODAKARAMI, Farnoosh, WALDRON, Levi, WANG, Bo, MCINTOSH, Chris, GOLDENBERG, Anna, KUNDAJE, Anshul, GREENE, Casey S., BRODERICK, Tamara, HOFFMAN, Michael M., LEEK, Jeffrey T., KORTHAUER, Keegan, HUBER, Wolfgang, BRAZMA, Alvis, PINEAU, Joelle, TIBSHIRANI, Robert, HASTIE, Trevor, IOANNIDIS, John P. A., QUACKENBUSH, John and AERTS, Hugo J. W. L., 2020. Transparency and reproducibility in artificial intelligence. Nature. October 2020. Vol. 586, no. 7829, p. E14–E16. DOI 10.1038/s41586-020-2766-y.
13. IMMUNIWEB, 2020. Top 10 Failures of AI. ImmuniWeb Security Blog [online]. 2020. [Accessed 24 June 2021]. Retrieved from: https://www.immuniweb.com/blog/top-10-failures-of-ai.html
14. ITECHLAW, 2019. Responsible AI: A Global Policy Framework [online]. 14 June 2019. [Accessed 26 June 2021]. Retrieved from: https://www.itechlaw.org/ResponsibleAI
15. SILBERG, Jake and MANYIKA, James, 2019. Tackling bias in artificial intelligence (and in humans). McKinsey Global Institute.
16. SHANE, Janelle, 2019. The danger of AI is weirder than you think [online]. 14 November 2019. [Accessed 24 June 2021]. Retrieved from: https://www.youtube.com/watch?v=OhCzX0iLnOc
17. ANGWIN, Julia, LARSON, Jeff, MATTU, Surya and KIRCHNER, Lauren, 2016. Machine Bias. ProPublica [online]. 2016. [Accessed 24 June 2021]. Retrieved from: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
18. KAPUT, Mike, 2021. AI for Predictive Analytics: Everything You Need to Know [online]. May 2021. [Accessed 26 June 2021]. Retrieved from: https://www.marketingaiinstitute.com/blog/ai-for-predictive-analytics
19. NAST, Condé, 2020. Everything that went wrong with the botched A-Levels algorithm. Wired UK [online]. 2020. [Accessed 24 June 2021]. Retrieved from: https://www.wired.co.uk/article/alevel-exam-algorithm
20. Prediction in AI, no date. DataRobot [online]. [Accessed 24 June 2021]. Retrieved from: https://www.datarobot.com/wiki/prediction/
21. RAYMOND, Eric S., 1999. The Cathedral and the Bazaar. Revised Edition. United States of America: O'Reilly & Associates. ISBN 0-596-00108-8.
22. MENON, Rekha M. and ROY, Pradeep, 2021. Rewire for growth: Boosting India's economic growth with AI [online]. 26 April 2021. [Accessed 26 June 2021]. Retrieved from: https://www.accenture.com/in-en/insights/consulting/artificial-intelligence-economic-growth-india
23. SAMBASIVAN, Nithya, ARNESEN, Erin, HUTCHINSON, Ben, DOSHI, Tulsee and PRABHAKARAN, Vinodkumar, 2021. Re-imagining Algorithmic Fairness in India and Beyond. arXiv:2101.09995 [cs] [online]. 26 January 2021. [Accessed 26 June 2021]. Retrieved from: http://arxiv.org/abs/2101.09995
24. SPIEKERMANN, Sarah, 2012. The Challenges of Privacy by Design. Communications of the ACM. 1 July 2012. Vol. 55, p. 38–40. DOI 10.1145/2209249.2209263.
25. THOMAS, Siddhartha, 2020. Report: State of Artificial Intelligence in India - 2020. Analytics India Magazine [online]. 8 September 2020. [Accessed 26 June 2021]. Retrieved from: https://analyticsindiamag.com/report-state-of-artificial-intelligence-in-india-2020/
26. BIGHAM, Tom, TUA, Alan, MEWS, Tom, NAIR, Suchitra, GALLO, Valeria, FOUCHÉ, Morgane, SORAL, Sulabh and LEE, Michelle, 2021. AI and risk management: Innovating with confidence [online]. Deloitte Centre for Regulatory Strategy. Retrieved from: https://www2.deloitte.com/content/dam/Deloitte/uk/Documents/financial-services/deloitte-uk-ai-and-risk-management.pdf
27. TRAPASSO, Ed, VUJANIC, Aleks and ACCENTURE, 2010. Investment in Open Source Software Set to Rise, Accenture Survey Finds [online]. Press Release. New York: Accenture. [Accessed 26 September 2020]. Retrieved from: /news/investment-in-open-source-software-set-to-rise-accenture-survey-finds.htm
28. What Do We Do About the Biases in AI?, no date. Harvard Business Review [online]. [Accessed 26 June 2021]. Retrieved from: https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
29. WHYMANT, Robert, 2014. Robot kills factory worker: From the archive, 9 December 1981. The Guardian [online]. 9 December 2014. [Accessed 26 June 2021]. Retrieved from: http://www.theguardian.com/theguardian/2014/dec/09/robot-kills-factory-worker



4 Preliminary Review Report on Explainable Artificial Intelligence & its Opaque Characteristics

Sameer Samal¹

¹Junior Research Analyst (former), Indian Society of Artificial Intelligence and Law
research@isail.in

Synopsis. Artificial Intelligence and its subsets have developed into advanced systems over the last few years. AI models such as Deep Neural Networks under Deep Learning have penetrated certain critical fields that require absolute certainty regarding their outcomes and predictions. Critical fields such as healthcare, finance and security require flawless predictions with explanatory justifications and sufficient reasoning. However, the complex Machine Learning and Deep Learning models widely used in these fields are opaque to a large extent, which renders them untrustworthy. It is therefore deemed necessary to add an interpretability or explanatory layer over the existing outcome. A prediction accompanied by reasonable justification regarding the factors that influenced it can play an important role in building trust among users. This report aims to outline the opaque nature of conventional Artificial Intelligence systems by probing their ethical constraints. The significance of transparency and its facets is also observed. Subsequently, the concept of Explainable Artificial Intelligence (XAI) is explored, and the role of XAI in Intellectual Property law as well as its impact on the medical field is analysed.

The Opaque System of AI Models

Researchers across academic disciplines have witnessed the penetration of artificially intelligent technologies into their respective fields, but they have also observed the hesitation of users. The advent of any new technology can be traced to two paths: first, the technology being born out of certain needs, and second, the technology being developed first and subsequently matched with a requirement. In both cases, users are initially reluctant to adopt the new technology, but their trust can be gained eventually. This is usually done by providing sufficient proof and evidence of the technology's reliability



and safety. An apt historical example is the introduction of automatic elevators and the public's response to them. Automatic elevators were introduced in the United States of America during the early 1900s. However, these machines, which have shaped the architectural landscape of the modern city, were surrounded by skepticism when first introduced. It was not until the 1950s, after a New York City strike in 1945 that cost the administration over a million dollars in lost taxes, and a series of advertisements in favour of the elevators' safety, that trust was built among the public (Henn and Gray 2015). With little deliberation one can see the similarities between those events and the present-day mindset of users towards AI-powered autonomous vehicles and robots. Artificial Intelligence and its subsets have the power to transform human civilisation by improving our autonomy and wellbeing. However, to interact effectively with this technology, users must trust it. As stated earlier, trust can be built by sharing adequate information about the technology, its process and all other associated details. While the early models of artificial intelligence were easily understandable and interpretable by humans, the last few years have witnessed the significant growth of opaque or black-box models. A black-box machine learning (ML) model contains thousands of parameters and hundreds of layers that render it practically impossible to understand. These models are increasingly being used to make important predictions in critical contexts (Arrieta, Díaz-Rodríguez, Del Ser, Bennetot, Tabik, Barbado, Garcia, Gil-Lopez, Molina, Benjamins, Chatila and Herrera 2020). Therefore, all affected stakeholders have begun demanding more transparency in order to understand these models better. An explanation of the model's output is crucial in fields such as healthcare, where a life is at stake and the slightest error in understanding a prediction can have serious repercussions. It is therefore crucial for these models to be understandable, so that decisions based on their output can be justified.
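The one-way nature of a black-box prediction can be caricatured in a few lines of code. The sketch below is purely illustrative and assumes nothing about any real system: the "model" internals are an arbitrary made-up rule, standing in for the thousands of parameters of a real deep network, and the caller receives only a label with no attached rationale.

```python
# Illustrative sketch of black-box interaction (hypothetical, not a real model):
# the caller receives a bare label and has no way to inspect why it was chosen.

class BlackBoxModel:
    """Stand-in for an opaque model: predict() returns a label, nothing else."""

    def predict(self, applicant):
        # In a real deep network, thousands of parameters would sit here.
        # This arbitrary parity rule is a placeholder for that inscrutability.
        return "high risk" if sum(applicant.values()) % 2 else "low risk"

model = BlackBoxModel()
label = model.predict({"age": 34, "loan_amount": 1200})
print(label)  # a label arrives with no justification attached
```

From the end-user's perspective, the label is the entire output; nothing in the interface reveals which input factor drove the decision, which is exactly the gap an explanatory layer is meant to close.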



The Concept of Explainable Artificial Intelligence

The nature and process of Explainable Artificial Intelligence (XAI) can be better explained with the help of visual illustrations.

[Figure 1: ML Workflow. Training data passes through a learning process to produce a learned algorithm, whose unexplained predictions leave the end-user confused. (Image structure credit: Jamie Zornoza, published at towardsdatascience)]

A Machine Learning model is initially fed training data that goes through a specific learning process, resulting in a learned function. The learned function can then be fed input to derive predictions, labelled as output in the figure above. The biggest flaw of this model is its lack of transparency, which leaves the end-user confused and skeptical. It is this opaque structure of a Machine Learning system that has earned it the title of 'black-box' model. Since the predictions from this model do not come with any justification, a confused and ill-informed user will have difficulty trusting and relying on them. Adequate justification and reasoning behind a specific prediction will help the user trust it and make informed decisions. This can be achieved by adding a layer of interpretation or explanation to the process described above, transforming the traditional Machine Learning model into an Explainable Artificial Intelligence model.
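One minimal way to realise such an explanatory layer is to pair every prediction with a per-feature breakdown of how the score was reached. The sketch below is a simplified illustration, not code from any cited work: the linear model, the feature names and the weights are all invented for demonstration.

```python
# Minimal illustration of an "explanation interface": a transparent linear
# scorer returns its prediction together with each feature's contribution.
# All names and weights here are hypothetical, chosen for demonstration only.

def explainable_predict(features, weights, bias=0.0, threshold=0.0):
    """Return (prediction, contributions): the label plus the signed
    contribution of every feature to the final score."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    prediction = "approve" if score >= threshold else "reject"
    return prediction, contributions

# Hypothetical loan-screening example.
weights = {"income": 0.5, "existing_debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "existing_debt": 2.0, "years_employed": 1.0}

label, explanation = explainable_predict(applicant, weights)
print(label)  # prints "approve": score = 2.0 - 1.6 + 0.3 = 0.7 >= 0
for name, c in sorted(explanation.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")  # strongest influences listed first
```

Because each contribution is visible, a sceptical user can see that, in this toy run, income pushed the decision towards approval while existing debt pushed against it: precisely the kind of justification a black-box workflow withholds.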



[Figure 2: XAI Incorporated ML Workflow. A new learning process produces an explainable model whose predictions come with explanations, satisfying the end-user. (Image structure credit: Jamie Zornoza, published at towardsdatascience)]

In this model, a new learning process has been used that gives us an 'Explainable Model' as the learned function. This Explainable Model can be fed input to derive predictions that come with reasoning and justification. The 'explanation interface', i.e., the output of the XAI-incorporated Machine Learning workflow, provides adequate justification with every prediction, enabling the end-user to make an informed decision based on it. This can be crucial in fields that demand precise and accurate decisions, such as healthcare, finance, and security.

Black-box, Meet Transparency

As explained in the example of automatic elevators above, building trust among the public at large requires a multi-faceted approach. There exist technical as well as societal challenges that need to be addressed (Tubella, Theodorou, Dignum and Dignum 2019). Trust in Artificial Intelligence, or any relatively new technology for that matter, is directly linked with transparency. A transparent system or process enables all stakeholders to understand it easily. It is important to verify whether an Artificial Intelligence system complies with the established legal framework and with society's ethical and moral values. Transparency, in this context, means the visibility and understandability of the factors that influence the outcomes and predictions of the AI model. It should be effectively understood by the individuals who employ it, the individuals and organisations that regulate it, as well as by those who are either directly or indirectly affected by it. Therefore, transparency does not merely refer to the



clarity of the process, but also to the clarity of the factors that influence the various steps involved in it. The contemporary ideals of transparency are not restricted to traditional notions of privacy, but are spread across the far-reaching horizons of fairness and ethics. It is therefore essential for developers and regulators to venture beyond the isolated elements of privacy and ethics, and look into the wide palette of concerns that paint the canvas of AI with bias.

Aesthetic-Geographical Constraints in India

All individuals have a value system that influences their choices. A value system is a hierarchy of moral and ethical values that is unique to every individual; these values are based on one's virtues, vices, and experiences (value system - Wiktionary 2021). An individual may therefore prioritise certain values over others. Nations are held to be the collective construct of the individuals that build them. This 'construct' is not limited to a nation's corporeal existence but extends to the ethical and moral principles that its citizens collectively hold to be of significance. Therefore, like individuals, nations have a value system that influences their legal framework, judicial principles, legislative enactments, and executive decisions. A value system, i.e., a set of values in a hierarchy based on their priority, is inferred from a premise or a set of premises. These premises may differ from one nation to another based on their generalised practices, traditions, and norms, and it is against this premise that an Artificial Intelligence system's adherence to the ethical and socio-legal values of a country is verified.

The Role of Explainable Artificial Intelligence in Intellectual Property Policy and Administration

Artificial Intelligence engages with various fields of Intellectual Property at many levels and capacities. The World Intellectual Property Organisation (hereinafter 'WIPO') has identified the following three aspects of Artificial Intelligence that engage with Intellectual Property (World Intellectual Property Organisation 2020):
1. Intellectual Property Policy;
2. Strategic Capabilities: AI Capacity and Regulation; and
3. Intellectual Property Administration:
   a. classifying patents and goods and services for trademark applications;
   b. searching patent prior art;
   c. identifying elements of trademarks;
   d. other trademark formality compliances; and
   e. client services and automated help desks.
WIPO has also identified a list of issues associated with Artificial Intelligence and its interactions with Intellectual Property. Conventionally, the



debate around Artificial Intelligence and Intellectual Property revolves around questions of 'authorship' and 'ownership'. However, it is imperative to discuss various other facets of Artificial Intelligence and their impact on Intellectual Property Policy and Administration. WIPO has categorised the issues into separate sections; the relevant sections and some of the pressing issues under them are as follows (World Intellectual Property Organisation 2020):
1. Patents: questions about the impact of this technology began with the advent of Artificial Intelligence, and in the last few years AI has become an essential element of how we perceive contemporary patent law. The biggest challenge that AI, as an assisting technology, faced while penetrating the field of Intellectual Property was the issue of inventorship and ownership, the latter generally referred to as proprietary rights. However, the debate must move beyond these conventional issues and shed light on the challenges arising from the introduction of AI into Intellectual Property Policy and Administration, including:
   a. incorporating legal standards and ethical notions into AI systems that deal exclusively with identifying and regulating patentable subject matter, as well as their guidelines;
   b. revisiting the requirements of inventive step and non-obviousness in patent registration; and
   c. the necessity of disclosure and its boundaries.
2. Copyright: as with patents, the majority of the issues raised about copyright concern authorship and proprietary rights. However, copyright laws must also consider the grey area of copyright infringement and its exceptions in relation to Artificial Intelligence: Machine Learning or Deep Learning models might infringe existing copyright law when they are trained on data that is subject to copyright protection. It is therefore essential to revisit the provisions on copyright infringement and its exceptions in light of recent technological developments. Further, the controversial issue of 'deepfakes' must also be addressed by initiating dialogue on the very nature of the technology in question: the ambit of copyright and its capability of bringing deepfakes under its purview of regulation.
3. Designs: like inventions, designs are also made with the assistance of Artificial Intelligence and other computer programs, so the questions of authorship and ownership of these designs arise. Additionally, when Artificial Intelligence models work alongside humans, the issue of infringement of copyright-protected data exists.
4. Trademarks: it is well established that Artificial Intelligence does not affect trademarks in the same way as patents and copyrights, but there is still a slight interaction: Artificial Intelligence can be used in the administrative processes associated with trademark registration.



Artificial Intelligence and its subsets already impact Intellectual Property in a number of capacities, and WIPO as well as the domestic legislation of various countries have identified the aforementioned issues that require attention. Explainable Artificial Intelligence can resolve the majority of issues relating to IP Administration and Regulation: introducing explainability and interpretability into opaque Artificial Intelligence models improves transparency and clarity. With improved transparency, stakeholders can identify the factors that influence an AI model's predictions and base their decisions accordingly.

The Role of Explainable Artificial Intelligence in Healthcare

The role of Explainable Artificial Intelligence in healthcare is a highly debated topic. Artificial Intelligence systems have undoubtedly proven to be highly efficient, and at times more accurate than their human counterparts, but the lack of transparency in their outputs and predictions raises genuine concerns. The issue of transparency does not affect other fields with the same gravity as it affects the healthcare sector, and it is absolutely crucial that the factors influencing an AI model's outcome are clearly communicated to end-users. Given the sector's critical nature, decisions based on predictions from an AI model may directly or indirectly affect an individual's life. Healthcare enactments may consider the following aspects:
1. Regulators may prescribe a minimum level of transparency in AI-powered Clinical Decision Support Systems before their suggestions may be relied on or decisions may be based upon them.
2. Stringent regulations may be prescribed regarding the consensual collection of patients' clinical data, along with guidelines on its storage, processing and dissemination.

Conclusions

Artificial Intelligence and its subsets, specifically Machine Learning and Deep Learning models, have firmly embedded their presence in sectors such as Intellectual Property and Healthcare. While their utility and positive impact are widely appreciated, their transparency concerns must also be considered. A multidisciplinary approach towards Explainable Artificial Intelligence might resolve most of the issues faced in these two sectors.

References
1. ARRIETA, Alejandro Barreto, DÍAZ-RODRÍGUEZ, Natalia, DEL SER, Javier, BENNETOT, Adrien, TABIK, Siham, BARBADO, Alberto, GARCIA, Salvador, GIL-LOPEZ, Sergio, MOLINA, Daniel, BENJAMINS, Richard, CHATILA, Raja and HERRERA, Francisco, 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion [online]. 2020. Vol. 58, p. 82–115. [Accessed 18 January 2021]. DOI 10.1016/j.inffus.2019.12.012. Available from: http://www.sciencedirect.com/science/article/pii/S1566253519308103
2. HENN, Steve and GRAY, Lee, 2015. Remembering When Driverless Elevators Drew Skepticism [radio]. 2015.
3. TUBELLA, Andrea, THEODOROU, Andreas, DIGNUM, Frank and DIGNUM, Virginia, 2019. Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour. In: International Joint Conference on Artificial Intelligence [online]. 2019. p. 5787–5793. [Accessed 18 January 2021]. Available from: https://doi.org/10.24963/ijcai.2019/802
4. value system - Wiktionary, 2021. En.wiktionary.org [online].
5. WORLD INTELLECTUAL PROPERTY ORGANISATION, 2020. Revised Issues Paper on Intellectual Property Policy and Artificial Intelligence [online]. World Intellectual Property Organisation. [Accessed 18 January 2021]. Available from: https://www.wipo.int/meetings/en/doc_details.jsp?doc_id=499504



{Discussion Papers}




5 AI and Trademark Law: Is AI akin to a Personal Shopper?

Pankhuri Bhatnagar¹

¹Junior Research Associate, Indian Society of Artificial Intelligence and Law
research@isail.in

Abstract. Today's trademark law is still fundamentally based on 19th-century concepts of how products were purchased. While it is commonly presumed that the manner in which we purchase products and services has remained constant over the years, our shopping habits have in fact undergone four revolutions in the past two centuries, and a fifth revolution is looming around the corner, if not already here. The evolution of disruptive technologies like Artificial Intelligence in the purchasing process has the potential to change the face of conventional law. Online marketplaces such as Amazon and Flipkart have taken over the role of the traditional shopping assistant and make recommendations to us based on our previous search terms and purchase history. The use of predictive text, and the selective awarding of the 'Amazon's Choice' and 'Flipkart Assured' tags to certain brands over others, has been argued to be anticompetitive, affecting the rights of brands. We also have new innovations like chatbots, Siri, Alexa and Google Assistant, which respond to our queries, survey the market on our behalf and influence our decision-making. The rise of influencers and social media platforms has further exacerbated this issue. AI-based technologies like Amazon Dash are even capable of automatically ordering consumables that are running out in our homes. Thus, without our realising it, Artificial Intelligence has gradually taken over all aspects of the shopping experience, and the human has been all but eliminated from the purchasing process. The paper explores the dichotomy between AI and trademark law: the ways in which Artificial Intelligence has led to the devaluation of trademarks and, conversely, the various AI tools which have proved beneficial in trademark search and registration. Like any technology, AI has the potential to be both a boon and a bane for trademarks, and its alleged drawbacks can be cleverly utilised to strengthen the position of brands, using the methods discussed in the paper. The impact of AI on traditional tenets of trademark law such as the 'average consumer', 'imperfect recollection', 'likelihood of confusion', and 'phonetic and visual similarity' has also been studied and analysed. It is predicted that the increase in voice search will lead to a greater focus on the phonetic aspects of trademarks, while at the same time reducing instances of typosquatting and domain-name squatting. Recent case laws regarding the liability of intermediaries such as eBay, Amazon and Google in promoting the sale of counterfeit goods and misleading the


consumers have also been discussed in detail. The purpose of the paper is to explore the aforementioned concepts and analyse whether the existing law is adequate to deal with these changes brought about by the AI revolution in the retail sector.

Introduction

The trademark law of today is still fundamentally based on 19th-century concepts of how products were purchased; however, the involvement of Artificial Intelligence in retail threatens to necessitate a change in the conventional law. At its very heart, trademark law concerns itself with how goods and services are purchased, and the emergence of disruptive technologies like Artificial Intelligence has brought about drastic changes in our shopping experience, which in turn impacts the law of trademarks. AI affects the purchasing process in two ways: (a) by providing limited information or making specific suggestions to the consumer; and (b) by itself taking the purchasing decision. Applications like Amazon Alexa usually recommend only three products to the customer. The application controls which brands are suggested to the consumer, instead of the human consumer having all the brand information at his fingertips. Virtual assistants like Siri and Alexa have the potential to become 'gatekeepers' between brands and customers, controlling which information is supplied to us. On the other hand, there are smart home devices which automatically order things on our behalf. This is known as the 'automatic execution model' of AI; requiring little to no human interaction, it has completely modified the traditional shopping experience from a 'shopping then shipping' model to a 'shipping then shopping' model. It is quite an interesting phenomenon, and it is currently being explored by WIPO in an attempt to clarify "the most-pressing concerns relating to intellectual property policy in the dynamic and rapidly evolving field of AI."6 The paper has been divided into five parts. Part 1 defines what Artificial Intelligence is, what trademarks are, and how AI tools are being used to improve trademark search and registration. Part 2 analyses the four revolutions in the product purchasing process and describes the latest AI technologies in the retail sector.
Part 3 talks about the dichotomy between AI and Trademarks and predicts the impact of AI on traditional concepts of trademark law such as 'average consumer', 'imperfect recollection', 'likelihood of confusion', 'concepts of aural, conceptual and visual comparison' etc. Part 4 essentially explores the pros 6

6. Wipo.int. 2021. WIPO's Second Session of Conversation on IP and Artificial Intelligence Ends with Outline of Next Steps. [online] Available at: <https://www.wipo.int/pressroom/en/articles/2020/article_0014.html> [Accessed 16 March 2021].


and cons of the technology – the ways in which AI is adversely impacting trademarks and, conversely, how it is strengthening them. Part 5 concludes with a detailed analysis of the judicial precedents relating to the subject at hand and postulates that the law needs to be modified or interpreted from the point of reference of the artificial consumer.

1.1 What is artificial intelligence?

In simple terms, Artificial Intelligence refers to the intelligence displayed by artificial machines, as compared to the natural intelligence demonstrated by living organisms such as animals and human beings. One important distinction between the two is that the former is more accurate and quicker in scientific or mathematical calculations, but the latter possesses consciousness, emotional intelligence and a sense of justice, which AI does not (at least at this stage). AI is one of the most disruptive technologies of this era and is rapidly expanding into every sphere of our lives. There is a running debate that AI may prove to be a danger to humanity if its progress is not stopped at a suitable point.7 Many have also predicted that AI could result in mass unemployment, as many of the tasks currently performed by humans would be automated.8 While these challenges belong in the distant future, there is a more pressing concern just around the corner – the impact of AI systems on intellectual property laws, in particular on trademarks.

1.2 What are trademarks?

Trademarks are a form of intellectual property rights. They have been defined in Section 2(zb) of the Indian Trade Marks Act, 1999. A trademark is a unique sign which is capable of distinguishing the services and products of one entity from those of another. The sign may take the form of a logo, numbers, alphabets, a word or a short phrase, a picture, the shape of goods, the packaging of the product, a chosen combination of colours, or a combination of any of these aspects.

The main purpose of a trademark is to enable the identification of which goods belong to whom. The earliest example of this can be traced back to when farmers used different markings on their cattle, so that they could distinguish which animals belonged to whom. With the development of commerce, marks began to serve other purposes too. During the medieval period in England, sword manufacturers were required to put certain identification marks on the swords made by them, so that defective goods could be traced back to the maker and a suitable punishment could be meted out to him.9

7. "Stephen Hawking believes AI could be mankind's last accomplishment". BetaNews. 21 October 2016.
8. Ford, Martin; Colvin, Geoff (6 September 2015). "Will robots create more jobs than they destroy?". The Guardian.
9. BananaIp Reporter, History and Evolution of the Trademark System, BananaIP Counsels (2019), https://www.bananaip.com/ip-news-center/history-and-evolution-of-trademark/ (last visited Mar 16, 2021).


The concept now involves principles such as 'no one has the right to pass off his products as those of somebody else' and 'no one can take advantage of the goodwill of another'. Trademark law has not seen any major disruptions since its codification. However, the introduction of disruptive technologies such as Blockchain, AI, Robotics, Data Analytics and the Internet of Things threatens to uproot the very basic tenets of trademark law, as will be explored in this paper.

1.3 Use of AI Tools in Trademark

• Since the main advantage of using a machine is that it reduces human error, many companies around the world have launched AI tools capable of helping trademark lawyers. These software tools can quickly scan through voluminous information, perform document review, interpret contracts, conduct legal research and can even assist in trademark clearance searches and related policies.10
• Various other systems and tools have been introduced in the market to assist in pre-filing searches, which involve determining whether there is any likelihood of confusion between marks. This likelihood can be in terms of similarity of the products and services or similarity in the marks themselves.
• Thus, AI technology has improved speed and accuracy, which in turn saves lawyers' time. The success rate of these tools is much higher in the preliminary screening process, and lawyers can quickly advise their clients on whether they can register and use a particular trademark or not.
• TrademarkNow is an intelligent, online trademark management platform.11 The company claims that its algorithm has turned searching for registered goods and trademarks from a week-long process into a fifteen-second one. This technology has proved to be a game changer for the industry, and its services are utilized by law firms, companies and branding agencies alike.
• In April 2019, the World Intellectual Property Organization announced

10. Yashvardhan Rana, Artificial Intelligence and Trademark Law in the Digital Age, The National Jurist (2020), https://www.nationaljurist.com/international-jurist/artificial-intelligence-and-trademark-law-digital-age (last visited Mar 16, 2021).
11. About TrademarkNow, https://www.trademarknow.com/about.


their AI-based software,12 which makes use of deep machine learning to help in searching for similar registered trademarks. The system encompasses 45 trademark offices, and the algorithm can even search by criteria such as jurisdiction and the classification of products and services.
• In June 2019, a new technology called TradeMarker came into being.13 It aids in image searches by analysing four different factors: content similarity, text similarity, pixel-level image similarity, and manually added similarity criteria. The tool requires human assistance, but it has saved time and resources by a factor of five while ensuring an 80% search success rate.
• Many companies are making use of AI to review product listings, process copyright or trademark takedown orders, and identify and remove substitute and counterfeit goods.

Thus, AI-based searching has proven to be more affordable, quicker and more accurate for brands, but at the same time it has some inherent drawbacks. A software tool can only search the data that was input into it, or the participating jurisdictions that have provided online access to their databases. As such, unregistered images, text references, common law searches and trademarks which are pending registration are excluded from the purview of these systems. It has also been claimed that AI is useful for smaller tasks but humans remain necessary for completing complex or broad tasks. Most companies ask their employees to review the data predicted and results generated by the AI, so as to prevent any legal consequences of relying on an inaccurate report.
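The multi-factor comparison described above lends itself to a simple weighted-scoring sketch. Everything below – the factor names, weights and threshold – is an illustrative assumption for exposition, not TradeMarker's or any vendor's actual algorithm:

```python
# Illustrative sketch: combining several similarity signals into a single
# clearance score, in the spirit of multi-factor tools like TradeMarker.
# Factor names, weights and the threshold are assumptions, not a real product.

FACTOR_WEIGHTS = {
    "content": 0.4,   # similarity of the marks' imagery or subject matter
    "text": 0.3,      # similarity of any text appearing in the marks
    "pixel": 0.2,     # raw visual (pixel-level) similarity
    "manual": 0.1,    # criteria added by a human reviewer
}

def clearance_score(scores: dict) -> float:
    """Weighted average of per-factor similarity scores, each in [0, 1]."""
    return sum(FACTOR_WEIGHTS[f] * scores.get(f, 0.0) for f in FACTOR_WEIGHTS)

def likely_conflict(scores: dict, threshold: float = 0.7) -> bool:
    """Flag a candidate mark for human review when the combined score is high."""
    return clearance_score(scores) >= threshold
```

Consistent with the paper's observation that these tools assist rather than replace lawyers, a flagged pair would still go to a human reviewer; the score only prioritises the queue.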

12. Matan Rusanovsky, Idan Mosseri & Gal Oren, Intelligence based Trademarks Similarity Search Engine, HCI International 2019 (2019).
13. How AI will revolutionise trademark searches, World Trademark Review (2019), https://www.worldtrademarkreview.com/ipoffices/how-ai-will-revolutionise-trademark-searches (last visited Mar 16, 2021).
14. Lee Curtis & Rachel Platt, ManagingIP.com Year End 2017, HGF.com, https://www.hgf.com/media/1173564/09-13-AI.PDF.

2.1 Revolutions in the product purchasing process

It is a common presumption that the manner in which we purchase products and services has remained constant over the years, which is why it might be surprising to note that this is not the case. Our shopping habits have already undergone four revolutions in the past two centuries, and a fifth revolution is looming around the corner, if not already here.14

1. The basic principles of trademark law can be traced back to the nineteenth century. During this period, the traditional method of shopping was to enter a shop, specify your demand, and a shop assistant would recommend which


product to buy. Most of the products were unbranded in nature and the shop assistant acted as a 'filter' between the goods and the consumer. The assistant was the only actor in the product purchasing process who had knowledge and awareness of the products, and the customers faithfully trusted the opinion of the shopkeeper.
2. This Victorian mode of buying underwent a significant change with the advent of the modern self-service grocery stores, which are believed to have originated in Tennessee in 1916. The role of the shop assistant was considerably reduced and there was now no filter between the goods and the ultimate consumer. The customers were themselves responsible for making decisions and thus developed an awareness of all the products available in the supermarket. This soon led to an increase in the importance of product branding, as brands replaced the Victorian shop assistant and took over the role of directly influencing the consumers.
3. Then there was the third revolution, which emerged with the development of the World Wide Web. The technology was launched in the 1990s but gained prominence in the early 21st century. Retail started shifting online and posed new challenges for trademark law. Search engines played a significant role in Internet shopping, and brands had to start focusing on aspects such as domain names, keywords, metadata and initial interest confusion. The web also allowed new and small sellers to come into the market, and the choices available to the consumer increased at an exponential rate. The internet also facilitated consumer awareness: shoppers could learn about harmful ingredients and what factors to look at while buying a product, and could easily compare the quantities and prices of different brands. The customers were in complete control of the purchasing decision.
4. The most recent trend is the social media revolution. Social media platforms such as Instagram, Facebook, Twitter and YouTube are swarming with advertisements. Brands have started using online promotions as the key method to generate awareness about their brand, instead of relying solely on traditional television and radio ads. This has also given rise to an entirely new profession of 'influencers' and 'product reviewers', who try out different products and services and share their experiences with their loyal audiences. Celebrities and sportspersons also became involved in advertising, and brand logos, trademarks and colour combinations came to be well known by the people.
5. The revolution with which this paper is most concerned is the introduction of Artificial Intelligence applications. E-commerce platforms such as Amazon, Flipkart and Myntra have quickly gained popularity and are based on algorithms which show the products the user would be most interested in. Applications such as Amazon Echo, Alexa, Siri and Google Home have fascinated the world, and more advanced versions are still being developed. We also have robot assistants like Pepper, and AI-based personal shopping assistants in the form of Amazon Dash and Mona.


2.2 Examples of Artificial Intelligence in Retail

There are various examples of AI systems functioning in the shopping environment. The most commonly encountered and simplest form of this technology is the Amazon shopping platform. The website tracks the search criteria used by customers and suggests 'products which may be of interest' to the user. Apart from tracking our browsing history, the system also keeps a check on our purchase history and is capable of making "recommendations based on your order." These targeted suggestions and recommendations have taken over the role of a traditional shopping assistant, and consumers have found these virtual platforms more comfortable to use.

Chatbots have become extremely popular in e-commerce nowadays. A study found that consumers prefer to have conversations with brands rather than merely receiving promotional messages from them, and one effective way to implement this is to interact through a chatbot. Bots help customers get instant gratification and quick answers to their queries and may also provide 24/7 customer care support. This increases the conversion rates of the platform by building the consumer's trust. WeChat is one such platform which allows merchants to directly interact with their target audience by permitting businesses to add their customers as friends. Shopify Messenger is another similar bot which offers a complete shopping experience on Facebook Messenger. eBay also launched its ShopBot in 2016, claiming that its vision was to make the shopping experience as simple as talking to a friend.15 Users could send pictures of a dress they liked and the bot would find a similar product from eBay's huge inventory. Apps such as CelebStyle allow users to find products worn by their favourite celebrities, and a bot called 'Madi' helps its customers choose hair colours according to their personalities.16

Amazon Echo was originally launched as a brand of smart speakers but went on to exceed everyone's expectations, and is now sold as a smart home hub.17 The device is connected to a voice-controlled personal assistant called Alexa and is capable of making to-do lists, reporting the weather and purchasing products. The AI offers recommendations to consumers on the basis of different factors, such as previous purchase history. While these innovations may seem harmless

15. RJ Pittman, Say "Hello" to eBay ShopBot Beta, eBayInc.com (2016), https://www.ebayinc.com/stories/news/say-hello-to-ebay-shopbot-beta/.
16. Aaron Orendorff, 15 Best Shopping Bots for Ecommerce Stores, Yotpo (2020), https://www.yotpo.com/blog/shopping-bots/ (last visited Mar 16, 2021).
17. Joshua Brustein, The Real Story of How Amazon Built the Echo, Bloomberg.com, April 19, 2016, https://www.bloomberg.com/features/2016-amazon-echo/ (last visited Mar 16, 2021).


and in fact helpful to some, the truth of the matter is that they have completely removed the human from the product selection process. It is Alexa who surveys the market, gathers information and makes a purchase. Who would be liable if the AI suggests a counterfeit product? Or automatically orders items priced higher than the usual cost? In 2016, Google launched its smart speakers under the name Google Home, to compete against Amazon Echo, which further led to the creation of the Google Assistant. Siri, Alexa and Google Assistant are the three most popular virtual assistants in vogue these days. Amazon Dash is powered by AI software and consists of a barcode scanner and a voice-operated device, capable of re-ordering consumable items which are running out at home or the office at the click of a button.

3.1 AI and Trademark Dichotomy

Since its origin, the purpose of trademark law has been to eliminate any confusion between brands and to protect the original manufacturer or producer. In order to do this, brands engage heavily in advertising, customer support and other techniques in an attempt to form long-term associations with their customers. Thus, the relationship depends on the formation of an emotional bond between the two. But what happens when the emotional bond is substituted by an artificial bond? When the warm and friendly suggestions of the shop assistant are replaced by the cold and calculated algorithmic recommendations of a computer? We may not have realized it, but human choices are increasingly being substituted by artificial choices. These two sets of choices do not overlap; they are mutually exclusive in nature. Despite the release of new robots like Pepper, which behave in a 'more human' manner, at the end of the day it is an algorithm, a machine parading as a human, making decisions on our behalf.

3.2 It's All About People, or the Lack Thereof

Let us consider a step-by-step account of what happens when we try to buy anything online.

1) You open a shopping app, take Amazon for example, and start typing in the search box for whatever product you are looking for. Predictive text is displayed. Now, instead of continuing to type, users may simply click on one of the predicted search terms. Thus, at the very first stage, the user is deprived of his choice and gets influenced by the search terms displayed by the AI, which may lead to different results than what the customer was originally looking for.
2) Once we have entered the search term, the products are displayed on the webpage in the order chosen by the AI. There does not appear to be any clear logic behind which brand's products made it to the first page and which are on the fourth page. The products are displayed neither in the alphabetical order of their brand names, nor by five-star ratings first, nor according to prices


(although these settings are available upon changing the default settings). The default setting on Amazon is 'sort by: featured', which again means that the platform has chosen which products to feature and in which order.
3) The first two or three products on the webpage usually carry a label announcing a 'limited time / lightning deal', creating a sense of urgency to buy in the mind of the user.
4) Next in the list is something called 'Amazon's Choice' products. The website claims that it is merely 'highlighting' highly rated and well-priced goods which are available for immediate shipping. While this does sound like a helpful feature for customers, is there any transparency in the process of awarding this recognition to a particular brand? There are many equally highly rated products in the same category, so what makes the software pick that particular one? Intrigued by this, I decided to conduct a mini experiment18 to see how this prestigious label is awarded to brands in different categories. First, I typed 'best soap' on the Amazon India website. In the list which was shown, the Amazon's Choice badge had been awarded to a Biotique soap, which, while well rated, did not appear to be 'well priced' as Amazon had claimed: its price per 100 grams was almost double that of some other soaps. I then modified the search criteria to 'good soap' and the results were completely different. The winner of Amazon's Choice was now a Good Vibes charcoal soap which had only 34 ratings, as compared to other soap brands like Pears and Dettol which had over 5,000 and 10,000 ratings respectively. Thus, it appears that the products highlighted by Amazon are not necessarily well priced or highly rated, as is claimed. Out of curiosity, I then typed only 'soap', and the winner was now a Himalaya neem soap. It appears, then, that the labels being bestowed do not follow any consistent reasoning. You get different results on typing a general term like 'soap' or 'toothpaste', and apparently 'good soap' and 'best soap' are two mutually exclusive categories. The label is keyword dependent, and results change dramatically based on what you type.
5) Apart from these, there are also Amazon Best Seller badges, which are awarded on the basis of the most-sold products and are updated hourly. These, again, are capable of influencing the customers.
6) Some online news outlets have questioned these seemingly arbitrary parameters set by Amazon. To this, an Amazon spokesperson responded19 that these are just "recommendations made by their website and the users are free to ask for specific goods and brands if they wish to." While this makes

18. The search results are as observed by the Author on 14 March 2021. The results and the Amazon's Choice labels are subject to change with time and other factors.
19. Louise Matsakis, What Does "Amazon's Choice" Actually Mean?, Wired (2019), https://www.wired.com/story/what-does-amazons-choice-mean/.


sense, it is worth exploring how many customers get influenced by these recommendations and labels. Why would anyone scroll through five pages if they can simply order the product placed on the first page? Why wouldn't they simply select the products recommended by the platform they trust? How does this affect the rights of other brands, especially those which, despite being well known in the real marketplace, have not found a mention in the first few result pages of the website? While this example was of Amazon, other shopping apps have similar functionality: the Amazon's Choice label can be equated to 'Flipkart Assured' products, and so on. In this manner, AI has been affecting our choices every single day, without us even paying attention to the phenomenon. The substitution of humans by machines is real and very much here. Retail shopping has gradually shifted from being reactive to predictive, and the human has now been completely eliminated from the purchasing process.20 So will the principles of trademark law still be applicable to this 'artificial consumer'?

3.3 How Will AI Impact the Basic Tenets of Trademark Law?

a. Doctrine of likelihood of confusion

The entire rationale of providing protection to trademarks under intellectual property laws is to maintain the distinctiveness and goodwill of the brand to which the mark belongs. There should not be any scope for confusion in the mind of the consumer. Any such confusion may not only affect the image of the brand but may also result in a diversion of revenues which would have rightfully accrued to that company. One of the ways in which a trademark can be infringed is by using 'deceptively similar' marks. The concept is discussed in Indian law, which describes21 a deceptively similar mark as one which so nearly resembles another's mark that it is likely to deceive the public or cause confusion. Thus, it is not necessary that consumers are actually confused: the likelihood of confusion is sufficient to constitute liability. This is highly probable in the case of online shopping, where users are not entering the shop or showroom of the brand but are buying goods from individual sellers. There is no guarantee whether the goods are authentic or counterfeit in nature. Often, the same product is sold by multiple sellers at different rates, and it gets confusing for consumers to identify which is the authentic one; they end up randomly selecting a seller. Platforms like Snapdeal have offered Lakme and Maybelline makeup products at rates cheaper than those in a street market, and it was later found that they were fake copies of the

20. Supra note 9.
21. Section 2(h) of the Indian Trade Marks Act, 1999.


original goods.22 It has also been reported that many e-commerce websites are selling fake gadgets or sending knockoffs to the customers. A recent survey has revealed that one in five products sold by online sellers is fake.23 These instances are common examples of how AI-mediated shopping has become a source of confusion in the purchasing process. The ease of setting up seller accounts has also made it quite easy for new and smaller sellers to enter the market. Many local sellers are allowed to sign up, and they may intentionally register with a name that is deceptively similar to another brand, or with a logo which looks similar to that of a well-known brand, thus misleading customers who want to quickly buy goods at the click of a button without verifying the exact identity of the seller or brand.

b. Doctrine of Initial Interest Confusion

This phenomenon occurs when the customer is interested in buying one product but is suggested an alternative product of the same category, leading to a temporary state of confusion. The doctrine has been in existence since the 1970s but was applied for the first time by the US Ninth Circuit in Brookfield Communications Inc. v West Coast Entertainment Corp,24 where the Court recognized that (i) the use of deceptively similar domain names and (ii) the use of another person's trademark as a keyword create initial interest confusion. The case concerned the use of registered trademarks as meta tags by the infringer. The phenomenon can be witnessed when buying a particular product on Amazon: just when you are about to click the 'add to cart' button, the site shows a row of products under the heading 'see lower priced items'. This has the potential of creating a doubt in the mind of the customer: why should they buy the higher-priced product (of their own choice) when similar goods are available at lower prices from a different brand? This encourages them to try out new brands instead of opting for their usual ones. Similarly, a customer may have a certain haircut in mind, but the software suggests a different haircut on the basis of that person's face shape and other parameters. This is how initial interest confusion takes place.

c. Doctrine of Post Purchase Confusion

22. Snapdeal delivers 'fake' products; company founders booked, ETRetail.com (2019), https://retail.economictimes.indiatimes.com/news/e-commerce/e-tailing/snapdeal-delivers-fake-products-company-founders-booked/70388947 (last visited Mar 16, 2021).
23. Digbijay Mishra, One in five products sold by e-tailers is fake: Survey, The Times of India (Nov 5, 2018), https://timesofindia.indiatimes.com/business/india-business/1-in-5-products-sold-by-e-tailers-is-fake-survey/articleshow/66504276.cms (last visited Mar 16, 2021).
24. Brookfield Communications Inc. v West Coast Entertainment Corp, 174 F.3d 1036 (9th Cir. 1999).


This occurs when customers get confused after having purchased the product, due to the use of a protected trademark on an unauthentic product of a different quality. This alleged misuse of the mark can have two effects:
• It can tarnish the reputation of the original owner of the trademark, as the non-genuine product was of a lower quality.
• It can lead to a diversion of revenue from the original brand if people unintentionally bought the fake product and found the quality to be at par with or better than the original. This practice is unethical, as it takes advantage of the goodwill and reputation of the owner.

The main subject matter of confusion after making the purchase includes:
• Competing products – where two products have a very similar 'trade dress'. Trade dress refers to the design or packaging of a product. In a recent case regarding Oreo biscuits, the maker of Oreo filed an action against Parle for copying the packaging and design of its cookie, since the goods were deceptively similar and competing in nature.25
• Counterfeits – where either the product protected by a trademark, or the trademark itself, has been copied and is sold under the guise of the original.
• Reselling altered products – where the seller was selling the original product but alters or modifies it into something more expensive, or resells it as a refurbished item without affixing the name of the company to it.
• Making use of an authentic product to make and sell a new product. This is regarded as a valid practice, and the rule of post-sale confusion will not apply in this case.

d. Average Consumer

The European Court of Justice has recognized the average consumer as "reasonably well informed and reasonably observant and circumspect".26 This principle was also applied in the Google France case, where the Court described the average internet consumer as a person who is reasonably well informed and observant, capable of distinguishing genuine websites from sites selling counterfeit products. Whether there is any likelihood of confusion between identical or similar signs, goods and services, and hence any trademark infringement, is evaluated through the lens of this average consumer.

25. Rasul Bailay, Oreo-maker takes Parle to court for "copying" cookie design, The Economic Times, March 3, 2021, https://economictimes.indiatimes.com/industry/cons-products/food/oreo-maker-takes-parle-to-court-for-copying-cookie-design/articleshow/81298570.cms (last visited Mar 16, 2021).
26. Sabel BV v Puma AG, Rudolf Dassler Sport [1998] ECR I-6191.
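The average-consumer test resists mechanical capture, but the 'deceptively similar' comparison can at least be caricatured in code. The sketch below scores the textual resemblance of two word marks with a stock edit-distance ratio; the 0.8 cut-off is an arbitrary assumption standing in for the confusion threshold, and a real tribunal would, as noted above, weigh visual, aural and conceptual factors together rather than rely on any single number:

```python
from difflib import SequenceMatcher

def mark_similarity(mark_a: str, mark_b: str) -> float:
    """Crude textual similarity between two word marks, scored in [0, 1]."""
    return SequenceMatcher(None, mark_a.lower(), mark_b.lower()).ratio()

def deceptively_similar(mark_a: str, mark_b: str, cutoff: float = 0.8) -> bool:
    """Toy stand-in for the 'likelihood of confusion' threshold (assumed 0.8)."""
    return mark_similarity(mark_a, mark_b) >= cutoff
```

On this toy measure, a one-letter variant of an existing mark scores high enough to be flagged, while two unrelated marks do not.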


In the new world of Artificial Intelligence, the customers are Internet users, and the term 'average consumer' may apply to them or to the AI itself. In the case of automated ordering of products or voice search technology, it is the AI which is responsible for selecting and placing an order. Virtual assistants like Alexa, Siri and Amazon Echo are taking the place of the average consumer, except that they are by no means 'average'. The software has perfect recollection capacity and is quite unlikely to be confused. However, it is possible that the AI gets confused and accidentally orders something else due to certain accents or mispronunciations of words. This implies that the phonetic aspects of trademark law will become a decisive factor in considering the likelihood of confusion and, ultimately, in deciding the liability of the platform.

e. Imperfect recollection

Although the average consumer is considered in law to be reasonably well informed and observant, the fact remains that an average consumer rarely gets an opportunity to compare two similar marks side by side. The consumer has to rely on the imperfect image stored in his mind. An average consumer is attracted to different aspects of a product and may remember only the colour combination used, or a vague image of the logo. Even where an online seller has not infringed the trademark of another directly, but has used similar packaging, a similar colour combination or a slightly different brand name, his goods might, on account of the imperfect memory of the average consumer, pass off as those of the original trademark holder. Some companies also hire celebrity look-alikes to shoot advertisements for their brand at a more affordable rate. This creates a misconception that the particular product has been sponsored by a celebrity, without the brand actually having to mention the name of that popular personality.

f. Visual, phonetic and conceptual similarity

It is a well-known concept that the analytical framework for comparing trademarks comprises their visual, phonetic and conceptual impact. The dominance of one form of comparison over the others varies with the purchasing process utilized. For example, if the goods are bought at a store, they will be visually examined, and the visual impression which the signs create in the mind of the consumer will be assessed to decide if they create any confusion. Thus, it has to be decided on a case-by-case basis whether it is the visual, aural or conceptual impact which has to be analysed to judge the likelihood of confusion. Currently, the majority of online shopping takes place by typing keywords into Google or by looking for specific items on e-commerce platforms. Predictive text plays an important role in this regard and is capable of influencing the consumer to a large extent. In cases where the software does not recognize what we
Artificial Intelligence and Policy in India, Volume 3

89

are looking for, it automatically asks ‘Did you mean this?’ and displays a completely different search term. It has even been observed on platforms like Flipkart, Myntra etc that when they don’t have a particular product, for example - a ‘black A line skirt’ it would simply display other clothes which are black in colour, despite the keyword that was entered by the user! With the increase in voice searching, it can be predicted that phonetics will be the main focus of a trademark. This may lead to the reduction of importance of visual impact of branding. There may even be fewer instances of domain name squatting and typo-squatting, since majority of people will be shifting to voice search rather than typing the product names online. Typo squatting is predicted to be lessened as customers have already started relying on voice assistants like Alexa, Siri, Cortana and Google Assistant to place orders, look for nearby restaurants and other functions. Domain name squatting will also reduce as the field would no longer be attractive and it may not be worthwhile to register similar domain names. Thus, certain types of infringing activities may soon witness a reduction in their commission rates. Search Engine Optimization (SEO) may also undergo a drastic change. This is not the first time such a change would be made which impacts the practicalities of the law on Trademarks. Meta tags are snippets of text which describe the content of a webpage. These tags are not displayed but on the actual webpage, but can be found in its source code. In other words, they are content descriptors which are useful in telling search engines what the page stands for. Search engines like Google and Yahoo use meta data from meta tags in order to derive information about that page which is sued for ranking purposes. When online shopping was initially introduced, there was a surge in trademark infringement cases regarding the use of metadata and metatags. 
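For concreteness, a meta tag is simply an element in a page’s HTML source, and its contents can be read programmatically. The sketch below uses Python’s standard library; the page content is invented for illustration.

```python
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    """Collects the content attribute of a <meta name="description"> tag."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content")

# Hypothetical page source; search engines once leaned heavily on such tags.
page = '<html><head><meta name="description" content="Handmade soaps and fresh cosmetics"></head></html>'
parser = MetaDescriptionParser()
parser.feed(page)
print(parser.description)  # -> Handmade soaps and fresh cosmetics
```

Because this text is invisible to ordinary visitors but read by crawlers, stuffing a competitor’s mark into it was an easy, hidden form of free-riding, which is why the early metatag disputes arose.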
One such landmark case is that of Reed Executive v Reed Business Information. However, as the importance of metadata in SEO declined, such infringements have become rare. There is a high probability that, with the increase in voice searching, new types of optimization will come into being and, in turn, will lead to the development of new types of trademark infringement. Currently, optimizers are trying to determine the parameters on the basis of which Alexa ranks products on Amazon.

g. Voice search

Experts have predicted that by the year 2023, 50% of product searches will be conducted using voice commands, as a direct consequence of AI assistants coming into play in shopping. In the majority of countries across the world, including regions like the European Union, oral use of a trademark is considered use of that trademark. Thus, use of voice commands to look for a particular brand amounts to trademark use. The dominance of one type of use over another has waxed and waned over the years, and phonetic cues are expected to be in the spotlight due to the shift to voice searching. AI assistants are based on voice recognition software. However, people belonging to different cultural and ethnic groups may pronounce the same word differently. It is also possible that the AI may not correctly interpret the information. Thus, the accents and pronunciation of customers, and the interpretation of the software, are going to form an important factor in the product purchasing process. Amazon appears to have recognized this as a potential issue as well: a recent report claimed that its employees use ‘samples’ of recorded communications between customers and Alexa in order to improve the accuracy and reliability of its voice recognition software.

4.1 HOW IS AI ADVERSELY IMPACTING THE ROLE OF TRADEMARKS?

Artificial Intelligence has changed the way consumers perceive brands. They no longer look for the same criteria and may rely on factors like the highest-rated product, or they might simply have seen an advertisement for a brand on Instagram around ten times and finally decided to give the product a try. It is important to note the following ways in which AI-based systems have changed the product selection process and have ultimately led to a devaluation of the role of trademarks:

• Automation – Smart appliances and smart home devices have started gaining popularity. They utilize AI to streamline the shopping experience and minimize human participation in the ultimate decision. Devices such as Amazon Dash provide replenishment services. Such a device has an auto-detect system which can identify when we are running out of consumables such as sugar, and will place an order and have sugar delivered to our doorstep before we realize we are running low on stock. The device can also accept voice commands such as “order more pens” and will make the decision without requiring any further input from the human, by utilizing information about our previous order history from the e-commerce platform connected to it.

•

E-commerce platforms – As was covered in a previous section, online marketplaces such as Amazon, Flipkart and Myntra make use of predictive text, choose which products to feature on which page, often have multiple sellers selling the same brand’s goods at different prices, and add labels next to product names such as ‘Amazon’s Choice’ and ‘Flipkart Assured’, which influence the consumer to a large degree. We believe we are the ones in control of the product selection process, but in reality the AI is continually at work, using predictive analytics and algorithms to offer customized product suggestions.

• Barriers to entry – It is quite possible that current trends will prevent new sellers from gaining market share from already established players. Earlier, one television advertisement broadcast on a major channel would have sufficed to introduce a product into the market and interest at least some consumers in trying it out. Nowadays, the shopping process is heavily manipulated by online platforms and also by the thousands of reviews given by influencers who claim to ‘love a particular brand’ they collaborated with. Many consumers may also be locked into subscriptions with brands, or use automation or suggested-products features to make their decisions; thus, their decisions will always be related to their prior purchasing history, and there is very little scope for a new start-up or brand to make its name or gain market share from its competitors. Some might even argue that this constitutes an unfair trade practice under competition law, but only time will tell how this plays out.

• Elimination of preferences – It is also possible that brand preferences may be completely wiped out, paving the way for default preferences which are entirely unrelated to the quality or goodwill of the brand. For example, consumers may choose to always select ‘prices low to high’ and simply pick whichever is the cheapest available option, without having to employ any kind of decision-making on their part. This phenomenon is also known as “brand washing”, as it results in a retail decision which has no correlation with the brand’s image or quality. Thus, the latest trends depict a detachment of products from their corresponding trademarks. With the existence of parameters such as ‘only show Prime products’ on Amazon, or ‘delivery by tomorrow’, or ‘only products eligible for pay on delivery’, it appears that people are only concerned with getting quick and cheap deliveries, without any attachment to a particular brand. This poses a grave threat to brands, and they must consider possible means of adapting to these new circumstances to sustain their businesses in the long run.

4.2 How Can AI Strengthen Brand Recognition?
As is true of all technology, AI can be both a boon and a bane for brands, depending on how they utilize it. It is clear that AI has created a changing environment which is adversely impacting the role of trademarks, but is there any way the same technology can be used to strengthen the position of brands? Making the software work in their favour might be easiest for those who are already key players in the market. By offering beneficial subscriptions on online platforms, they would be able to lock consumers into purchasing their goods or services repeatedly and regularly. Even in the case of automated purchases by smart devices, previous purchases are referred to. Thus, AI can be beneficial for retaining consumers once they have already bought something from the brand. Even if they are tempted by a new brand, they might not be motivated enough to modify their default settings, or may simply be unaware of when their next purchase is due. Subscriptions or similar tie-up arrangements can also be beneficial in combating the nuisance of counterfeit goods. Setting a default preference of buying from one particular seller reduces the chance of people mistakenly buying look-alike products. It also removes initial-interest confusion and the risk of mis-selection. Customer reviews also play an important role in this regard. Even a single review stating that the products received were inauthentic and of low quality, and redirecting people to a different seller with authentic goods, would help prevent the image of the original brand from being spoiled. Thus, in this manner, online marketplaces can help strengthen trademarks and push fake competitors out of the market. New brands and start-ups can also benefit from e-commerce practices. By initially selling goods at a low cost, a product will show up in the list of cheapest available options. Sorting prices from ‘low to high’ is a practice followed by the majority of online consumers and can help new brands establish a name for themselves and acquire customers. Later on, prices can be increased, and the retention rate will depend on the quality of the products and the relationship the brand has managed to form with the consumer. AI can also help in connecting brands with the right audience. Nowadays, there are many survey sites which study how consumers of different categories interact with a brand, the factors they look for most, and so on. The internet has made it easier to gather data about customer habits, preferences and the latest trends, and can thus help a brand identify its target audience. Quick comments and feedback on social media can also enable brands to refine their advertisements and campaign strategies and to evolve with changing social and economic considerations. It is also easier to make an advertisement which resonates with the audience and ‘goes viral online’. Brands may also embrace new and innovative methods of increasing their reach, such as partnering with influencers or with other brands.
There may also be ways to influence the decisions made by the AI. For example, if a smart device knows which headphones we use, and the headphone company recommends the accessories or gadgets of a particular brand, then the AI may treat that brand of accessories as our first preference. Personalization is yet another way to create and maintain a relationship of trust with people. By introducing chatbots on their websites, brands ensure that consumers always get quick replies and are made to feel like a priority. Chat-based customer care options are also useful, especially for those who despise arguing with customer care agents on a call or being on hold for hours. Platform algorithms are also continually working on customizing advertisements according to our preferences. AI has also made it possible for every customer to have their own personal shopping assistant and be treated like a VIP customer. Such an assistant answers frequently asked questions, can recommend hairstyles and dresses according to our face and body measurements respectively, and can do much more. Thus, AI has the potential to be used as a tool for creating an emotional connection between a brand and its loyal customers.
There is also an increasing trend of people giving priority to their work environments, and the most admired brands are those with the most motivated employees and supportive mentors. AI is also being used by certain companies to look for particular traits in prospective employees and to shortlist job applicants. This can give those companies a competitive advantage and will increase brand strength. Search Engine Optimization is also a key skill that businesses need to focus on. Properly marketing their content online will go a long way towards promoting their services. Companies which are well ranked and show up in Google results will have an advantage over others in terms of reach and awareness. Efforts must be made to capitalize on these new forms of advertising and to drive traffic to official websites. Offering app-only discounts is also a great way to get customers to download a company’s app and make regular use of it. AI can also be used to collect information about consumer psychology and habits, such as which keywords make people more likely to open an email. Companies may also choose to send creative video teasers or interactive offers which can be viewed only by clicking the link given in the mail, thus driving traffic to the website. Thus, despite all the drawbacks and potential challenges of AI in retail, it is clear that there can be various advantages too. Social media and online marketplaces have brought brands into the lives of the people. We no longer have to visit a shop to interact with brands; they are always here, in our homes and offices, available at the click of a button. This 24/7 access to brands’ social media pages, and the ability to place orders from the comfort of our homes, can play a significant role in increasing consumer demand and strengthening our relationships with these businesses.
5.1 Judicial Precedents on AI & Trademarks

1) Louis Vuitton v Google France27

Louis Vuitton is one of the leading manufacturers of luxury goods in the world. They hold trademarks over the terms ‘LV’, ‘Louis Vuitton’ and ‘Vuitton’, all of which are well known. In 2003, Vuitton discovered that when anyone entered these trademarked terms in Google’s search box, the results included advertisements for sites selling copies of their products under the category of ‘sponsored links’. Google had allowed advertisers to select Vuitton’s trademarked words through its keyword suggestion tool and had hence enabled the sale of the counterfeit goods. Taking note of this, the Regional Court of Paris found Google guilty of trademark infringement. However, in an appeal before the Court of Cassation, this decision was overturned. The Court held that LV had the right to prevent those sellers from using identical keywords without the consent of the trademark owner. However, an internet service provider which did not play an ‘active role’ cannot be held liable for the same, unless it failed to act expeditiously in removing the concerned information. Google had only stored the keywords and arranged the display of ads on the basis of them, without ‘using the trademark’ itself. They had merely created the ‘technical conditions’ which enabled others to make use of such keywords. It was also recognized that customers are usually aware that they may be shown ads outside the scope of the original company and are capable of identifying the authentic website to purchase from.28 Thus, there was no ‘confusion’ or ‘use’ affecting the function of the sign, and search operators like Google cannot be held liable for trademark infringement merely because they allow advertisers to use a trademark as a keyword.

27 CJEU, March 23 2010.

2) L’Oréal v eBay International29

L’Oréal is a well-renowned cosmetics company and the owner of various national trademarks. The company brought an action for trademark infringement against eBay, its European subsidiaries, and individual sellers who were selling counterfeit goods resembling L’Oréal products on the platform. The action against eBay was prolonged, while L’Oréal either settled with or obtained default judgments against the other defendants. eBay argued that it had adequate filters in place to find counterfeit goods and also had a practice of suspending the accounts of sellers reported to be selling infringing products. It also relied on Article 14 of Directive 2000/31,30 under which an intermediary cannot be held liable unless it had knowledge of an infringing sale and did not take any action to correct it. The Court acknowledged the anti-counterfeiting measures in place, but added that eBay could provide additional filters to combat the issue. The Court’s final decision was beneficial for both brands and online marketing platforms.
It held that eBay would not be liable, since it merely played the role of an intermediary and did not play any active role or have knowledge or control over the information entered by sellers. This implies that in cases where the service provider was aware of the infringing activity and failed to take immediate action, or processed the data entered by sellers, it can be held liable. The ECJ further clarified that trademark proprietors can hold e-commerce sites liable not just for counterfeits, but also for the sale of product samples, unboxed products and grey-market products.

3) Coty v Amazon31

28 Trademark Law — Infringement Liability — European Court of Justice Holds That Search Engines Do Not Infringe Trademarks, 124 HARVARD LAW REVIEW 648, 650–655.
29 CJEU, July 12 2011.
30 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market.
31 Taylor Vinters LLP (Laura Rose & Louisa Dixon), Highest EU Court Pours Hot Water on Coty v Amazon Case, Lexology (2020),
Coty is a perfume distributor and proprietor of the trademark DAVIDOFF. They discovered that their perfume bottles were being sold on Amazon without their consent. The sales were made by an independent seller through the ‘Fulfilled by Amazon’ service, an arrangement under which Amazon helps sellers with the stocking and shipping of goods. Coty claimed that the service provider had played an ‘active role in infringing’ by offering its services, and also pointed to Article 9(3)(b) of the European Union Trade Mark Regulation, which refers to the right of the proprietor to prevent third parties from “offering, putting on the market, or stocking infringing products under the (infringing) trademark”. Amazon argued that it was merely handling the goods on behalf of the seller and had no knowledge that the goods were infringing in nature, since it had not received any takedown notice from Coty. The Federal Court of Germany referred to the Google and L’Oréal decisions and decided in favour of Amazon. However, we may infer that had Amazon played a more active role and assisted in the advertising or sales process, rather than merely storing the goods of the third party, its liability may have differed. An active role constitutes ‘use’ and hence infringement of a trademark.

4) Cosmetic Warriors Ltd and Lush Ltd v Amazon.co.uk Ltd and Amazon EU Sarl32

Cosmetic Warriors Ltd is the owner of the LUSH trademark. The company prefers not to sell any products through Amazon, as it disagrees with the latter on certain ethical grounds, such as taxation. Upon detecting certain uses of the word LUSH by the e-commerce platform, the company filed a case against Amazon over three uses:

1. Display of LUSH in a sponsored ad by Amazon – The Court held that this constituted infringement, as it would mislead customers into believing that Lush products are available on Amazon.

2. Use of LUSH as a keyword on Google which did not lead to any sponsored ad – The Court applied the reasoning of the Louis Vuitton case: consumers would know that competing brands can show up in sponsored search results despite their having entered the name of a particular brand.

3. Availability of LUSH as a search term on Amazon which led to different results – Amazon predicted LUSH upon the typing of ‘lu’, and the results were similar goods of other brands. This could confuse consumers, as the products had similar appearances; hence, this use was also considered an infringement. The automatic prediction also perpetuated the belief that LUSH products were available on Amazon, which harms the company’s reputation given its ethical concerns about Amazon.

https://www.lexology.com/library/detail.aspx?g=dca362ff-628b-4da0-9753-888ef440a1bf (last visited Mar 16, 2021).

32 [2014] EWHC 181 (Ch).
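The kind of type-ahead prediction at issue in the Lush case, where typing ‘lu’ surfaced similar goods, can be sketched as a simple prefix lookup over a catalogue. The catalogue below is invented, and real marketplace suggestion engines rank on far richer (and proprietary) signals.

```python
from bisect import bisect_left

def suggest(catalog, prefix, limit=5):
    """Return up to `limit` catalogue terms starting with `prefix` (case-insensitive)."""
    terms = sorted(t.lower() for t in catalog)
    prefix = prefix.lower()
    i = bisect_left(terms, prefix)          # first term >= prefix
    matches = []
    while i < len(terms) and terms[i].startswith(prefix):
        matches.append(terms[i])
        i += 1
        if len(matches) == limit:
            break
    return matches

# A hypothetical catalogue: a trademarked term sits beside ordinary goods.
catalog = ["Lush", "Luggage set", "Lamp", "Lunch box"]
print(suggest(catalog, "lu"))  # -> ['luggage set', 'lunch box', 'lush']
```

Even this naive version shows the legal tension: a neutral prefix match over a mixed catalogue necessarily displays third-party goods alongside (or instead of) the trademarked item the user had in mind.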
5) Ortlieb v Amazon33

This refers to a series of cases before the German Federal Court relating to Ortlieb Sportartikel GmbH, a company well known for selling outdoor gear. Ortlieb filed a suit against Amazon for sponsoring advertisements using the ORTLIEB trademark and redirecting internet users to a page on Amazon which contained products of ORTLIEB as well as those of other companies. This was clearly misleading, and consumers would have no way of knowing that, after entering a specific search term, they were being shown similar goods of other manufacturers as well. Ortlieb claimed trademark infringement under certain sections34 of the German Trade Mark Act, and Amazon was held liable for the same.

Conclusions

Artificial intelligence has the potential to drastically modify all aspects of our lives, although its impact in the short term may not be that significant. As has been noted by scholars like Roy Amara, “humans have a tendency to overestimate the effect of a technology in the short run and underestimate its effect in the long run.” So, while AI may not yet have the short- or medium-term impact on society which is awaited or feared by many, it is coming. One area where it has already had a significant impact is the way products are bought, which by definition influences trademark law. The retail process is currently undergoing a change from a “shopping-then-shipping” to a “shipping-then-shopping” model. In the latter model, products are shipped even before consumers demand them, which is possible due to the predictive nature of AI systems. It is clear that AI as a technology presents both opportunities and threats, and it is up to businesses to adapt and respond accordingly. They need to be cautious of the threats while also being open to the plethora of opportunities this technology may bring.
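The “shipping-then-shopping” model rests on predicting demand from purchase history. A toy sketch of such a replenishment forecast is given below; the data is invented, and real systems model far richer signals than average reorder gaps.

```python
from datetime import date, timedelta

def predict_next_order(order_dates):
    """Naive replenishment forecast: assume the next order arrives after the
    average gap between past orders."""
    gaps = [(b - a).days for a, b in zip(order_dates, order_dates[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return order_dates[-1] + timedelta(days=round(avg_gap))

# Hypothetical sugar purchases roughly every 30 days:
orders = [date(2021, 1, 1), date(2021, 1, 31), date(2021, 3, 2)]
print(predict_next_order(orders))  # -> 2021-04-01
```

A device that ships on such a forecast makes the purchasing decision before any brand comparison can occur, which is precisely why predictive ordering sidelines the trademark’s traditional role at the point of sale.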
As has been discussed in this paper, traditional and fundamental concepts of trademark law will have to be modified to fit the description of the ‘artificial consumer’ rather than that of the ‘average consumer.’ The existing legislation is inadequate to deal with the predicted changes. The basic principles underlying trademark law need to be expanded and modified to deal with the upcoming challenges or, at the very least, will have to be interpreted differently by the courts to reflect the new reality. The judiciary appears to be a step ahead of the legislature in this field and has been dealing with issues regarding the liability of internet service providers in the infringement of trademarks. Recent trends in litigation have shown that e-commerce platforms have played a key role in manipulating the situation and, perhaps unknowingly, in promoting the sale of counterfeit goods.

33 Case ZR 138/16 of the Federal Court of Germany, dated 15.02.2018.
34 Sec. 14(1), (2) no. 1, (5), (7) of the German Trade Mark Act (Markengesetz – MarkenG).
6 Can Artificial Intelligence save the remnants of Dying Cultures?

Niharika Ravi1

1 Research Intern (former), Indian Society of Artificial Intelligence and Law; research@isail.in

Abstract. The visual and performing arts have been intrinsic components of human culture since time immemorial, but the advent of renewed cultural practices, especially in the face of globalisation, has arguably challenged public reception of the arts. As humankind focussed keenly on honing skills to combat the new realities constructed by the technological revolution, it seemingly left behind a more nuanced pursuit of these arts. Centuries of music, dance, and art traditions face extinction at the hands of the same, yet an elegant dichotomy arises in this advent of technology: can the same technological revolution that endangers the pursuit of the arts revolutionise the process of conserving them? This paper observes that traditional and classical forms of visual arts, music, dance, and the primary object that binds them all, language, are in constant motion. It explores whether these arts ever go extinct or are survived by further evolved forms. Further, it attempts to identify whether artificial intelligence can create such an evolved form by preserving the dying aspects of these arts while simultaneously maintaining the ethical codes of traditional institutions. Notably, it also navigates the legal questions, in terms of intellectual property debates, concerning the transfer, storage, preservation, and dissemination of knowledge of these art forms. Considering that the research community is at a nascent stage in its dealings with artificial intelligence, and that the use of the same has always been met with reservations and caution, it is of essence to understand the ethical, cultural, and legal implications of potentially using this technology to protect the arts. The intellectual property rights in age-old forms and styles of art have always been ambiguous; hence, a significant ethical dilemma arises from the prospect of privatising the relatively free existence of these arts, even if only to save them from complete extinction.
In many European cities harbouring rich cultural heritage within their limits, AI is already being used as a tool to conserve culture and heritage and further disseminate the same among the masses. Projects in keeping with these aims have successfully sought funding from the European Union, which has arguably been one of the most proactive international government bodies in terms of creating frameworks to incorporate AI in daily life.
On this note, it is pertinent to delve into the India-focussed requirements of conserving dying art forms using AI. India, home to many thousands of dying languages and cultures, may benefit greatly from similar applications of AI technology. The intertwining of cultural heritage with modern technology may also prove to be the carrot on a stick for younger generations to renew their investment in local forms of art. With the expanding scope of digitising the arts, undoubtedly an effect of the COVID-19 pandemic, it is worth questioning how the traditional schools of art that have been forced to interact online would receive the idea of immortalising aspects of their crafts, and perhaps, in the same spirit, letting go of other aspects forever. It is pertinent here to ask the question: who makes the choice?

Introduction

In his theory of the end of art, American art critic Arthur Danto attempted to explain the contemporary condition of art by observing the changing status of art in society. In “The Death of Art,” he addresses a progressive model of art history wherein art moves forward due to the motivations of each successive generation. This motivation was identified as a desire to duplicate the world at the time of writing; however, this desire is thought to have been threatened by the introduction of motion pictures (Danto, 1984). To propose a new perspective, this paper, at the very outset, puts forth that while art may be perceived as endangered within and outside arts cultures, it has also historically been known to evolve. Present versions of the arts have often borrowed from predecessors, sometimes heavily, to make a unique identity. For instance, the present form of ballet took heavily from ballet de cour, an elaborate court dance spanning many hours that was performed exclusively in royal courts and bourgeois setups. The Indian dance form Bharatanatyam takes heavily from the erstwhile sadir, nautch, and dasi attam forms. If art could evolve to imbibe the social norms of society in the past, this paper proposes the question: would it as readily accept being immortalised and revolutionised by the tech culture of today? This paper takes a more nuanced approach to this question. It attempts to understand whether AI can be used in art preservation. Considering that AI refers to machines displaying intelligence without the human or animal qualities of conscience and emotionality, one ponders whether the interdisciplinary nature of AI can extend not only to AI learning to preserve art but to improving the process of teaching the arts in an advanced application of ed-tech. DataRobot CEO Jeremy Achin defined AI as a computer system able to perform tasks that ordinarily require human intelligence (DataRobot, 2018).
Performing and visual arts require extensive applications of human intelligence. That AI has made significant strides in this respect is evident from the Boston Dynamics robots that made news in early 2021, when a video of them dancing was circulated widely on the internet.35 Atlas, Spot, and Handle were made to dance when Boston Dynamics collaborated with choreographer Monica Thomas and a team of dancers. A dance ensemble was composed and a routine assembled, and then the human dance moves were adjusted so that the robots could perform them. New tools were developed, making the time frame for developing new dance moves shorter and shorter. The developers noted, in an interview, that the spinning turns inspired by ballet were particularly difficult to accomplish, but the end result, arguably, seems eerily similar to the actual dance form. However, the developers emphasised that robots are really good at doing something over and over again, hence lacking the practice that a human would need (Ackerman, 2021). The hundreds of hours it takes for an artist to master a skilful stroke of a brush, a particular note in a song, or a tricky leap in a dance step is rendered obsolete once the software is programmed and the hardware is compatible. To this effect, this paper addresses art on four grounds: visual art with a focus on painting and drawing, performing arts with a focus on music and dance, and language, which is arguably an art in itself.36 It argues that the present forms of these arts must be preserved, as they may be at a risk of extinction like no other in their respective histories. Preservation, here, is notably a two-fold process: documentation and dissemination. In this context, the paper identifies the ethical, cultural, and legal issues that may be encountered in the use of AI to preserve these arts in these fashions.

1.1 Need for Conservation and Preservation

Historically, art has been a testament to human progress. It has depicted the advancement of humans from painting figures on walls and beating drums for mere communication to making legendary pieces like the Mona Lisa in Paris and creating hundreds of thousands of rhythms and beats with a single instrument.
Advancement in technology in the 20th century has undoubtedly lent the responsibility to preserve these arts to professionals (Larson, 2017). In this process, the law has come to play a significant role in governing which pieces of art are worthy of being saved. This distinction is essential, as art, being a hallmark of the progress of humankind, is infinite, however, the ability to immortalise it, whether that be in through AI or alternate mediums, is definitely finite. (Larson, 2017). With paintings, “true conservation” is regarded as the preservation of paintings Find Atlas, Spot, and Handle dancing herehttps://www.youtube.com/watch?v=fn3KWM1kuAw. Upon understanding the process used to teach the robots to dance, one begins to notice where the human movements were substituted to suit to robots’ hardware. 36 “Language is the most massive and inclusive art we know, a mountainous and anonymous work of unconscious generations” said Edward Sapir, American anthropologist-linguist in Language: An Introduction to the Study of Speech (Sapir, 1921) 35



in a manner that would arrest the decay of material and delay the need for restoration for as long as possible (Larson, 2017). This paper, however, deals with the preservation of dying methods: perhaps the dying method of creating Madhubani art, the dying schools or gharanas of Indian classical music, or the dying practice of creating dome-shaped architecture.
1.2 Evaluating AI's Applications to Art: A Review of Literature
While AI has not been used as extensively to preserve art and heritage, it has been used, to some extent, to create art and music. Mantaras and Arcos, in 2002, observed that the main challenge in modelling the expressiveness of human-generated music in a machine was the grasp of a "performer's touch" that comes only with years of observation and imitation. To remedy this, their paper speaks of the SaxEx system, which uses "performance knowledge implicit in examples extracted from recordings" instead of attempting to make the knowledge of music explicit by following a set of rules (AI and Music: From Composition to Expressive Performance, 2002). An inspired answer to the challenge of modelling expressiveness, a refined version of SaxEx demands consideration in the coming years as AI broadens its horizons.
New Media Theory expert Lev Manovich defines cultural analytics as "the analysis of massive cultural data sets and flows using computational and visualization techniques." His work has led to the development of two new fields: social computing and digital humanities. Digital humanities scholars, writes Manovich, use computers to inspect historical artefacts created by musicians, artists, writers, and so on. Manovich's complaint is that these scholars are bound by the copyright laws of their countries and are hence shut out from studying the present.
(Manovich, 2017) One stunning application of digital humanities is the use of software like ARION and ORPHEUS, developed to reconstruct ancient Greek musical instruments with the best possible historical accuracy and in a user-friendly environment. Singing-voice synthesis to reproduce lost dialects sees the interdisciplinary application of AI, musical acoustics, music perception and cognition, and linguistics, among many others, in this project (Emulation of Ancient Greek Music Using Sound Synthesis and Historical Notation, 2008).
In India, the nascent digital art scene is already seeing the use of AI. The India Science Festival, 2020, held at the Indian Institute of Science Education and Research, Pune, boasted AI-generated art as one of its most sought-after exhibits. In 2018, the Nature Morte Gallery in New Delhi hosted the country's first AI art exhibition. Works of seven curated artists from seven countries were displayed for a span of eight months (Iyengar, 2018). In creating AI art, it is evident that the artist's role is limited to deciding what the aesthetic looks like and what the end result is, along with the number and thickness of strokes. The artist determines the style and the algorithm determines the strokes, according to Tom White of Victoria University of Wellington, New



Zealand (Iyengar, 2018).


1.3 Present Initiatives and Exercises in Digital Humanities
There are multiple exercises in digital humanities underway in institutions like Microsoft and the Alan Turing Institute. These organisations are committed to using AI to attempt to conserve our heritage. AI startup IVOW's "Culturally Sensitive Deep-Learning Model" is trained on images to suggest detailed captions, generated by natural language processing algorithms, that reflect the cultural elements of a photo (Ibaraki, 2019). Their women-in-history dataset and indigenous knowledge graph stand testament to their commitment to bringing the ancient art of storytelling to our devices by telling the stories of those that often go unheard. IVOW founder Davar Ardalan told NPR that this model can prove effective in diminishing bias in algorithmic identification and in training AI software to be more inclusive.37
The open-source Time Machine project, which aims to digitise volumes of information stored in museums and archives, intends to use AI to analyse this data and reconstruct 2000 years of European history with a "Large Scale Historical Simulator." The project has secured 1 million euros from the European Union, and several European cities, including Amsterdam, Paris, and Naples, have endeavoured to develop their own time machines. Further east, Intel and the China Foundation for Cultural Heritage Conservation recently teamed up to gather thousands of photos of the Great Wall of China and use AI to analyse the data and pinpoint exactly which parts of the wall need restoration (Ibaraki, 2019). The Alan Turing Institute's website focuses on questions such as how data science and AI can reveal information about a painting's creative process, and hence uncover its restoration history and inform a strategy for conservation and preservation, and how AI can be made available at galleries, archives, libraries, and museums to support their infrastructure, with a focus on the United Kingdom.
The UN declared 2019 the International Year of Indigenous Languages to promote awareness of the plight of disappearing languages. One language is said to die every 14 days (Rymer, 2012). AI has become instrumental in the fight to save these languages. A Facebook Messenger chatbot powered by IBM Watson, called Reobot, provides a platform to interact in te reo Māori, New Zealand's indigenous language. Opie, a low-cost robot, was developed to teach children the indigenous languages of Australia. Google's open-source AI platform TensorFlow facilitates AI models for indigenous languages as well. The First Peoples' Cultural Council in British Columbia works with local communities to archive linguistic data and produce teaching programs and apps through FirstVoices, a keyboard app that incorporates AI and machine learning in its knowledge management platform, Nuxeo, to enable users to type in over 100 indigenous languages (Ibaraki, 2018).
37 One can find IVOW's work here: https://www.ivow.ai/.
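The language-modelling idea behind such predictive-keyboard tools can be illustrated with a deliberately tiny sketch. Everything here is hypothetical: the word list is synthetic and merely stands in for a real corpus, and production systems (such as those built on TensorFlow) use far richer neural models than this character-bigram counter.

```python
# Toy illustration of the language-modelling idea behind predictive
# keyboards for low-resource languages: learn character bigrams from
# a (synthetic) word list, then suggest likely next characters.
from collections import defaultdict, Counter

def train_bigrams(words):
    """Count character-to-character transitions, with ^/$ as word boundaries."""
    counts = defaultdict(Counter)
    for w in words:
        w = "^" + w + "$"
        for a, b in zip(w, w[1:]):
            counts[a][b] += 1
    return counts

def suggest_next(counts, prefix, k=2):
    """Return the k most likely next characters after the last typed one."""
    last = prefix[-1] if prefix else "^"
    return [c for c, _ in counts[last].most_common(k)]

# Synthetic word list standing in for an indigenous-language corpus:
corpus = ["kia", "kai", "koru", "kapa", "mana", "maka"]
model = train_bigrams(corpus)
print(suggest_next(model, "k"))  # 'a' is the most frequent follower of 'k'
```

A real keyboard would score whole candidate words and handle diacritics; the design point is simply that the model is learned from example text rather than hand-written rules.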

Microsoft has invested $10 million over five years in its AI for Cultural Heritage Program, which seeks to support individuals and organisations, through collaborations, in using AI for the preservation and conservation of cultural heritage around the world.38 The project is helping preserve Inuktitut, an Inuit language from northern Canada, in a similar spirit as FirstVoices, Reobot, and Opie.39
Multiple ethical, cultural, and legal challenges may arise as these organisations continue to attempt to preserve art, and especially indigenous art, through AI. This paper attempts to identify the same in order to establish the need for a legal structure herein.
2. The Ethical and Cultural Dilemma in Incorporating AI in Art Conservation Projects
One notes that there has been nary an application of heritage conservation through AI in India, or in other countries in South and South-East Asia, and most of the high-profile projects are driven from centres of art and culture in Europe. However, the threat of dying traditions, culture, and heritage is equally daunting in the Indian subcontinent, if not more so. At this time, traditional forms of painting like Warli, Manjusha, Mithila, and Roghan, the art form of puppetry, Parsi and Toda embroidery, Dhokra, and Naga handicrafts are at significant risk of extinction. Traditional forms of folk music like Thumri, Tappa, Ghazal, and Qawwali, along with their Carnatic and Hindustani classical counterparts, are also endangered. Forms of dance like Domni, Ojha, and Tamasha, along with the eight hallmark classical dance styles of India, all face extinction today as well. Moreover, Reuters reported in 2013 that India had lost 220 languages in the span of the last fifty years (Lalmalsawma, 2013). Especially with music and dance, but also with the other forms of art, passing the tradition down through generations depends on passing down the languages in which these arts are practiced and taught.
38 Learn more about the impact of Microsoft's AI for Cultural Heritage here: https://www.microsoft.com/en-us/ai/ai-for-cultural-heritage.
39 Read more about Microsoft's efforts to use AI to save languages here: https://news.microsoft.com/en-xm/features/lost-in-translation-can-ai-keep-endangered-languages-from-disappearing/.
Considering the diverse range of art forms that are endangered in the Indian state, it is reasonable to assume that the heritage conservation practices that have begun employing AI in Europe, China, and elsewhere may soon find that the rich cultural heritage of India is worth conserving and preserving. Moreover, while these conservation projects are primarily focussed on conserving archival literature and architecture, applications in the Indian context would require an extended emphasis on music, dance, handicrafts, and the like. This chapter looks at two particular challenges that the use of AI in conserving these components of Indian heritage would face. The first is the challenge of respecting the historicity and keeping up with the demands of the traditions



attributed to these arts. The second is the fear of cultural appropriation as a result of international or westernised bodies storing, and potentially capitalising on, traditional Indian forms of art.
One of the primary challenges for organisations aspiring to collate data to preserve and propagate heritage in India is the test of tradition. Forms of music, dance, art, and language are ever-evolving. However, each form of art is often governed by a set of rules that change, perhaps, once in a few hundred or thousand years. For instance, the alphabet of a language or the vocabulary of a form of dance seldom changes. In order to keep these in place, a set of tangible rules is laid down and followed with utmost reverence in the formal teaching-learning process of many indigenous arts in India. It is foreseeable that artists may fear that the traditions which guard the sacred teaching process will be tainted in the preservation process. These traditions may range from teaching or practicing only during certain hours of the day to teaching some portions only in a particular place. They may also include a certain traditional method of teaching that cannot be replicated in an open-access internet lesson. Over time, these traditions become an intrinsic part of the art itself, and preserving them with the intention of further dissemination may become challenging.
Disadvantaged communities that do not have access to the resources to employ AI to store, analyse, and disseminate information relating to cultural heritage may look to advantaged communities. For instance, the subaltern classes have a rich painting and handicraft culture, and AI can be used to immortalise the methods used in creating these arts. Members of these classes may look to more economically privileged classes to employ the technology; however, cultural appropriation, and subsequent capitalisation at the expense of the original creators, is a significant risk.
This poses a need for strong laws protecting the rights of these communities and binding the AI-operating institutions to an ethical code. Lastly, the absence of AI-based conservation projects in the East suggests that Eastern countries will look West for aid in kick-starting these programmes, running a similar risk of exploitation at a macro level. Foreseeably, an international code of conduct may be required, and AI ethics must address the needs of the less advantaged in terms of the preservation of culture.
3. Legal Questions
Apart from the cultural and ethical questions that may arise in the pursuit of art preservation through AI in India, a number of legal questions may, of course, arise as well. Three of the most prominent areas are addressed herein. In 2018, Christie's auction house sold one "Portrait of Edmond Belamy" at $432,500. The painting was created by a machine-learning algorithm that had combed through a data set of 15,000 portraits and created a new one that could not be differentiated from the others, resulting in a blurry-faced fictional subject. "What you have is something more like conceptual art than traditional painting. There is a human in the loop, asking questions, and the machine is giving



answers," said the director of the lab at Rutgers where the painting was created (Art or Artifice?, 2018). The question raised here was: who is the artist of this painting? Is it the algorithm, the person who invented the code, the person who uses it, or those who painted the 15,000 paintings that created the foundation of the AI's art? (Art or Artifice?, 2018)
3.1 Intellectual Property Rights
When AI was used to examine almost 350 paintings and 150 gigabytes of data to painstakingly create a Rembrandt in 2016, an important question of law arose: to whom did the copyright of the final product belong? In a similar fashion, if AI assesses all the ragas or notations in Carnatic music and creates a new raga, or assesses thousands of dance pieces and choreographs a new movement, one would be forced to question whether this machine-generated art belongs to the machine itself. Copyright protection is given to an original work of authorship that is fixed in a tangible medium and has a minimal amount of creativity. It is noted that in most cases, where the creator of the AI software is human, copyright law regards this human as the originator or creator and attributes rights to them. However, with advancements in machine learning and with the development of AI's own product, this strategy may need rethinking (Artificial Intelligence Generated Works Under Copyright Law, 2020). On that note, the next question to ask is whether AI can be regarded as a legal entity. With Sophia being granted full legal citizenship of Saudi Arabia, and Shibuya Mirai being granted residency in Tokyo, it is relevant to ask whether these robots are equal to other citizens or residents in those countries, i.e., whether the same laws applicable to natural or artificial persons would apply to them. Pokhriyal and Gupta argue that AI is more intelligent than animals and more animate than idols and rivers, all of which have been granted the status of a legal entity.
Like corporations, AI is also capable of being represented by individuals. They hence argue that AI must be held capable of being given a legal identity. On the other hand, copyright laws are intended to protect the economic and moral rights of authors. AI, currently lacking economic motive, moral conscience, and civil rights, fails this essential test (Artificial Intelligence Generated Works Under Copyright Law, 2020). Notably, Section 2(d)(vi) of The Copyright Act, 1957 provides that the author, in relation to any computer-generated literary, dramatic, musical, or artistic work, is the person who causes the work to be created. Further, "computer" is defined in the Act as "any electronic or similar device having information processing capabilities" (1957). Directly applying these definitions to the challenges posed by the intersection of AI and art would attribute ownership to the person who causes the work to be created. However, the legislation does not contemplate that the computer itself may have human-like



intelligence and may contend as the author. Section 22 of the same Act speaks of the term of copyright in published literary, dramatic, musical, and artistic works. The law states that the copyright shall subsist during the lifetime of the author and until sixty years from the beginning of the calendar year following the author's death (1957), a testament to the fact that the law is meant for mortal beings and, moreover, a challenge to AI's claim over art being created in the present time. Hence, with regard to intellectual property rights and copyright, it is evident that perhaps the developer is the closest human link to the AI and hence must hold the rights to the AI's creations. On the other hand, a reading of Section 17 of the Indian Copyright Act makes it evident that the developer's employer shall be made the first owner of the work if the AI is developed under a contract of employment. In this case, not only is the developer an employee, the software itself can be regarded as an employee (Artificial Intelligence Generated Works Under Copyright Law, 2020). Of course, the death of the developer or the employing body itself would fail to faze the machine, which would live on in its immortality, perhaps creating for generations after its original "employers'" passing. New laws in this domain, and all others, are a necessity as rapid developments occur in the field.
3.2 Which Art is Worth Saving?
Matthew Arnold described culture as "the best that has been thought and said in the world" (Arnold, 1960). However, with the advent of culture preservation through AI, which may immortalise said culture, the responsibility of deciding which pieces of art, or what components of culture, are worth preserving falls to the law.
In the past, the Civil Amenities Act, 1967 in Great Britain, the Commission des Secteurs Sauvegardés of 1962 in France, and the Historic American Buildings Survey in America have governed or aided decisions pertaining to which monuments and architectural structures are worthy of being preserved (Larson, 2017). Arnold's definition of culture is widely regarded in the field of humanities; however, it begs the question: what is the best? (Manovich, 2017) Does conserving the "best" mean that what has been thought and said by historically excluded groups like women, the queer community, non-whites, oppressed castes, and other subaltern and marginalised groups deserves to be lost in obscure history textbooks while the tyrannical oppressors' cultural heritage is immortalised? While there is a need for legislation to govern the ownership rights over the things that are stored, inspected, and created by AI, there is also a need for legislation to govern what things are stored and inspected by AI. Algorithmic bias is evidence of the fact that AI is but a mirror of the tangible world, imbibing the values of exclusion, discrimination, and bias practiced by its creators. Hence, it is imperative that a high code of ethical and moral conduct be laid down in the letter of the law to govern the ethical transmission, storage, and dissemination



of cultural data in future preservation practices.
Concluding Notes
AI can aid in saving the remnants of dying cultures in two distinct ways: by aiding in the analysis and conservation of data relating to these cultures, and by aiding in the further teaching and dissemination of these cultures to ensure that they persist as a part of tangible human reality. This paper has discussed the former in considerable detail; however, an extension of the present ed-tech machinery that looks to machine learning to optimise education can be applied to effectively teaching the arts as well. Of course, the primary teacher-student human bond cannot be replaced by the machine. The latter shall only play the part of an assistant to the teacher, analysing the individual needs of the learner to optimise the propagation of the knowledge of the dying arts. There is much to be studied in the interdisciplinary field of AI and art, and this paper has touched upon what is but the tip of the iceberg. However, laws dealing with AI and art must be sensitive to the needs of all the stakeholders involved in the creation, storage, analysis, and further re-creation or artificial creation of art. The fact that these are uncharted waters for policy makers makes it all the more imperative that, of these stakeholders, the least advantaged benefit the most from these initiatives, especially when these arts may be the source of their livelihoods.
References

1. 1957. The Copyright Act. s.l.: The Government of India, 1957.
2. Ackerman, Evan. 2021. How Boston Dynamics Taught its Robots to Dance. s.l.: IEEE Spectrum, 2021. https://spectrum.ieee.org/automaton/robotics/humanoids/how-boston-dynamics-taught-its-robots-to-dance.
3. Mantaras, Ramon Lopez de and Arcos, Josep Lluis. 2002. AI and Music: From Composition to Expressive Performance. AI Magazine, Vol. 23, No. 3, 2002.
4. Pokhriyal, Ayush and Gupta, Vasu. 2020. Artificial Intelligence Generated Works Under Copyright Law. NLUJ Law Review, Vol. 6, No. 2, 2020.
5. Arnold, Matthew. 1960. Culture and Anarchy. Cambridge: Cambridge University Press, 1960.
6. TG. 2018. Art or Artifice? American Society for Engineering Education (ASEE Prism), Vol. 28, No. 4, 2018.
7. Danto, Arthur. 1984. The End of Art. In: Arthur Danto et al., The Death of Art. New York: New Haven Publications, 1984.
8. DataRobot. 2018. DataRobot AI Experience: Keynote from CEO Jeremy Achin. [YouTube] Japan: DataRobot, 2018.
9. Politis, Dionysios, et al. 2008. Emulation of Ancient Greek Music Using Sound Synthesis and Historical Notation. Computer Music Journal, The MIT Press, Vol. 32, No. 4, 2008.
10. Ibaraki, Stephen. 2019. Artificial Intelligence For Good: Preserving Our Cultural Heritage. s.l.: Forbes, 2019.
11. Ibaraki, Stephen. 2018. Turning To AI To Save Endangered Languages. Forbes.com.



[Online] Forbes, 23 November 2018. [Cited: 7 February 2021.] https://www.forbes.com/sites/cognitiveworld/2018/11/23/turning-to-ai-to-save-endangered-languages/?sh=66d9c02d6f45.
12. Iyengar, Radhika. 2018. Inside India's First AI Art Show. New Delhi: Live Mint, 2018.
13. Lalmalsawma, David. 2013. India speaks 780 languages, 220 lost in last 50 years: survey. [Online] Reuters, 7 September 2013. [Cited: 7 February 2021.] http://blogs.reuters.com/india/2013/09/07/india-speaks-780-languages-220-lost-in-last-50-years-survey/.
14. Larson, J.H. et al. 2017. Art Conservation and Restoration. [Internet] s.l.: Encyclopedia Britannica, 2017.
15. Manovich, Lev. 2017. Cultural Analytics, Social Computing, and Digital Humanities. In: Mirko Tobias Schäfer and Karin van Es (eds.), The Datafied Society. Amsterdam: Amsterdam University Press, 2017.
16. Rymer, Russ. 2012. Vanishing Voices. [Online] National Geographic, July 2012. [Cited: 7 February 2021.] https://www.nationalgeographic.com/magazine/2012/07/vanishing-languages/.
17. Sapir, Edward. 1921. Language: An Introduction to the Study of Speech. New York: Harcourt, Brace, 1921.



7 Implementation of Artificial Intelligence in Banking in India: A Critical Review
Mohit Agarwal1
1 Research Intern (former), Indian Society of Artificial Intelligence and Law; research@isail.in

Abstract. This is a discussion paper submitted under the former Indian Strategy on AI and Law Programme.

Introduction
With the advent of a digital revolution around the globe, India faces a defining moment. With its digital sector estimated to double its output as early as 2025, digitization is expected to foster widespread economic growth and employment through incremental value addition across a variety of sectors, including education, logistics, manufacturing, and healthcare. In the last decade, India has witnessed a wave of technological disruptions facilitated by its advanced IT sector and the demographic potential of the country. This has made India the world's second-largest digital ecosystem, with over 700 million internet users.
Although the concept of AI has been around for centuries, it was not until the 1950s that its true possibility was explored. It was the British polymath Alan Turing who suggested that if humans could solve problems and make decisions by using available information and reason, then machines could do it too. Artificial intelligence (AI), also called "machine intelligence," is intelligence exhibited by machines, in contrast to the natural intelligence displayed by people. The term is often used to describe machines that mimic functions humans associate with the human mind, such as "learning" and "problem solving." Artificial intelligence is the ability of a machine or a computer to imitate something that is natural, in terms of acquiring and applying knowledge and skills: when a machine mimics a human mind by thinking for itself, it is known as artificial intelligence.
AI is fast evolving as the go-to technology for companies across the world to personalize experiences for individuals. The technology itself is getting better and smarter day by day, allowing more and newer industries to adopt AI for various applications. From Siri to self-driving cars, AI is progressing at a rapid



pace. AI in finance is about more than chatbots. Banks have already been offering a wide variety of products and services integrated with technology and automation, the most familiar being the ATMs all around us. Now, moving to the next level in the present industry era, the banking sector is all set to amplify its strategy implementation by leveraging the latest digital technologies so that its customers may experience swift and secure processing of transactions. The rudimentary applications of AI include smarter chatbots for customer service, personalized services for individuals, and even AI robots for self-service at banks. Beyond these basic applications, banks can implement the technology to bring more efficiency to their back office and even reduce fraud and security risks. Banks use algorithms to generate accurate results, which in turn help in enhancing customer service and generating better sales performance to deliver profits. AI includes machine learning and deep learning, which help to reduce errors caused by emotional and psychological factors. This discussion paper will deal with the impact of AI on the banking sector and analyse the current scenario as well as the challenges faced by it. In the coming sections of the paper, we will look at the Indian and global scenarios and a few case studies, followed by a conclusion.
Literature review
Christian Catalini, Chris Foster and Ramana Nanda (2018), in their work 'Machine Intelligence vs. Human Judgment in New Venture Finance',40 study how machine learning models trained to mimic human evaluators perform relative to models trained purely to maximize financial success. Their findings have important implications for the selection and financing of high-potential ideas, and more broadly for how AI can help humans screen and evaluate information in an era of increasing 'information overload'.
Ryoji Kashiwagi (2015), in 'Utilisation of Artificial Intelligence in Finance',41 observes that artificial intelligence is presently entering another boom stage, the third in its history, in the wake of a technical advancement known as deep learning. AI is being used in various forms even in the financial sector, and financial institutions ought to use AI more effectively through such methods as open innovation.
Dr. K. Suresh Kumar, Aishwaryalakshmi. S and Akalya. A (2020), in their paper 'Impact and Challenges of Artificial Intelligence in Banking',42 discuss how the banking sector is becoming one of the first adopters of artificial intelligence.
40 Christian Catalini, Chris Foster, and Ramana Nanda, "Machine intelligence vs. human judgement in new venture finance." Mimeo (2018).
41 Ryoji Kashiwagi, "Utilisation of Artificial Intelligence in Finance" Vol. 227 Lakyara (Nomura Research Institute Ltd, 2015).
42 Dr. K. Suresh Kumar, Aishwaryalakshmi. S, Akalya. A, "Impact and Challenges of Artificial Intelligence in Banking" Vol. 10(2) Journal of Information and Computational Science (2020).



Artificial Intelligence is stated to be intelligence exhibited by machines. Financial transactions of banks are analysed for learning, problem solving, and decision making with artificial intelligence, using big data and advanced analytics coupled with machine learning algorithms. They study the impact of Artificial Intelligence (AI) on the banking sector in India and the challenges faced by the banking sector in implementing it. They conclude that AI is necessary for the banking sector, given the government's efforts towards financial inclusion and its push to make India a digital economy, which could happen only with the widespread use of AI by the banking sector in India. AI, they argue, is going to be the major game changer in the banking sector.
Dr. C. Vijai, in his paper 'Artificial Intelligence in Indian Banking Sector: Challenges and Opportunities',43 discussed how artificial intelligence is used in the Indian banking sector, its benefits, the challenges facing AI in India, the development that AI offers to FinTech, and the different ways in which it can improve the operations of the Indian banking sector. The paper studied, using secondary data, the areas where artificial intelligence is being used by banks and the application of AI in the banking sector. It found that banks are increasingly looking at emerging technologies such as blockchain and analytics to create an active defence mechanism against cybercrime.
AI in the Banking Sector – Indian Scenario
The recent major events like demonetization and government-sponsored initiatives for developing a digital India have not only encouraged India's economy to become cashless, but have also brought a massive amount of data into banks, demanding quick, accurate, and consistent updating and maintenance of records.
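The kind of machine-learning-driven transaction analysis the literature above describes can be sketched in miniature. This is an illustrative assumption, not any bank's actual system: a real deployment would use many features and a trained model rather than a single statistical rule.

```python
# Toy sketch of transaction screening: flag transactions whose amount
# deviates sharply from a customer's history. Purely illustrative of
# the "analyse transactions with data" idea, not a production design.
from statistics import mean, stdev

def flag_anomalies(history, candidates, threshold=3.0):
    """Return candidate amounts more than `threshold` standard
    deviations away from the mean of past transaction amounts."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in candidates if abs(x - mu) > threshold * sigma]

# A hypothetical customer who usually makes small payments:
past = [120, 95, 130, 110, 105, 98, 125, 115]
incoming = [108, 5000, 90]
print(flag_anomalies(past, incoming))  # only the 5000 transfer is flagged
```

The design point is simply that the decision is driven by the customer's own data rather than a fixed rulebook; richer systems replace the z-score with learned models over many features.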
43 Dr. C. Vijai, "Artificial Intelligence in Indian Banking Sector: Challenges and Opportunities" Vol. 7(5) International Journal of Advanced Research (2019).
The banking sector made computers an integral part of its operations long ago, and since the 1990s automation has been a key pillar of modern banking, e.g., money withdrawal, transfer of funds, ordering cheque books, etc. Now, owing to huge changes in the economy, increased work volumes, major shifts in consumer preferences and customer objectives, a growing population of youngsters, new competitors, regulatory requirements, and the corresponding need for robust access management and a secure banking environment for transactions, the banking sector has started leveraging AI to digitize tedious manual tasks, shape the future of the economy, shorten strategy cycles, and implement its strategies successfully, thus transforming traditional branch banking into mobile/online banking, pioneered by the private-sector banks of India. This initiative is adequately supported by technological improvements in computing and storage, mobile devices, and the widespread usage of social media. 'Digital India' focuses on transforming India into a digitally empowered and



knowledge economy. The recent developments in cashless commerce in India show that there is an urgent need for digital payments in India. The balanced approach followed by the Indian central bank, the Reserve Bank of India, is another major factor in any new technology adoption in the Indian banking sector. In the last few years, especially during the governorships of Raghuram Rajan and his successor Urjit Patel, the RBI has taken a cautious but pragmatic view of embracing new technologies, often forcing technology adoption on banks through regulation wherever it has seen scope to enhance customer experience and efficiency using a particular technology. The RBI's proactive push for new technology adoption has not been restricted to creating policy frameworks; it has used a mix of regulation and evangelism, and has even worked with the industry to make things easier and more effective.
The Ministry of Commerce & Industry set up the AI Task Force to carve the path forward for the use of AI in the country; the Task Force includes members from the private sector, including banking and finance. In March 2018, the Task Force released its report, which identified 10 key domains where AI could play a crucial role in India's socio-economic development. The report says that if the banking and financial sector leveraged AI, it would help small and medium enterprises, apart from enabling better risk assessment. The Ministry's report on leveraging AI for identifying National Missions in key sectors identifies finance as a sector where AI can be leveraged efficiently. Further, it identifies fraud detection and the use of predictive analysis for identifying potential Non-Performing Assets (NPAs) and bad loans as examples of the use of AI in the financial sector.
The Indian banking industry has been evolving from people-driven to machine-controlled in the past few years. A passbook-printing kiosk is an automatic kiosk which enables customers to print their passbooks.
Indian banks such as SBI and Bank of Baroda have rolled out this facility in a big way, installing self-service passbook kiosks at which customers can print passbooks on their own. Some of the AI applications in use at leading commercial banks in India, such as State Bank of India, HDFC, ICICI and Axis, are:

• State Bank of India (SBI): SBI, the largest public-sector bank with 420 million customers, has embarked on using AI by launching "Code for Bank", focusing on technologies such as predictive analytics, fintech/blockchain, digital payments, IoT, AI, machine learning, bots and robotic process automation. SBI has also launched SIA, an AI-powered chat assistant that addresses customer enquiries instantly and helps them with everyday banking tasks just like a bank representative.

• HDFC Bank: HDFC Bank has developed an AI-based chatbot, "Eva" (short for Electronic Virtual Assistant). Eva can assimilate knowledge from thousands of sources and provide simple answers in less than 0.4 seconds. By using Eva, customers can get information on the bank's products and services instantaneously, removing the need to search, browse or call. HDFC is also experimenting with in-store robotic



applications, such as its IRA ("Intelligent Robotic Assistant") robot.

• ICICI Bank: ICICI Bank, India's second-largest private-sector bank, has deployed software robotics, a kind of software generally focused on automating office work. The bank is the first in the country to deploy the technology, which emulates human actions to automate repetitive, high-volume and time-consuming business tasks, and which has enabled the bank's employees to focus more on value-added and customer-related functions. ICICI Bank has also launched an AI-based chatbot named iPal, which has interacted with 3.1 million customers and answered about 6 million queries with a 90 percent accuracy rate. The bank is also considering integrating iPal with existing voice assistants such as Cortana, Siri and Google Assistant.

• Axis Bank: Axis Bank, India's third-largest private-sector bank, launched an AI- and NLP (Natural Language Processing)-enabled app, Conversational Banking, to help consumers with financial and non-financial transactions, answer FAQs and get in touch with the bank for loans and other products. Currently available on Facebook and the Axis Bank website, it will soon be extended to mobile banking channels as well.

In 2018, Punjab National Bank announced its plan to implement AI in account reconciliation as well as to use analytics to improve its audit systems. The move came after the infamous fraud of approximately INR 20,000 crore, carried out by Nirav Modi and Mehul Choksi, which almost paralysed the bank's operations for a short time. In the present era of significant transformation, businesses are relying heavily on interconnectivity, automation, machine learning and real-time data processing to fuse physical production and services with digital technologies.
This is an ideal stage for integrating digital technologies such as AI with banking operations as well, which offers huge potential for banks to harvest profits and gives their customers reduced reaction times (down to milliseconds). It is extremely important for banks to remain competitive and proactive in the industry environment. They have to be dynamic, constantly taking and aligning decisions and accelerating changes in business processes, with the objective of optimising profitability as well as service delivery. The creation of the National Payments Corporation of India (NPCI), which has significantly brought down the cost of electronic transactions, is a case in point. The regulator also has an academic and research unit, the Institute for Development and Research in Banking Technology (IDRBT), which keeps studying the opportunities and challenges in new technology areas. An example of successful implementation of data-analysis techniques in the banking industry is the FICO Falcon fraud assessment system, which began as a neural-network-based system and has evolved alongside the sophisticated deep-learning-based artificial intelligence systems deployed today.



Mobiles are becoming smarter globally. Millions of people depend heavily on mobile banking, which means that AI-powered banking mobile apps strongly attract them. Consumers have moved to mobile banking effortlessly, and mobile apps can readily meet clients' desires. There are intelligent apps that track the user's behaviour and give customised tips and insights on savings and expenses. Nowadays every bank offers mobile and text banking services. With mobile banking it has become more convenient to carry out daily transactions such as money transfers and payments. With the advent of artificial intelligence in mobile banking, consumers can plan their finances better, get smart financial advice, and make transactions more efficiently and quickly.

Although most banks are still in the early stages of AI adoption, immediate applications involve achieving productivity gains and developing proactive compliance and risk-management systems. The magnitude of potential benefits, however, remains largely untapped. AI integration in the workplace can deliver cost and efficiency results, particularly for customer service and back-office operations in banking. Besides, customised fraud-detection, risk-management and compliance solutions can transform the scope of efficiency for banks.

Application of AI

Banks are using AI applications to recommend, forecast and execute tailored financial advice for customers, and also to gain quick information on financial strategies, loan rates and future market progress. AI helps banks provide personalised and more efficient services, increase revenue, make faster decisions and maintain good customer relationships. The rapid rise of digital wallets and UPI has not only made transactions hassle-free but also revolutionised the traditional approach to banking.
Customers can transfer any amount of money with a tap on their phone, without the pain of filling in forms or visiting the nearest branch to avail of the facility. AI's horizons are not limited to the primary functions of banks; they extend to security and fraud detection. Since the early 2010s, major banks have used anomaly detection, an AI technique for identifying deviations from a norm, to automate fraud-detection, cybersecurity and anti-money-laundering processes. Teradata, a firm selling fraud-detection solutions to banks, claims its machine-learning platform can enhance banking fraud detection by helping its data-analytics software recognise potential fraud cases while ignoring acceptable deviations from the norm. In other cases, these deviations may be flagged and end up as false positives that give the system feedback from which to "learn". According to the data on its website, Danske Bank's application of AI reduced false positives by 60% and increased detection of real fraud by 50%; it also helped refocus time and resources toward actual cases of fraud and toward identifying new fraud methods. Machine learning increases



understanding by showing which factors most affect specific outcomes: a correlation matrix helps discard correlated variables, and feature-selection methods such as stepwise regression filter out irrelevant predictors.

Credit scoring is an integral aspect of today's financial world. Lenders use credit scoring to help decide whether to extend or deny credit, and an error of judgment or of data entry can lead to irreversible damage. A big advantage of AI-based credit-scoring systems is that they are capable of digging out aspects of, and information about, an entity that looked meritless to conventional credit-scoring systems. This new software allows financial institutions to unearth details about potential customers from the web and then use predictive analysis to decide their creditworthiness. Data mining works alongside machine learning in specifying which indicators to look for as the decisive factors differentiating a responsible client from a risky one. AI can analyse all data sources together to generate a coherent decision. In fact, banks today regard creditworthiness assessment as one of their everyday applications of AI.

Impact of AI

There are many different ways of examining how AI will affect a bank. In addition to business value drivers, another perspective is to examine the functions in the bank that will be affected. AI has the potential to improve all parts of the bank, from front-end, customer-facing areas, to the middle office, through to the back office. While some argue that these terms are becoming obsolete, they are still a useful construct for considering how AI can create business value. Direct customer interactions can be addressed by AI directly, most prominently with chatbots or virtual agents, or by enabling employees to do their jobs better (that is, faster, more accurately or more efficiently), or by augmenting their capabilities.
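As a concrete illustration of the correlation-matrix filtering mentioned earlier, the sketch below greedily drops any predictor that is highly correlated with one already kept. It is a minimal, self-contained example; the toy figures and the 0.9 cut-off are invented for the demonstration and are not drawn from any bank's pipeline.

```python
# Illustrative sketch: greedy correlation filtering before credit scoring.
# The data and the 0.9 threshold are made up for the example.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def drop_correlated(features, threshold=0.9):
    """features: mapping of column name -> list of values.
    Keep a column only if it is not highly correlated with any kept column."""
    kept = []
    for name in features:
        if all(abs(pearson(features[name], features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

# 'limit' is almost a linear copy of 'income', so it is filtered out.
cols = {
    "income": [30, 45, 60, 52, 75, 41],
    "limit":  [31, 44, 61, 53, 74, 42],
    "age":    [25, 40, 31, 52, 47, 29],
}
print(drop_correlated(cols))  # ['income', 'age']
```

Stepwise regression, also mentioned in the text, would go one step further and add or remove predictors based on how much each one improves the fitted model.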
Another area of customer interaction is next-best offers, financial advice or nudges. And finally, AI may simply improve the employee experience. Customer-facing AI can be either one-way or two-way, where an ongoing set of interactions takes place, initiated by either the customer or the bank. The benefits to customers include improved advice, better offers and saved time. While the benefits are tangible, there are risks inherent in exposing AI directly to customers. If the implementation is flawed, the mistakes may earn the bank or credit union public ridicule, or may poison the well of customer perception such that customers who had a bad experience will not willingly interact again. The functions of the middle office include employees supporting other employees, indirectly supporting customers, or conducting compliance activities. Some salient examples are report generation, underwriting and credit decisioning, and risk and compliance monitoring. While middle-office activities are generally an effectiveness play, efficiency has a role too, as AI technology



helps process more work at lower cost. Identifying exceptions is one example of AI helping employees become both better and faster at their jobs. The risks of middle-office AI are relatively low from a consumer standpoint as long as outcomes (like false-positive rates) are not degraded. For all of these AI activities, human supervision is critical: internal reports generated via NLG, for example, should only be considered first drafts; the responsible analyst should sanity-check them, revise them for tone and voice, and ensure that she knows the substance of the report. Processing and reconciliation, typical back-office functions, can use AI to detect anomalies and exceptions. Direct benefits to customers are hard to envision in a back-office AI situation, but the risks are not as high either. A great deal of threat detection and risk mitigation will also take place in the back office. The biggest risks in back-office implementations lie in over-reliance on AI and in the cost of putting initiatives in place.

Challenges faced in adopting AI by banks

Although the banking sector in India is now increasingly adopting AI, it still faces numerous challenges. The unavailability of professionals with the requisite data-science skills means that only a small number of good data scientists are available in the country to work on AI. In addition to this scarcity of trained human resources, the existing workforce in banks is not familiar with the latest tools and applications. The financial services industry needs to work with Indian universities to develop skilled data scientists, and to develop in-house training programmes that train employees in the successful implementation of AI technologies for banking functions. Universities in various countries, including the US and UK, are beginning to adapt to the changes that AI is bringing about in the finance sector by offering undergraduate and master's programmes in fintech.
The lack of harmonisation in enforcement approaches across different countries and regulators creates challenges for financial firms employing AI solutions. Differing enforcement approaches make it hard for firms to adopt effective global standards and to quantify the risk of rolling out AI innovations internationally. AI systems also require huge amounts of training data as inputs. Consumer data is continuously collected by tracking online and offline consumer behaviour, stored, and merged with other data sources to generate big data sets and extract further information about consumers through profiling; often, loopholes and unprotected servers result in unauthorised access to this data. Apart from these challenges, banks face asymmetry and irregularity in regulations and low customer confidence. The industry must be diligent about the practices other countries have experimented with and must learn from their mistakes; it is pertinent to observe and learn as well as to perform one's own research and development.

Case Study – Bank of Baroda

The Analytics Centre of Excellence (ACE) has built a petabyte-scale Big Data



Lake (BDL) platform, which can process large volumes of structured and unstructured data. Analytics and ML have been at the core of the bank's revenue programs over the past two years. The bank's cross-selling and upselling opportunities in Retail, MSME, Liability and Wealth Management are driven by a significant number of ML and predictive models that deliver cross-channel go-to-market strategies. Driving a culture of data-driven decision-making is one of the key tenets of the ACE program; self-service campaign-tracking dashboards provide near-real-time updates on different campaigns. The bank has taken an ecosystem-based view for large corporate departments: the internal and external data available for the bank's large corporate customers enables the relationship-management team not only to tap into direct banking opportunities with these clients, but also to provide banking services to their ecosystem of vendors, dealers and employees.

To support the collection department in prioritizing its efforts, more than five predictive collection models have been built to indicate high-risk cases. These models are augmenting the bank's collection efforts, bringing down overall collection cost, and lowering NPA slippages. To ensure a bank-wide drive in managing delinquent customers, a self-service NPA and delinquency dashboard has been deployed for all branches, allowing them to take corrective action at the right time and thus bringing down costs related to delinquent assets. The bank has recognized that AI is a long journey with a few successful strides so far. Several initiatives are planned for the coming years, including multiple new use-cases in complex areas of digital enablement of stakeholders; hyper-personalization of offers and activities; automation of decision-making; uptake of digital channels; and the continued integration of internal and external data sources to power these.
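As a purely illustrative sketch of the shape such a predictive collection model might take, the snippet below scores accounts with a logistic function and ranks the high-risk ones first. The feature names, weights and threshold are invented for the example and are not Bank of Baroda's actual model parameters.

```python
import math

# Hypothetical account features with hand-set, illustrative weights.
WEIGHTS = {"days_past_due": 0.08, "utilisation": 2.0, "missed_payments": 0.9}
BIAS = -4.0

def delinquency_score(account):
    """Logistic score in (0, 1); higher means higher collection priority."""
    z = BIAS + sum(w * account[k] for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def prioritise(accounts, threshold=0.5):
    """Return the ids of accounts flagged high-risk, highest score first."""
    scored = sorted(((delinquency_score(a), a["id"]) for a in accounts), reverse=True)
    return [aid for score, aid in scored if score >= threshold]

accounts = [
    {"id": "A1", "days_past_due": 0,  "utilisation": 0.3, "missed_payments": 0},
    {"id": "A2", "days_past_due": 45, "utilisation": 0.9, "missed_payments": 3},
    {"id": "A3", "days_past_due": 10, "utilisation": 0.7, "missed_payments": 1},
]
print(prioritise(accounts))  # ['A2']
```

A production model would, of course, learn its weights from historical repayment data rather than have them set by hand.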
Conclusions

A digital boom is certainly taking place across all segments of industry, especially banking, and particularly after demonetization. Traditional banking has evolved, and more and more banks are adopting new technologies such as AI, cloud and blockchain to cut operating expenses and improve efficiency. Though adoption is still in its nascent stage, banks are at the cusp of an artificial intelligence revolution. Because AI is such a promising technology, the researchers were able to gather rich literature from various data sources, tracking and analysing its evolution, business opportunities and applications, and thoroughly analysed the contributions of various scholars in connection with the research objectives described above, ultimately reaching conclusions that identify the application of AI with specific reference to India's banking sector. With continued development in the field, AI is going to be stronger and smarter in the future, which will help every customer have a secure banking experience. AI will provide the foundation for increased product and service



innovation. Further, AI has the potential to transform customer experiences and establish entirely new business models in banking. To achieve better results, there needs to be collaboration between humans and machines, which will require training and a reassessment of the future of work in banking. Mass customization, too, is a key to unlocking significant opportunities in the future, and can be tapped only through technologies like AI and blockchain. Banks are harnessing the power of AI to deliver new customer experiences through various solutions and are setting new standards for the Indian banking ecosystem, charting a new wave by embracing tech intensity.

AI technology transforms data into a digital format. It helps enhance the customer experience; it saves time both for the customer and for the bank; it reduces human error; it helps build strong, loyal customer relationships; it helps manage large cash inflows and outflows; and it enables cashless transactions from any place and at any time. Improvement and development in the AI industry will increase productivity at reduced cost, and managers across industries will have to raise their game on skill-set upgradation. There is no doubt that the recent push towards digitalization is rapidly influencing traditional banking models.

To conclude, Artificial Intelligence is gaining popularity day by day, and banks are exploring and implementing this technology to transform the way customers are assisted. The future of Artificial Intelligence in the banking sector is very bright: with the introduction of AI, it becomes even easier for a customer to transact from any place and at any time, without waiting in long queues at the bank. Hence, the aim of Artificial Intelligence is to provide personalized, high-quality customer satisfaction along with efficient and time-saving services.



8

Artificial Intelligence in Indian Healthcare: Possibilities and Roadblocks

Karan Ahluwalia¹

¹Contributing Researcher, Indian Society of Artificial Intelligence and Law
research@isail.in

Abstract. The use and importance of computers in general, and sorting algorithms in particular, in healthcare and allied fields cannot be overstated. In environments and circumstances that require accuracy of diagnosis and expedient decision-making, computational devices are invaluable tools in the hands of healthcare professionals. Initially, computers served passive bedside roles such as monitoring patients' vital statistics; they were therefore intended to relieve healthcare professionals of mundane tasks and enable them to direct their efforts towards aspects of healthcare that required the application of their intelligence and skill, such as diagnosis, patient counselling, surgery, medico-legal matters, knowledge sharing and training. Computers were limited by the fact that they could only perform tasks they were specifically programmed to perform. With the development of Artificial Intelligence, particularly Machine Learning, more and more of these limitations began to disappear, and computers are assuming increasingly central roles in healthcare. Artificial Intelligence combines the sheer computational and analytical prowess of computers with the fundamentally human ability to learn and think; this creation of human-like intelligence enables Artificial Intelligence to perform tasks that, until recently, could be done only by flesh-and-blood men and women. There exist innumerable avenues in healthcare and allied fields for the deployment of Artificial Intelligence. However, various ethical, technical and legal concerns remain unaddressed in this field and, in turn, inhibit the development of Artificial Intelligence in healthcare to its maximum potential. This paper intends to give the reader a bird's-eye view of the field of "Artificial Intelligence in Healthcare" in light of technological developments, concerns regarding such developments, limitations of Artificial Intelligence, applications in healthcare, and policy considerations.

Introduction

"Artificial Intelligence" refers to a broad field of advanced technologies that aim at "the artificial creation of human-like intelligence that can learn, reason, plan, perceive or process natural language" (Internet Society, 2017). It consists of a



variety of approaches; machine learning and deep learning are two such approaches around which developments in healthcare are currently taking place. These approaches are essentially means of achieving human-like intelligence in machines; they are "ways" in which machines can be "taught" to be artificially intelligent. Both use mathematical procedures to derive and recognise patterns and relationships in data-sets so as to discover and generalise the "rules" that govern a given data-set. Once these rules are recognised, they can be applied to other data-sets of a similar nature to see whether they hold in the second set as well. If so, the rules can be said to have a general nature, which weighs in favour of their validity. If the rules are not found to apply in the second set, the system will try to recognise the "rules" governing that data-set and reconcile them with the "rules" of the first in order to derive a more general set of "rules" that applies to both. In doing so, the system has not only condensed a general set of rules governing both data-sets, but has also expanded its own knowledge of such rules by comparing what it learned from the second data-set with what it learned from the first. This experiential learning process underlies the Machine Learning and Deep Learning approaches to Artificial Intelligence.

Machine learning can be "supervised" or "unsupervised":

• In supervised learning, machines are fed training data that is labelled by humans. Inputs can be variables such as age, blood group, height and weight, which are processed by the machine to determine projected outcomes such as onset of a particular disease within "n" years (Raza, 2020).
Repeated exposure to different data-sets with the same parameters allows the software to increase the accuracy of its predictions by "learning" the correlations of attributes such as age, weight and height with medical emergencies such as stroke.

• Unsupervised learning involves feeding the system data-sets that are not labelled, thereby allowing the software greater freedom to find correlations, links and patterns for interpretation.

Deep Learning is a further subset of the Machine Learning approach to Artificial Intelligence that functions on a principle similar to the human nervous system. It consists of the creation of small computational units called "nodes" or "neurons", arranged in multiple layers, which combine to form a framework that processes information in a manner similar to how pixels combine on a screen to form an image (The Royal Society, 2017). Each "layer" of nodes processes a very specific aspect of the information presented to it before passing it on to the next layer to pick up where it left off, and so on until the output layer is reached. This layered nature of Deep Learning allows it to learn from, derive from, and process larger quantities of data than was possible with traditional Machine Learning algorithms, thereby creating fertile ground for novel discoveries in the data-intensive medical field of Genomics (Zou J, 2019). The choice of approach, i.e., Machine Learning or Deep Learning, is a function



of the nature of the data-sets in question. While Deep Learning algorithms require less domain knowledge to train, they are usually data-hungry and therefore require large amounts of raw data before they are ready to be deployed. Machine Learning algorithms, on the other hand, require the human trainer to be a domain expert, but their results are easier to interpret.
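The supervised-learning loop described above can be made concrete with a deliberately tiny sketch: labelled (age, BMI) examples train a nearest-centroid rule that then classifies a new patient. All figures and the "low"/"high" risk labels are fabricated for the illustration; a real system would learn from large clinical data-sets, typically with a library such as scikit-learn rather than this hand-rolled rule.

```python
# Toy supervised learning: classify a new case by the nearest class centroid.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(examples):
    """examples: list of (features, label) pairs -> {label: centroid}."""
    by_label = {}
    for feats, label in examples:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, feats):
    sq_dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], feats))

# Fabricated (age, BMI) -> risk-label training data.
training = [((34, 22.0), "low"), ((29, 24.1), "low"), ((42, 23.3), "low"),
            ((61, 31.5), "high"), ((55, 29.8), "high"), ((67, 33.0), "high")]
model = train(training)
print(predict(model, (58, 30.2)))  # high
```

"Repeated exposure" in the text corresponds to retraining with more labelled examples, which moves the centroids and sharpens the predictions.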

Genomics and Artificial Intelligence

Genomics is a branch of medical science that concerns itself with the study of the entire genetic makeup of a human being. It uses the information derived therein to predict the onset of certain kinds of hereditary diseases and the chances of passing hereditary diseases on to progeny, while also providing personalised strategies for diagnostic and therapeutic decision-making (Raza, 2020). The field has also become a commercial success through the introduction of direct-to-consumer DNA testing kits that profile saliva samples to provide information about one's genetic heritage, while also allowing one to connect with long-lost relatives who have used the service and whose genetic profiles are saved in the service provider's repository.

Genomics is, by its very nature, a data-intensive field. Sequencing the DNA of a single person involves the identification and analysis of 3 billion DNA base pairs in order to compare them against reference samples and identify features that could enable the software to make the predictions and projections mentioned previously. Since Artificial Intelligence requires prior training to identify features in a DNA sequence, the creation of a diverse library of DNA samples must precede the deployment of such algorithms. In recent years, there has been an explosion in the volume of genomic and biomedical data collected via various means, including but not limited to digitized medical histories, smartwatches and voluntary submission to medical research. It is projected that, at this pace, such genomic and biomedical data will exceed other major sources of big data in the coming few years (Stephens, 2015).
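At its simplest, the comparison against a reference described above is a position-by-position diff of the sequenced sample against a reference fragment. The example below is invented and assumes the two strings are already aligned and of equal length; real variant-calling pipelines first align reads and must also handle insertions and deletions.

```python
# Toy variant detection: report (position, reference base, sample base)
# wherever an aligned sample differs from the reference fragment.
def variants(reference, sample):
    return [(i, r, s) for i, (r, s) in enumerate(zip(reference, sample)) if r != s]

ref    = "ACGTACGTACGT"
sample = "ACGTTCGTACAT"
print(variants(ref, sample))  # [(4, 'A', 'T'), (10, 'G', 'A')]
```

Scaled to 3 billion base pairs per person, this is exactly why the field is described as data-intensive.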
The movement towards gathering as much genetic information as possible has only gained steam in the past decade, as evidenced by the creation of the 100,000 Genomes Project (Genomics England) and the aim of the NHS in England to sequence 5 million genomes in 5 years under the National Genomic Healthcare Strategy (Department of Health and Social Care), to name a few. Genomics is an incredibly relevant area of study in the 21st century; enabled by technology, it can study the entire span of the human life cycle (Shendure, 2019). Until now, this field has been limited by technology that was unable to process and make sense of the sheer volumes of complex information contained in DNA; with recent developments in Deep Learning, this bottleneck could be a thing of the past. One of the major obstacles in the way of widespread use of Artificial Intelligence in Genomics is the insufficiency and unavailability of reliable training data.



Direct-to-consumer genetic testing kits and their associated services account for one of the largest banks of genetic information at this time, but the absence of any concrete regulatory framework governing their operations, and the sensitivity of the data they handle, mean that such data cannot be relied on for training Artificial Intelligence for the healthcare and medical sectors (False-positive results released by direct-to-consumer genetic tests highlight the importance of clinical confirmation testing for appropriate patient care, 2018). Application of "big data" analysis to Artificial Intelligence in healthcare indicates that before we can expect significant outcomes, we must counter two major problems. Firstly, there is a limit on the number of samples available as control variables for genomic analysis and experimentation. Secondly, and more significantly, even if a sufficient number of samples is obtained, they carry inherent statistical noise that falsifies, or otherwise renders inaccurate, conclusions and predictions (Mesko, 2017).

The only counter to the first problem is to have universal standards and streamlined practices for the extraction, storage, transportation, analysis, interpretation and reporting of biological samples, whatever their nature, so as to create a standardized library of diverse and reliable data-sets that can be used to train Artificial Intelligence to carry out genomics studies. The second issue exists because today's genomic databases contain inherent biases. These biases arise from differences in the accessibility and availability of genomic and medical services in different areas of the globe, resulting in reference data-sets skewed in favour of the genetic characteristics of people from a particular region or demographic.
Any genomic analysis of a person outside that demographic or region will be done by the system according to the general rules it deduced from the biased data, and any conclusions or inferences will therefore be incorrect. For example, if the reference data-set of an Artificial Intelligence system engaged in genomic analysis is mostly composed of data derived from elderly African-American men, and the same system is used to analyse the sample of a young Indian girl, the results will be erroneous. What this implies is that our reference data-sets must be equally representative of all races, geographical areas, sexes and other such variables in order to build a system that is truly universal. They would have to be specifically curated, keeping in mind that an Artificial Intelligence analysis will only be as reliable as the data we train it with; this helps us avoid the problem of "garbage in, garbage out" (Braveman, 2006).

In the spirit of progress, there have been calls to streamline and standardize electronic health records so as to make them available as data-sets for Deep Learning. The availability of such data-sets would allow us to train Artificial Intelligence that could read medical data, specifically genetic data, faster and with greater accuracy, which in turn would enable us to use its outputs to prevent diseases and to computationally derive a bespoke, best treatment plan for a given ailment in a given person (Norgeot B, 2019). Realising the immense market potential of such software, technology giants such as Google, Facebook,



Microsoft, Amazon and Apple have become major investors in Artificial Intelligence for healthcare; this has prompted the US FDA (United States Food and Drug Administration, 2019) and the UK (Department of Health and Social Care, UK, 2018) to enact regulations to guide the development of intelligent algorithms in healthcare.

The second issue throws up another pertinent discussion: would respondents in medical research and patients in hospitals be ready to have their genetic data uploaded to a universal genomic database? Present trends indicate otherwise. In one study, 63% of adult respondents from the UK were not only unwilling to have their genetic data used to improve Artificial Intelligence, but were also against the perceived replacement of doctors and nurses by technology (Buston, 2018). A similar study amongst German medical students showed that 83% of them believe in the promise of Artificial Intelligence to improve healthcare (Pinto dos Santos, 2018). This points to a fundamental divergence in views: doctors and medical professionals seem to believe that Artificial Intelligence-enabled machines could be valuable tools in their hands, but the recipients of their services approach the same technology with hesitation. One reason could be the increasing number of data breaches at service providers such as Adobe, eBay, LinkedIn, MySpace and Yahoo, to name a few; these incidents have introduced widespread paranoia about the safety of personal information (Swinhoe, 2021).
Another possible reason could be that most people do not understand how Artificial Intelligence works. This is further exacerbated by the fact that Deep Learning networks operate as “Black Boxes”, in the sense that one only knows what information is to be inputted and what can be expected as the outcome; it is difficult, even for the creators of Deep Learning Artificial Intelligence, to explain what goes on in the nodes of the intermediate layers of the network. This paints the entire activity in an aura of secrecy and naturally causes respondents to be wary of participating in any such research. A perusal of the trends in this area clearly indicates that as genomic research becomes increasingly interdisciplinary, involving areas such as molecular biology, biochemistry, mathematics and computer science (Alvarez-Machancoses, 2020), technology too must advance in order to create toolkits and procedures that are adept at interpreting, analysing and integrating large amounts of data (Becker, 2019).

Other potential applications of Artificial Intelligence

Genomics is a very broad area of study that encompasses many “-omics”, or studies that aim to combine their findings to derive a more profound understanding of the human body and our environment. A great deal of research is currently underway to enable the use of Artificial Intelligence in the biological sciences in order to help doctors and researchers understand the cellular and biological processes that underlie diseases and disorders (Ching, 2018). The following are
some exciting areas of research:

Phenotyping- Phenotyping is a process by which one observes and reports a patient’s features. This information is then used at various junctures along the treatment process to make treatment decisions, prescribe medications and so on. Research is being carried out to enable Artificial Intelligence to extract phenotype data from a variety of sources, such as electronic medical records, and to analyse said data in order to suggest the choice of genetic tests (Beaulieu-Jones, 2016). One such software is “Face2Gene”, which uses “DeepGestalt” computer vision technology to suggest a list of congenital and neurodevelopmental genetic syndromes that a person may be suffering from by looking at their facial features. Face2Gene has remarkable accuracy: when applied in the real world, its list of ten suggested genetic syndromes contained the correct one in 90% of the cases where it was used (Gurovich, 2019). The software was trained by allowing it to scan tens of thousands of patient images labelled with the correct disorder, enabling it to draw similarities and distinguish amongst the many subtle facial cues that result from such disorders.

DNA Sequencing- DNA sequencing is the process of identifying the nucleotide pairs and their sequence in a given biological sample, which can give us a plethora of information about the person, such as their sex, ethnicity, age, genetic predispositions etc. Current sequencing techniques are not perfect; in fact, their results are merely indicative, never conclusive, as they suffer from various kinds of errors and noise. Artificial Intelligence can be trained to predict DNA binding rates from sequence data, which can be used in conjunction with existing sequencing machines to increase their accuracy multi-fold as well as to prevent many of the errors that are introduced into the sample by virtue of the analytical process (Zhang, 2018).
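Sequence-based prediction of this kind starts from a numeric encoding of the DNA string. The sketch below shows the standard one-hot encoding step plus one classic hand-crafted feature; it is purely illustrative and is not the published hybridization-kinetics model cited above.

```python
# One-hot encoding turns a DNA string into numeric features that a model
# (e.g., a binding-rate predictor) can consume. Toy illustration only;
# the cited work (Zhang, 2018) uses far richer learned models.

ENCODING = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
            "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}

def one_hot(seq):
    """Flatten a DNA sequence into a list of 0/1 features, 4 per base."""
    features = []
    for base in seq.upper():
        features.extend(ENCODING[base])
    return features

def gc_content(seq):
    """Fraction of G/C bases -- a classic hand-crafted feature related to binding strength."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

print(one_hot("ACGT"))       # → [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
print(gc_content("GGCCAT"))  # → 0.6666666666666666
```

Features like these would then be fed to any standard classifier or regressor; the encoding, not the model, is the step common to most sequence-based approaches.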
One of the sub-studies of DNA sequencing is Variant Calling, a process by which we determine at which points an individual genome differs from a reference genome; this analysis is useful for detecting various anomalies in the genome that are the underlying causes of many diseases. Artificial Intelligence can be used to increase the accuracy and decrease the processing times of Variant Calling. Google’s “DeepVariant” is one such software: it converts genomic data into images and then approaches the analysis as an image classification problem against a reference set of images, using computer vision algorithms. It has been especially successful, as evidenced by the fact that it has outperformed state-of-the-art Variant Calling software despite not being specifically trained to undertake genomic analysis (Poplin, 2018). Genome Annotation is yet another imperative aspect of DNA sequencing, in which Artificial Intelligence has been extensively used to identify and classify specific DNA sequences and elements of the genome such as splice sites, enhancers, promoters and transcription start sites (Yip, 2013). Accurate identification of such structures allows researchers and doctors to understand various structural, functional and regulatory mechanisms of the body and
thereby predict how variations in genomic structures can increase or decrease the chances of various kinds of diseases (Leung, 2016). Once researchers have the correct picture of one’s DNA, they can edit it by adding, removing, repositioning or otherwise manipulating structural components in order to see how such changes affect the organism. CRISPR is the tool of choice for DNA manipulation across the world, and its use in conjunction with Artificial Intelligence-enabled guidance systems promises better predictability of outcomes (Kim, 2018).

Microbiome studies- Microbiome studies are the genetic studies of each and every microorganism present in a particular body site, e.g., the human digestive tract. Artificial Intelligence can be used to identify and isolate microorganisms in said microbiomes so that scientists and doctors can study how variations in the numbers of these microorganisms, or their absence entirely, influence the functioning of the microbiome. Manipulating the number and types of such microorganisms in specific locations of the body can serve as an alternative to drug-based treatment (Zhou, 2019).

Genetic Counselling- It is one thing to successfully analyse the genetic predispositions of a person, and an entirely different thing to convey the findings to them in a way that is easy to understand. Companies are developing chatbots to supplement their genetic counsellors, so that patients are able to take queries, clarifications and concerns to a chatbot at any time of the day, as opposed to seeking appointments with counsellors. Various companies have achieved degrees of success in this regard: Clear Genetics has created the Genetic Information Assistant, while another company by the name of OptraHealth has created chatbots named GeneFAX and OptraGuru, which can be queried by the use of voice assistants such as Amazon Alexa and Microsoft Cortana (Raza, 2020).
Drug development- Machine learning-enabled Artificial Intelligence has a multitude of possible applications in the pharmaceutical industry, as its application in the field of genomics can be adapted and applied to the processes of drug discovery, repurposing and targeting, and to predicting possible drug responses and interactions (Madhukar, 2018). GlaxoSmithKline has invested in one of the most popular direct-to-consumer DNA testing services, “23andMe”, in order to gain access to its genomic data banks, which enables it to develop targeted drugs using machine learning. Similarly, AstraZeneca and BenevolentAI are using Artificial Intelligence to discover and repurpose new and existing drugs respectively (Raza, 2020). It is therefore clear that the capacity of Artificial Intelligence to process vast quantities of data with ease and accuracy holds enormous potential for developments in the healthcare sector, because its use removes one of the biggest impediments to research: human error and complacency. That being said, its use brings in new considerations and concerns, all of which need to be addressed at the policy-creation level of governance. In the next section we will look at some aspects of policymaking that are imperative to bring Artificial
intelligence into mainstream medical and healthcare research.


Policy gaps and changes required for widespread use of Artificial Intelligence

The impediments in the way of large-scale application of Artificial Intelligence in healthcare can be summarised under three critical heads:

Improvement in the quality and diversity of training data- As has been asserted repeatedly, our Artificial Intelligence systems are only as good and accurate as the training data we feed them. The best Machine and Deep Learning devices will fail if they are exposed to noisy, incorrect, inadequate or otherwise faulty training data. Ideally, training data should be fully annotated, free of noise and derived from a diverse and proportionally representative gene pool (Harvey, 2017). However, the data we have today is noisy and biased. Bias in training data is a serious concern and the Eurocentrism of such data is well recognised; it has the ability to render inaccurate, or completely falsify, testing results for persons from outside the predominant data pool, and can exacerbate health disparities for groups that are already underserved (Manrai, 2016).
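The representativeness requirement described above can be made operational with even a very simple audit of cohort composition. A hedged sketch follows: the group labels, counts and the 10% threshold are illustrative assumptions, not a validated standard.

```python
# Flag any demographic group whose share of the training cohort falls below
# a minimum threshold, so curators can spot the kind of skew (e.g., a
# Eurocentric cohort) discussed in the text. Labels/threshold are illustrative.
from collections import Counter

def underrepresented(groups, min_share=0.10):
    """Return group labels whose share of the cohort is below min_share."""
    counts = Counter(groups)
    total = sum(counts.values())
    return sorted(g for g, n in counts.items() if n / total < min_share)

# A hypothetical, heavily skewed cohort of 100 samples:
cohort = (["european"] * 80 + ["south_asian"] * 12
          + ["african"] * 5 + ["east_asian"] * 3)
print(underrepresented(cohort))  # → ['african', 'east_asian']
```

A real audit would check many axes at once (ancestry, sex, age, geography) against population-level reference proportions rather than a flat threshold.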
These issues can be countered in the following manner:
• By establishing a universal protocol for the extraction, storage, transportation, analysis, interpretation and reporting of biological samples, so as to create a repository of data sets that are correctly identified, annotated and verified;
• By ensuring that data sets are derived from all demographics, geographies and ethnicities in an equally representative manner, so as to create software that is not region-specific but can instead find universal application;
• By creating institutions, agreements and conventions that govern the use of Artificial Intelligence in medical and healthcare-related fields, to ensure that checks and balances are maintained against the fouling of carefully curated training data sets by improper use of Artificial Intelligence;
• By training medical and healthcare professionals to use Artificial Intelligence tools intelligently, such that they are able to recognise their strengths and flaws, and to ensure that human users of these tools are neither over-reliant on them nor wary of them.

Addressing concerns with regard to privacy of personal information and setting accurate expectations- For all the reasons mentioned in the previous section, there is an inherent distrust in people of technology that accesses their personal data. This problem is further exacerbated by technology companies that set unrealistic expectations in the minds of consumers as to the efficacy and employability of their products. What this results in is a “larger-than-life” image of Artificial Intelligence in medicine, as if it were able to achieve feats of analysis that it simply is not capable of at the moment. The fact that the creators of these algorithms
are either unable to, or choose not to, disclose the way in which the algorithm works adds to the mystery surrounding Artificial Intelligence. These issues have to be countered by a holistic policymaking approach that includes, but is not limited to, the following considerations:
• Altering medical school curricula suitably so as to enable future medical practitioners to understand the strengths and limitations of Artificial Intelligence-enabled medical tools;
• Safeguarding privacy by enacting stronger data protection regimes, so as to give individuals maximum control over their personal information while also providing mechanisms whereby their data can be de-identified and used as anonymous inputs into data banks to develop training data for Artificial Intelligence;
• Interdisciplinary training of technical and medical staff in each other’s disciplines, so as to enable them to work cohesively and effectively for the development of further Artificial Intelligence applications in medicine and healthcare.

Addressing infrastructural concerns- Artificial Intelligence is reliant on the foundation created by robust infrastructure, high-quality data sets, and interoperability and sharing standards. However, most applications of Artificial Intelligence are hindered by the inadequacy of computing and data-storage hardware. This is one of the reasons that the UK’s NHS has been unable to expand its Genomic Services beyond a very nascent stage (Ream, 2018).
This issue can also be tackled in the following manner:
• Making cloud computing services available for the interpretation of genomic data will help institutions minimize their computational and infrastructural costs to a great extent, while also making the services available in remote locations;
• Such cloud computing services would have to be enabled by specific legislation and multilateral treaties, so as not to create regulatory issues if the cloud service providers are located in geographical areas beyond the borders of the place where samples are being collected.
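The de-identification mechanism suggested earlier, whereby personal data is stripped of identifiers before entering a shared training bank, can be sketched in a few lines. This is a minimal illustration of pseudonymisation (dropping direct identifiers and replacing the stable ID with a salted hash); the field names are hypothetical, and real regimes such as the HIPAA Safe Harbor standard enumerate many more identifiers.

```python
# Minimal pseudonymisation sketch: drop direct identifiers, replace the
# stable patient ID with a salted hash so records from the same patient can
# still be linked without revealing who they are. Field names are hypothetical.
import hashlib

SALT = b"site-specific-secret"            # held separately from the data bank
DIRECT_IDENTIFIERS = {"name", "address", "phone"}

def deidentify(record):
    """Return a copy of `record` with direct identifiers removed and the ID pseudonymised."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    out["pseudonym"] = hashlib.sha256(SALT + record["patient_id"].encode()).hexdigest()[:16]
    del out["patient_id"]
    return out

rec = {"patient_id": "P001", "name": "A. Sharma",
       "phone": "0000000000", "variant": "BRCA1 c.68_69delAG"}
clean = deidentify(rec)
assert "name" not in clean and "patient_id" not in clean
```

Note that pseudonymisation alone is not anonymisation: rare genomic variants can themselves be re-identifying, which is why the text calls for legal safeguards in addition to technical ones.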

Conclusions

It is clear that Artificial Intelligence is going to be the next “big thing” in medical science. Given that the threshold that has to be met for the introduction of new technologies in healthcare, in terms of technical validation and acceptability, is very high, widespread use of Artificial Intelligence is a distant reality. However, we as a community must prepare ourselves with the mindsets, the information and the infrastructure required by this new innovation, so that when the time comes for this technology to be fully assimilated in healthcare, we are ready. While it is unlikely that any form of Artificial Intelligence or consciousness will
be able to replace human doctors and healthcare workers, they must re-skill and prepare themselves to work shoulder-to-shoulder with increasingly complex and sophisticated machinery, so as to combine into an effective healthcare unit. The doctors who attain success in the coming decades won’t necessarily be those who have superior technical skills, but rather those who have embraced Artificial Intelligence as a valuable, albeit limited, tool.

References
1. Alvarez-Machancoses, O., et al. 2020. On the Role of Artificial Intelligence on Genomics to Enhance Precision Medicine. Pharmacogenomics and Personalized Medicine. 2020.
2. Beaulieu-Jones, BK., Greene, CS., et al. 2016. Semi-supervised learning of the electronic health record for phenotype stratification. Journal of Biomedical Informatics. 64, 2016, pp. 168-176.
3. Becker, A. 2019. Artificial Intelligence in Medicine: What is it doing for us today? Health Policy and Technology. 2019.
4. Braveman, P. 2006. Health disparities and health equity: Concepts and measurements. Annual Review of Public Health. 2006.
5. Buston & Strukelj. 2018. Ethical, Social and Political Challenges of Artificial Intelligence in Health. s.l.: Future Advocacy, 2018.
6. Ching, T., Himmelstein, DS., et al. 2018. Opportunities and obstacles for deep learning in biology and medicine. 2018, Vol. 15, 141.
7. Department of Health and Social Care. NHS must lead the world in Genomic Healthcare. UK Government. [Online] www.gov.uk/government/news/healthminister-nhsmust-.
8. Department of Health and Social Care, UK. 2018. Code of Conduct for data-driven health and care technology. UK Government. [Online] 2018. [Cited: March 21, 2021.] https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology.
9. Tandy-Connor, S., et al. 2018. False-positive results released by direct-to-consumer genetic tests highlight the importance of clinical confirmation testing for appropriate patient care. Genetics in Medicine. Vol. 20, 12, 2018, pp. 1515-1521.
10. Genomics England. The 100,000 Genomes Project. Genomics England. [Online] https://www.genomicsengland.co.uk/about-genomics-england/the-100000-genomes-project/.
11. Gurovich, Y., Hanani, Y., et al. 2019. Identifying facial phenotypes of genetic disorders using deep learning. Nature Medicine. 2019, Vol. 1, pp. 60-64.
12. Harvey, H. 2017. Ready...set...AI: Preparing NHS medical imaging data for the future. Towards Data Science. [Online] 2017. [Cited: March 13, 2021.] https://towardsdatascience.com/ready-set-ai-preparing-nhs-medical-imaging-data-for-the-future-8e85ed5a2824.
13. Internet Society. 2017. Artificial Intelligence and Machine Learning: Policy Paper. Internet Society. [Online] April 2017. [Cited: March 9, 2021.] https://www.internetsociety.org/wp-content/uploads/2017/08/ISOC-AI-Policy-Paper_2017-04-27_0.pdf.
14. Kim, HK., Min, S., et al. 2018. Deep learning improves prediction of CRISPR-Cpf1 guide RNA activity. Nature Biotechnology. 36, 2018, Vol. 3, pp. 239-241.
15. Leung, MKK., Delong, A., et al. 2016. Machine learning in genomic medicine: a review of computational problems and data sets. Proceedings of the IEEE. 104, 2016, Vol. 1.
16. Madhukar, NS., Elemento, O., et al. 2018. Bioinformatics approaches to predict drug responses from genomic sequencing. Methods in Molecular Biology. 2018, Vol. 1711, pp. 277-296.
17. Manrai, AK., Funke, BH., et al. 2016. Genetic Misdiagnoses and Potential Health Disparities. 2016, Vol. 375, 7, pp. 655-665.
18. Mesko, B. 2017. The role of artificial intelligence in precision medicine. Expert Review of Precision Medicine and Drug Development. 2017.
19. Norgeot, B., Glicksberg, BS. & Butte, AJ. 2019. A call for deep-learning healthcare. Nature Medicine. 25, 2019, Vol. 1, pp. 14-15.
20. Pinto dos Santos, D., Giese, D., et al. 2018. Medical students’ attitude towards artificial intelligence: A multicentre survey. European Radiology. 2018, Vol. 29, pp. 1640-1646.
21. Poplin, R., Chang, PC., et al. 2018. A universal SNP and small-indel variant caller using deep neural networks. Nature Biotechnology. 36, 2018, Vol. 10, pp. 983-987.
22. Raza, Sabia. 2020. Artificial Intelligence for Genomic Medicine. s.l.: University of Cambridge, 2020.
23. Ream, M., Woods, T., et al. 2018. Accelerating Artificial Intelligence in Health and Care: Results from a State of the Nation Survey. Academic Health Science Network. [Online] 2018. [Cited: March 13, 2021.] https://ai.ahsnnetwork.com/.
24. Shendure, J., et al. 2019. Genomic Medicine: Progress, Pitfalls, and Promise. ScienceDirect. [Online] March 21, 2019. [Cited: March 9, 2021.] https://www.sciencedirect.com/science/article/pii/S0092867419301527.
25. Stephens, Z., Lee, S., Faghri, F., et al. 2015. Big Data: Astronomical or Genomical? PLOS Biology. [Online] July 7, 2015. [Cited: March 9, 2021.] https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002195.
26. Swinhoe, D. 2021. The 15 biggest data breaches of the 21st century. CSO. [Online] January 8, 2021. [Cited: March 12, 2021.] https://www.csoonline.com/article/2130877/the-biggest-data-breaches-of-the-21st-century.html.
27. The Royal Society. 2017. Machine Learning: The Power and Promise of Computers that Learn by Example. The Royal Society. [Online] April 2017. [Cited: March 9, 2021.] https://royalsociety.org/~/media/policy/projects/machine-learning/publications/machine-learning-report.pdf. 978-1-78252-259-1.
28. United States Food and Drug Administration. 2019. Artificial Intelligence and Machine Learning in Software as a Medical Device. USFDA. [Online] 2019. [Cited: March 9, 2021.] https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device.
29. Yip, KY., Cheng, C., et al. 2013. Machine learning and genome annotation: a match meant to be? Genome Biology. 14, 2013, Vol. 5, p. 205.
30. Zhang, XJ., Fang, JZ., et al. 2018. Predicting DNA hybridization kinetics from sequence. Nature Chemistry. 10, 2018, Vol. 1, pp. 91-98.
31. Zhou, YH. & Gallins, PA. 2019. A review and tutorial for machine learning methods for microbiome host trait prediction. Frontiers in Genetics. 10, 2019, p. 579.
32. Zou, J., Huss, M., Abid, A., et al. 2019. A primer on Deep Learning in Genomics. Nature Genetics. 51, 2019, Vol. 1, pp. 12-18.



9 The Effectiveness of International Space Law Treaties in the Context of AI: A Critical Review

Nalin Malhotra1

1 Contributing Researcher, Indian Society of Artificial Intelligence and Law; research@isail.in

Abstract. This is a Discussion Paper submitted for the Indian Strategy on AI and Law Programme.

Introduction

Artificial Intelligence has been disruptive and has made its presence felt across all industries, but few have been as receptive to AI as the space industry. The need for intelligent machines to perform particular tasks in areas as hostile and dangerous as space has led to an increased focus on the development of intelligent machines that can perform these functions without the need for human presence. However, this reliance and increased focus also comes with its own problems, such as privacy and liability concerns, which necessitate proper legal mechanisms and instruments to keep pace with these changes. Artificial Intelligence has seen an increased focus in discussions of space technologies, and we aim to look at this development in the context of existing legal frameworks and to ask whether they require an updated focus. The space industry is a service-and-needs-based industry driven by demand and competitive industry logics, without a centralised regulatory authority. A unique consequence of this is that the repercussions of space activities are ubiquitous: their benefits and costs are spread across states and society. Currently, AI is used in space activities in two ways:
• Autonomous Space Objects: autonomous bodies, robots, satellite constructions etc. that not only analyse data but also conduct activities that would not be possible for humans, such as collecting data, operating probes, preserving space assets and Active Debris Removal (ADR);
• Analysing the data collected, such as conducting predictive analysis from very-high-resolution satellite imagery, geospatial analysis and debris monitoring; these large datasets can also be stored via space cloud computing, i.e., where data
is stored on space-based assets.

The primary concerns when it comes to AI in space are privacy issues, such as citizen tracking, fake imagery and biased automated decision-making, and liability issues, such as damage caused by autonomous spacecraft (e.g., collisions, malware). These concerns have been exacerbated, and the confidentiality of the information collected is now on people’s minds. Artificial Intelligence-based technologies are able to handle large structured and unstructured datasets and to derive information out of them44 that might have been missed by a human; the possibility of missing critical and relevant information, and the proneness of the task to human error, has necessitated the use of technology-based solutions that ease this derivation of information45. Sensors in outer space collect massive amounts of spatial data46, the storage and analytical costs of which are very high, and it would be very difficult to analyze this data without such help. AI is also in a position to deal with problems unique to the outer space environment, such as the electromagnetic radiation emitted by celestial bodies, which makes communication difficult. Given the increased difficulty of communicating not only over huge distances but also through a medium prone to interruption from such noise, it is considered effective to focus on technologies that can quickly make decisions about new stimuli without communicating to and from Earth, which might risk missing potential opportunities or threats. This was noted in the case of the Mars rovers, where a signal between the two planets took up to 24 minutes to travel one way.
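The delay follows directly from distance and the speed of light. A quick check, using approximate Earth–Mars distances (the figures below are rounded assumptions, not mission data):

```python
# One-way signal delay = distance / speed of light. Earth-Mars distance
# varies roughly between 5.5e10 m (closest approach) and 4.0e11 m
# (near maximum separation); both figures are approximations.
C = 299_792_458  # speed of light in vacuum, m/s

def one_way_delay_min(distance_m):
    """One-way light-travel time in minutes for a given distance in metres."""
    return distance_m / C / 60

print(round(one_way_delay_min(4.0e11), 1))  # near maximum separation → 22.2 (minutes)
print(round(one_way_delay_min(5.5e10), 1))  # near closest approach  → 3.1 (minutes)
```

So a command-response round trip near maximum separation approaches three quarters of an hour, which is why on-board autonomy matters.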
As this is a long time for making decisions, focus has been placed on increasing the cognitive capabilities of robots in order for them to make their own decisions.47 Artificial Intelligence-based technology also has immense application in national defense, as geospatial intelligence can interact with, study and analyze objects near or around Earth and its orbit. It extracts these images and later analyzes them, providing the necessary information for any further decision-making.48 This application has also been seen to extend to other planets and celestial objects, e.g., AEGIS, i.e., Autonomous Exploration for Gathering Increased

44 Goodwin, S.: Data rich, information poor (DRIP) syndrome: is there a treatment? Radiology Management 18(3) (1996) 45–49.
45 https://towardsdatascience.com/artificial-intelligence-for-internal-audit-and-risk-management-94e509129d49#2402
46 P. Soille, S. Loekken, and S. Albani (Eds.), Proc. of the 2019 Conference on Big Data from Space (BIDS 2019), EUR 29660 EN, Publications Office of the European Union, Luxembourg, 2019. https://www.bigdatafromspace2019.org/QuickEventWebsitePortal/2019-conference-on-big-data-from-space-bids19/bids-2019
47 Downer, Bethany. The Role of Artificial Intelligence in Space Exploration, 2018. https://www.reachingspacescience.com/single-post/Al-in-SpaceExploration
48 Walker, J.R. The Rise of GEOINT: Technology, Intelligence and Human Rights. In: Ristovska S., Price M. (eds) Visual Imagery and Human Rights Practice. Global Transformations in Media and Communication Research, 2018. https://doi.org/10.1007/978-3-319-75987-6_5

Science is an autonomous system that has been tested on the Curiosity rover. Though initially used by the Opportunity rover to identify and take pictures of boulders on Mars, it has since been able to distinguish such boulders closely and to recognize and examine their materials.49 The applications are not limited to communication between Earth and the robot: Artificial Intelligence-based technology can also be used to assist astronauts by relaying important information to them, such as differences in temperature, changes in the level of carbon dioxide etc. CIMON, a space robot that can only communicate with people it recognizes via facial recognition50, and the NASA-developed AI companion ISS Robonaut are some examples of AI-based astronaut assistants.51 Mission design and planning have always been a complex and integral part of any space exploration program; they require a lot of human effort and rely on information derived from previous missions. Artificial Intelligence technology has started to help with the effort of mission planning and design. Daphne is an example of an intelligent assistant that helps in designing Earth observation satellite systems. It provides access to relevant information and feedback, and answers specific queries.52 Another use of AI-based technology is dealing with the problem of space debris. Missions that are not successful may often end up as space junk, becoming space debris. As per recent estimates, there are about 34,000 objects bigger than 10 cm53 in orbit that can pose serious threats to satellites, space stations and other infrastructure.
The design of collision-avoidance maneuvers by way of machine learning is a possible solution: machine learning can accurately pinpoint the location of space debris, determine the probability of a collision and algorithmically avoid it.54

Ethical Considerations of use of AI

Though the adoption of Artificial Intelligence-based technology in the space sector makes it easier to plan and conduct missions and reduces costs significantly, it also raises a number of ethical concerns. There have previously been cases of identification of suspects via satellites, such as the use of

49 Sim, Herbert R. A Force for Space: Artificial Intelligence (AI) Technology? 2018. https://herbertrsim.com/a-force-for-space-artificial-intelligence-ai-technology
50 Skripin, Vladimir. SpaceX sends the CIMON robot assistant, with the IBM Watson AI system, to the ISS [in Russian]. ITC.ua. https://itc.ua/news/spacex-otpravila-na-mks-robota-pomoshhnika-cimon-s-sistemoy-ii-ibm-watson
51 https://theconversation.com/five-ways-artificial-intelligence-can-help-space-exploration-153664
52 Id.
53 https://www.esa.int/Safety_Security/Space_Debris/Space_debris_by_the_numbers
54 https://arc.aiaa.org/doi/full/10.2514/1.G005398; https://www.analyticsvidhya.com/blog/2021/01/artificial-intelligence-in-space-exploration/
Google Earth by the Oregon police to identify a man growing marijuana illegally in his backyard55. In 2018, when trees were being ripped out of the ground in order to produce charcoal, the Brazilian police identified and later arrested six suspects based on satellite imagery.56 It is clear that satellite imagery has made it possible for the police to identify suspects. There have also been instances of positive use of such technology: recently, satellite imagery was used to establish that the Uighur re-education camps in Xinjiang province are guarded by watchtowers and surrounded by razor wire.57 Despite such instances of positive use, the possibility of misuse of such technology still looms large. Concerns about the use of such technology for mass surveillance and racial profiling are likely possibilities. AI-based technology is created fundamentally through data; if this data carries a racial bias, the same bias will appear in the AI technology as well. Massive constellations of small satellites deployed in Low Earth Orbit have increased observation capabilities by utilizing a massive influx of data58. These satellites provide more frequent image updates and can refresh in real time at a much lower cost. Previously, satellites would gather terabytes of data that would later be transmitted to the ground station, which would then process and review it. Now these satellites carry mission applications and AI-based technology that conduct the processing and reviewing operations in orbit itself.59 Thus, only relevant information is transmitted to the ground station, and information and images that indicate anomalous or unusual behavior are easily recognized. In pursuance of this, some companies have developed machine learning applications to detect changes, which allows users to track the changes on their properties.60 Concerns about the use of such technology loom large.
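At its simplest, the on-board change detection described above reduces to comparing co-registered image tiles and flagging those that differ. A toy sketch follows; the thresholds and tile values are illustrative assumptions, and operational systems use learned models rather than raw pixel differencing.

```python
# Flag a tile as "changed" if enough of its pixels differ from the earlier
# capture beyond a per-pixel threshold. Tiles are flat lists of grayscale
# values here; real pipelines work on georegistered multi-band rasters.

def changed(before, after, pixel_thresh=30, fraction_thresh=0.05):
    """True if more than fraction_thresh of pixels differ by more than pixel_thresh."""
    diffs = sum(1 for a, b in zip(before, after) if abs(a - b) > pixel_thresh)
    return diffs / len(before) > fraction_thresh

tile_before = [100] * 100
tile_after = [100] * 90 + [200] * 10   # 10% of pixels changed strongly
print(changed(tile_before, tile_after))  # → True
```

Only tiles that trip the flag would then be downlinked, which is exactly the bandwidth saving the paragraph above describes.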
These technologies are capable of doing material harm (harm to the safety and health of an individual) and immaterial harm (limitation of the right to freedom of expression, violations of human dignity, discrimination and the possibility of loss of jobs). With facial recognition data accessible from a plethora of sources, and with more and more self-exposure via social media, not only has facial recognition become easier, but various pictures can also be technically converted into biometric data.61 Thus, a massive amount of facial recognition data is present that

55 https://www.cbsnews.com/news/google-earth-used-to-bust-oregon-medicinal-marijuana-garden-police-say/
56 https://blog.globalforestwatch.org/people/amapa-police-use-forest-watcher-to-defend-the-brazilian-amazon/
57 https://www.reuters.com/investigates/special-report/muslims-camps-china/
58 Popkin, G. “Technology and satellite companies open up a world of data”. https://www.nature.com/articles/d41586-018-05268-w
59 http://interactive.satellitetoday.com/via/december-2019/space-2-0-taking-ai-far-out/
60 Gal, G.A., Santos, C., Rapp, L., Markovich, R. and van der Torre, L., 2020. Artificial intelligence in space. arXiv preprint arXiv:2006.12362.
61 https://www.cnil.fr/sites/default/files/atoms/files/facial-recognition.pdf


134

Artificial Intelligence and Policy in India (2021)

could be utilized for surveillance of an individual and tracking them. The higher availability of facial recognition data can also mark everyday normal behavior such as wearing sunglasses, hoodies as suspicious because it may stop or hinder the identification process via imagery.62 Racial Profiling is a very high likelihood. It involves recognizing a pattern comparable to categorization, generalization and stereotyping.63 Very-HighResolution images may be used for this purpose and for discriminatory profiling64. At times the surveillance of marginalized communities might be disproportionate to that of the general populace. As a result of their marginalized status, they may not be able to actively advocate against such practices. Certain characteristics, general behaviors and preferences of individuals may also be clustered together without prior identification of which these individuals may not be aware of65. This creates derived data that leads to incorrect and biased decisions that are discriminatory, erroneous, unjustified about health, insurance etc.66 The risk of loss of privacy of an individual also remains high67. It is a recognized right of an individual’s privacy that they should have the freedom to move about in their homes and non-public areas such as their gardens, backyards etc. without being identified and their movements not tracked68. This also carries over in limiting the ability of individuals freely associating with whomever they would want to as satellite imagery and surveillance can also recognize individual associations. Legal Concerns of AI in space law International Space Law primarily consists of 5 treaties that form the core of International Space Law, of which the Outer Space Treaty69 forms the cornerstone of International Space Law. Since the wide adoption of Artificial Intelligence based technology, numerous limitations of the treaties has come to light. 
It has been found that these treaties are largely normative in their approach and limited in their scope. Article VI of the Outer Space Treaty states:70

"States Parties to the Treaty shall bear international responsibility for national activities in outer space, including the moon and other celestial bodies, whether such activities are carried on by governmental agencies or by non-governmental entities, and for assuring that national activities are carried out in conformity with the provisions set forth in the present Treaty. The activities of non-governmental entities in outer space, including the moon and other celestial bodies, shall require authorization and continuing supervision by the appropriate State Party to the Treaty…"

Article XIII of the Outer Space Treaty states:

"The provisions of this Treaty shall apply to the activities of States Parties to the Treaty in the exploration and use of outer space, including the moon and other celestial bodies, whether such activities are carried on by a single State Party to the Treaty or jointly with other States, including cases where they are carried on within the framework of international intergovernmental organizations. Any practical questions arising in connection with activities carried on by international intergovernmental organizations in the exploration and use of outer space, including the moon and other celestial bodies, shall be resolved by the States Parties to the Treaty either with the appropriate international organization or with one or more States members of that international organization, which are Parties to this Treaty."

Article XIII thus makes clear that the provisions of the treaty apply only to States Parties; read with Article VI, the responsibility for the activities of non-governmental entities is borne by the states party to the treaty. In recent years, however, the space industry has seen, and is projected to continue to see, increased participation by the private sector.

Footnotes:
62. Id.
63. Hildebrandt, M., & Gutwirth, S., Profiling the European Citizen, Springer, 2008.
64. ICO, Big Data, Artificial Intelligence, Machine Learning and Data Protection, UK, 2017.
65. van der Sloot, B., "Privacy in the Post-NSA Era: Time for a Fundamental Revision?", Journal of Intellectual Property, Information Technology and E-Commerce Law, 5(1), 2014.
66. Edwards, L., "Privacy, security and data protection in smart cities: a critical EU law perspective", European Data Protection Law Review, Berlin, Vol. 2, 2016, pp. 28-58.
67. Nissenbaum, H., Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford, CA: Stanford Law Books, 2010), 70-72; Solove, D., Understanding Privacy (Cambridge, MA: Harvard University Press, 2008), 24-29.
68. Finn, R. L., Wright, D. et al., "Seven Types of Privacy", in Gutwirth, S., Leenes, R., de Hert, P., Poullet, Y. (Eds.), European Data Protection: Coming of Age, Springer, Dordrecht, 2013, p. 16.
69. Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies, Jan. 27, 1967, 610 U.N.T.S. 205 [hereinafter OST].
With the Department of Space in India, close to 500 companies have partnered with ISRO.71 With increased participation and investment by private firms in Artificial Intelligence based space technology, such as EagleView, Planet and Cape Analytics, it is also expected that big data companies like Facebook will enter the industry sooner or later.72 It is clear that the future of space lies more and more in the hands of the private sector. Given this increased participation of private players, it is evident that the Outer Space Treaty is limited in that its provisions do not apply to non-state actors. This raises concerns about treaty shopping, whereby private players may select states that lack proper regulations concerning Artificial Intelligence as their base of operations. Though it is a recognised practice that any discretion given to states must be exercised in an objective manner as per international standards rather than subjective ones,73 such standards are not yet clearly defined even in international law. This concern is particularly exacerbated in the case of Artificial Intelligence, as it is still a new and developing field, and states are hesitant to overregulate it for fear that such regulation may hinder the progress of newer innovations in the technology. That these obligations are limited to states reduces the bite of the treaty and relegates the regime to the rule of politics rather than that of law.74 Furthermore, the private actors concerned may outrightly ignore or flout the treaty provisions and, by the sheer might of their rent-seeking behavior, might be able to get away without facing any serious repercussions, as the onus rests primarily on the state parties to the treaty. This also gives states with a higher balance of power in global politics an unfair advantage, and here the potential fallout of any illegal conduct by private bodies is much greater, as any damage or harm caused may have extraterritorial effect and could end up jeopardizing the rights and safety of citizens of other states. This is seen in the existence of a jurisdictional limit in space law, where the consent of state parties is required rather than disputes falling under the jurisdiction of a third party.75

Footnotes:
70. Id.
71. https://pib.gov.in/PressReleasePage.aspx?PRID=1655664#:~:text=The%20role%20of%20New%20Space,in%20carrying%20out%20space%20activities.
72. https://www.gwprime.geospatialworld.net/opinion/the-impact-of-artificial-intelligence-on-space-investment/
Since the treaty does not place direct obligations on non-state actors, they cannot come under the jurisdiction of an international organization unless they commit a recognized international crime such as piracy.76 States can at times extend their jurisdiction to have extraterritorial application as per customary international law,77 and if remedy is sought for a harm, the law concerned would be state law unless both parties have agreed to arbitration. Another problem arises when we consider that the provisions of space law say nothing about Artificial Intelligence. This makes adjudicating AI space-based disputes extremely difficult, as it is not certain what substantive law would be applicable. Article 18 of the Liability Convention78 states: "The Claims Commission shall decide the merits of the claim for compensation and determine the amount of compensation payable, if any."

Article 16 of the Liability Convention79 states that such a commission shall decide its own procedure. This lack of clear substantive law concerning AI applications therefore limits the ability of the law to formulate a uniform legal regime.80 The rule of negligence is generally used in cases where harm is done by a machine; however, this necessitates a human fault,81 while product liability concerns failures in design and failure to warn about foreseeable risks.82 It is also speculated that product liability may not be an exact fit, as it requires determining a causal link between the damage caused and a defect in the product. It has thus been suggested that a new concept of liability be developed that alters the reasonable man standard to be applicable to robots, such as a 'robot common sense' standard, for it to be applicable in AI cases.83

As we have previously noted, there are many ethical concerns about AI based technology being used for surveillance, with the possibility of causing discrimination, profiling, loss of privacy and so on. It is therefore important to look at the various human rights law instruments that govern such issues. The right to privacy is provided for under Article 12 of the Universal Declaration of Human Rights84 and Article 17 of the International Covenant on Civil and Political Rights85. Though the term racial profiling is not mentioned in the International Convention on the Elimination of All Forms of Racial Discrimination,86 in Williams Lecraft v. Spain (2009)87 the Human Rights Committee under the ICCPR became the first treaty monitoring body to recognise racial profiling as unlawful discrimination. On 24 November 2020 the CERD Committee issued a General Recommendation discussing the issue of racial profiling at length.

Footnotes:
73. H. Bittlinger, Hoheitsgewalt und Kontrolle im Weltraum (Carl Heymanns, Cologne 1988) 37; A. Bueckling, 'The formal legal status of lunar stations' (1973) 2 Journal of Space Law 113, 115.
74. Supra note 17, p. 19.
75. Marc S. Firestone, "Problems in the Resolution of Disputes Concerning Damage Caused in Outer Space", 59 Tul. L. Rev. 747, 763 (1985).
76. Gal, G.A., Santos, C., Rapp, L., Markovich, R. and van der Torre, L., 2020. Artificial intelligence in space. arXiv preprint arXiv:2006.12362, p. 19.
77. United States v. Ali, 885 F.Supp.2d 17, 25-26, rev. in part on other grounds 885 F.Supp.2d 55 (D.D.C. 2012).
78. https://www.unoosa.org/oosa/en/ourwork/spacelaw/treaties/liability-convention.html
79. Id.
80. Gal, G.A., Santos, C., Rapp, L., Markovich, R. and van der Torre, L., 2020. Artificial intelligence in space. arXiv preprint arXiv:2006.12362, p. 20.
81. Dr. Iria Giuffrida, "Liability for AI Decision-Making: Some Legal and Ethical Considerations", 88 Fordham L. Rev. 439, 443 (2019).
82. Jason Chung & Amanda Zink, "Hey Watson - Can I Sue You for Malpractice? Examining the Liability of Artificial Intelligence in Medicine", 11 Asia-Pacific J. of Health L. Pol'y & Ethics 51, 68 (Nov. 2017); Megan Sword, "To Err Is Both Human and Non-Human", 88 UMKC L. Rev. 211, 224 (2019).
83. Supra note 37.
84. UN General Assembly, Universal Declaration of Human Rights, 10 December 1948, 217 A (III), available at: https://www.refworld.org/docid/3ae6b3712c.html [accessed 22 March 2021].
85. UN General Assembly, International Covenant on Civil and Political Rights, 16 December 1966, United Nations, Treaty Series, vol. 999, p. 171, available at: https://www.refworld.org/docid/3ae6b3aa0.html [accessed 22 March 2021].
86. UN General Assembly, International Convention on the Elimination of All Forms of Racial Discrimination, 21 December 1965, United Nations, Treaty Series, vol. 660, p. 195, available at: https://www.refworld.org/docid/3ae6b3940.html [accessed 22 March 2021].
87. Williams Lecraft v. Spain (2009).
Section 7 of General Recommendation No. 3688 recognises how algorithmic profiling can exacerbate and increase discrimination, and how predictive policing methods across states can increase racial profiling. Predictive policing involves utilising criminal and social databases to determine the likelihood of crime or the possibility of violent crime.89 Among its recommendations are not only human rights education for the personnel who formulate such artificial intelligence based technology, but also the formulation of a comprehensive legislative framework to combat racial profiling, and bringing the internal standards of law enforcement organisations up to the mark of human rights standards. We therefore see a need for the international treaty instruments to be updated: though the ICERD regime now recognises racial profiling, it is still focused on states' actions and puts the ultimate responsibility on the states party to the agreement. There needs to be an active recognition of non-state actors in order to maintain an effective legal regime that goes hand in hand with the advancements in technology.

Concluding Remarks

We have seen the need for the development of Artificial Intelligence in the modern space age; this investment is increasingly being undertaken by private actors, either independently or in collaboration with government space agencies. Given the recent declaration by the Department of Space in India, this advancement and participation of private players is only expected to increase, at the international level as well. The advancement of technology has, however, brought its own concerns of privacy, profiling and mass surveillance. These are real concerns that affect society at large through the algorithmic analysis of large very-high-resolution images to monitor changes, which can exacerbate profiling and lead to unlawful discrimination. International space law has not yet met the challenge of keeping pace with advancing technology: it does not mention any application of artificial intelligence and is limited in scope by focusing mainly on state parties. This puts the entire responsibility on the states and their

Footnotes:
88. https://tbinternet.ohchr.org/Treaties/CERD/Shared%20Documents/1_Global/CERD_C_GC_36_9291_E.pdf
89. Richardson, Rashida, Jason Schultz, and Kate Crawford, "Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice" (2019), New York University Law Review Online, Forthcoming. Available at SSRN: https://ssrn.com/abstract=3333423
domestic legislation; private actors may indulge in treaty shopping, starting their operations in jurisdictions with relatively lax legislative frameworks and thus circumventing the provisions of the treaty. Because of the unique operation of the industry, any fallout from the actions of one player may result in extraterritorial damage and harm. The question of liability for the actions of AI based technology also remains open, as international law is not clear on a comprehensive framework applicable to the new technology. Questions about utilising strict liability, fault liability, product liability, or a new concept of liability based on a different standard of negligence altogether, still loom large. These concerns are also exacerbated when it comes to international human rights law; though, as with international space law, the responsibility rests mostly upon the states, this regime is somewhat more comprehensive, as discussions of algorithmic profiling have taken place and recommendations have been made. For technology to advance effectively, it must therefore go hand in hand with new legal standards that maintain a proper check on new technologies and their after-effects.

10
The Effectiveness of International Space Law Treaties in the Context of AI: A Critical Review

Hemang Arora1

1 Research Intern (former), Indian Society of Artificial Intelligence and Law
research@isail.in

Abstract. This is a Case Study Paper submitted for the Indian Strategy on AI and Law Programme.

Introduction

Every day, several videos and articles show up on the internet teaching tricks to become popular on different social media platforms. Most of them do not work, but those that do can be used to go viral in a short period. One can understand this as learning the algorithm's pattern and then simply tricking it by following the same. These algorithms analyse several patterns to decide whether a post shall be recommended to different users or not, which determines its discoverability and reach. Looking at this issue from a non-regulatory point of view, one can argue that it is a trivial matter that the platform itself should look into. However, with the increase in social media users and its effect on us, it can have far-reaching consequences. It can have a significant impact on the financial market, both for the investor and the regulator. The methods of doing so range from spreading fake news to making an important announcement less discoverable. Such acts need not always be intentional on the users' part; moreover, a question that arises is, if the act is unintentional, will the platform running the social media website be held responsible? This article looks at several case studies to understand the issues involved and then ends with some proposed solutions.

Case Study 1 (Coffee, 2017)

• Nowadays, high-frequency traders (using algorithms to make transactions within microseconds), who dominate the stock market, are duped using the famous 'pump-and-dump' scheme. Herein, several firms posing as 'Social Media Consultants' try to influence these algorithms by posting hundreds of fake articles about public companies, which in turn affect the algorithms used by high-frequency traders that act on such posts. Such acts also prompt us to ask why the social media organisation itself does not immediately remove such bot-generated posts spreading fake news. Even if we consider this a downside of microsecond trading, what needs to be understood is that several retail traders are also caught up in the fraud; their investing decisions end up being manipulated. In the United States, Section 17(b) of the Securities Act of 1933, enforced by the Securities and Exchange Commission, makes it unlawful to publicise any security for consideration unless that consideration has been disclosed. The article ends by reiterating that several innocent retail investors are caught up in these frauds and that fake news distorts and corrupts market transparency, which defeats the fundamental objective of federal securities law itself.
• It shows us how social media can act as an 'enabler' in this process of market manipulation. Even though such practices have been marked as 'unlawful' by most market regulators, only a few of the actual cases are reported. Social media websites can use several possible fixes (through machine learning): automatic fact-checking to curb fake news, monitoring databases to detect similar messages being spammed from a single source, and changing recommendation patterns as soon as they are 'cracked' by these third parties. Lastly, it also shows how important it is for regulators to tackle this problem immediately by collaborating with such websites to track such frauds and by releasing a set of factors describing what amounts to manipulation (an extension of the traditional definition).
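The "similar messages spammed from a single source" check mentioned above can be sketched as a simple near-duplicate detector based on Jaccard similarity of word bigrams. This is a hypothetical toy (posts, threshold and normalisation are invented); production spam pipelines are far more sophisticated.

```python
import re

def bigrams(text):
    """Lowercase word bigrams of a post (crude normalisation)."""
    words = re.findall(r"[a-z0-9$]+", text.lower())
    return {tuple(words[i:i + 2]) for i in range(len(words) - 1)}

def similarity(a, b):
    """Jaccard similarity of two posts' bigram sets."""
    sa, sb = bigrams(a), bigrams(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def flag_near_duplicates(posts, threshold=0.5):
    """Indices of posts that are near-duplicates of some other post."""
    flagged = set()
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if similarity(posts[i], posts[j]) >= threshold:
                flagged.update({i, j})
    return sorted(flagged)

posts = [
    "Breaking $ACME shares set to triple buy now",
    "Breaking news $ACME shares set to triple buy today",
    "Lovely weather in Mumbai this morning",
]
print(flag_near_duplicates(posts))  # the two near-identical pump posts are flagged
```

Grouping the flagged posts by originating account or IP would then surface the "single source" behind a coordinated pump.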
Case Study 2 (Cornelius, 2015)

• In the James Alan Craig case, tweets by a trader from Scotland were responsible for a loss of $1.6 million in a pharmaceutical firm's shareholder value. He set up several fake accounts on Twitter, two of which were modelled after two equity research firms. One of his Twitter handles, 'ConradBlock@Mudd1waters', resembled the influential equity research firm 'Muddy Waters'. Both of his accounts had been created a while back and had very few followers. On 29 January, he tweeted about a company being under DOJ investigation; this caused the share price of that firm to fall by 26%, but it recovered soon after trading was halted and the claims were debunked. Two days later he posted similar fake news about another firm, 'Sarepta'; this time, before the fraud could be exposed, his tweets caused a 16% drop in its share price. Both firms' stock prices were back to normal within a few days, and James Craig could face imprisonment for his acts. This event raises the question of why an account with barely 20 followers was recommended to so many users: Twitter's algorithm recommended a tweet that any reasonable person would normally have ignored as fake news.
Although no material evidence was available online, answering that question raises more profound questions about the algorithm and the liability aspect. What if someone cracks the recommending patterns used by such websites and manipulates the stock market? Social media websites, through machine learning, need to keep changing their patterns automatically. Moreover, there need to be proper guidelines from regulating agencies based on several checks, like automatic fact-checking and probation periods for new accounts (limited reach and accessibility). These measures are simply part of a non-exhaustive list of steps required to stop such events from happening again. Such acts also raise a fundamental question of liability: will a social media website be responsible if its algorithm recommends fake news? All these issues need to be taken up by regulatory bodies before such regulation becomes too challenging to implement efficiently.

Case Study 3 (When the Going Gets Tough, The Tweets Get Going! An Exploratory Analysis of Tweets Sentiments in The Stock Market, 2018)

• This is a research project wherein the researchers tried to find a relation between positive sentiments in different tweets and the parallel changes in stock prices. Two famous cases are discussed: one wherein Cynk Technology's stock surged by 36,000% within a few weeks due to publicity generated with bots and fake accounts on social media, and another wherein a single fake tweet about an explosion at the White House wiped out $130 billion from the market as high-frequency traders executed millions of transactions within microseconds. The objective of citing these cases was to show how easily sentiment can influence the entire market. The research monitored around 100 accounts and recorded the parallel changes in several stocks, using parameters like the number of tweets, words, hashtags, sentiment, etc.
Although the methodology implemented was much more complicated, it was based on such primary factors. The study concluded that dominance behaviour in tweets can have very positive and very negative effects on the subject matter.
• This research's relevance for this paper is that it shows how vulnerable this field is: with the right algorithms, these stocks can be easily manipulated. The study was performed for academic purposes and focused on only a few factors; if the same were conducted at a much larger scale and considered many more parameters, it could lead to an almost perfect stock prediction machine. If used correctly, it could change the entire stock trading scenario (50% of global traders have already started using such software). However, if it falls into the wrong hands, the social media website's algorithm can itself be tricked into manipulating stock prices. If performed over the long term and with a large amount of data, the predictions can become very precise. Transparency, which was initially a fundamental requirement for fair trading, is now raising issues relating to its possible abuse. The social media giants need to come up with much more innovative counter-techniques, so that in such events they can quickly identify the manipulation and compile enough evidence for prosecution. Nowadays, it is almost impossible for any successful social media platform not to use Artificial Intelligence in its algorithms. These technologies have drastically changed how people use social media, serving whichever posts, in whichever order, ensure the highest usage. However, these clever techniques can also cause a lot of problems in the financial market. Through this article the author tries to highlight the same by discussing several case studies.
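The sentiment-versus-price relationship studied above can be sketched in a few lines: score each day's tweets with a word lexicon and correlate the scores with same-day price moves. Everything here is invented for illustration (the word lists, tweets and price figures are toy data, not the study's actual methodology or results).

```python
POSITIVE = {"surge", "buy", "great", "bullish", "profit"}
NEGATIVE = {"crash", "sell", "fraud", "bearish", "loss"}

def sentiment(tweet):
    """Naive lexicon score: positive words minus negative words."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented toy data: daily tweet batches and same-day price moves (%).
daily_tweets = [
    ["great bullish surge", "buy buy buy"],
    ["fraud alert sell now", "total crash"],
    ["profit looks great", "bullish again"],
]
price_moves = [2.5, -3.0, 1.8]
scores = [sum(sentiment(t) for t in batch) for batch in daily_tweets]
print(round(pearson(scores, price_moves), 2))  # strongly positive on this toy data
```

A strong correlation on data like this is exactly what makes the field attractive to both forecasters and manipulators: whoever can move the sentiment signal can expect the price to follow.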

Case Study 4 (Prince): Reducing Discoverability

• The Securities and Exchange Commission in the United States of America now allows companies to make announcements on social media. Suppose a firm understands that using particular terms on a social website makes its post less discoverable among other users. The firm then posts very important information that is likely to influence its stock price, but includes within the message a few terms which ensure it shows up on the explore pages of only a very few users. One can argue that the firm complied with all the conditions required by the regulatory body: it posted authentic information on a public social media website. Say a company 'X' posted on Facebook that it is acquiring a company 'Y'; this is very likely to drive up the market price of company 'X'. However, the announcement posted on Facebook did not reach enough users, and by the time the information got out, the shares had been bought up at a lower price by a few insiders at company 'X'.90 This also raises a question of accountability: will Facebook be responsible for such an issue? Even if it is accountable, it becomes very challenging to detect the practice and gather enough evidence to prove it. It also makes clear that even though such announcements are now allowed to be featured as posts on social media, there should exist a proper format for them, including parameters like caption format, number of characters, font size and font style.

Case Study 5 (Sucheta Dalal, 2019): Need for Immediate but Effective Market Monitoring and Regulation

• SEBI, the Securities and Exchange Board of India, is the regulatory and monitoring body in India which ensures the smooth and fair working of the securities market. There exist two major stock exchanges in India: the NSE (National Stock Exchange) and the BSE (Bombay Stock Exchange). It is not easy for a company to get listed on these exchanges; they require an extensive amount of compliance undertakings and the submission of several documents, with the fundamental objective of promoting transparency and avoiding fraud in the market. SEBI also requires these companies to make regular announcements about changes within the firm, including changes in board positions, board meetings, etc. Mr. Ajay Tyagi, the chairman of SEBI, recently announced at a conference at the National Institute of Securities Markets that SEBI is acquiring capabilities to monitor social media to ensure no market manipulation takes place. He further elaborated that SEBI is already using AI technologies and plans to expand their usage in monitoring any such manipulation (Jayshree P. Upadhyay, 2020). These steps were much needed; however, reports about the actual state of affairs prompt us to look further. A whistleblower, through the renowned journalist Ms. Sucheta Dalal, uncovered his investigation of a company which made frivolous and fake announcements on its official website and on social media. These announcements naturally caused unusual variations in the company's stock, which ranged from as low as Rs. 1 to as high as Rs. 11 within a very short span of time. On confirming that one of these announcements was fake, he reported it to SCORES, SEBI's system designed for reporting such scams. Various follow-ups were sent; however, his complaint was simply dismissed on the ground that he was not an investor in the firm (which has nothing to do with the need for authentic announcements), and that firm still operates in broad daylight. Ms. Sucheta Dalal reported that in India thousands of registered firms follow such malpractices to manipulate their stock prices, and she further said that if any AI systems have been put in place by the regulatory body, they are not working well: the market is being manipulated every day, looting several innocent investors. Not only are such acts not being detected; even when they are reported, no further investigation takes place.
• The incident highlights the need for an urgent and efficient AI framework in the equity sector, one which works in coherence with the social media giants and other regulatory bodies as well. The Indian regulator has also acknowledged that social media monitoring for market manipulation has become a global issue and that there is thus a need for immediate investment in it.

Case Study 6 (OECD, 2017): Collusion

• One can agree that there exist two types of agreements in collusion: 'explicit' and 'tacit'. The former refers to cases wherein there has been an oral or written anti-competitive agreement to collude; the latter to cases wherein anti-competitive coordination exists without the need for an explicit agreement. It has been highlighted that only 'explicit' agreements are illegal under traditional competition law rules. This is a shortfall, as the introduction of AI algorithms has blurred the line between 'explicit' and 'tacit' and requires a newer approach. Modern solutions include teaming up with the organisations using these algorithms to identify such cases. A checklist-type system might also need to be adopted to determine whether there has been a case of collusion. The importance of multi-dimensional algorithms and of policies shared between several agencies has been reiterated. Lastly, it is emphasised that the rules and policies required for regulation should be implemented only after due diligence and proper research, owing to their long-reaching effects.
• One issue stems from the fundamentals of a tacit agreement to collude: the algorithms used by high-frequency traders could end up colluding without any explicit agreement. Such a situation could arise under the following circumstances: different traders using the same program, different algorithms processing the same data, or different algorithms using very similar parameters. We can understand this by taking the example of high-frequency trading software used by two different firms drawing on the same financial records and analysing the same social media websites to gauge investor choice. Suppose that, perhaps due to their huge market share, the algorithms working for the two firms over time learn how their interdependence can lead to stock price manipulation; this could produce uncontrollable modern-day collusion. Such acts do not come under the scope of traditional anti-trust rules. This also highlights the importance of modern techniques and methods for market regulation. Moreover, it becomes tough to identify such cases and to gather the evidence required, as the relevant decisions result from complex algorithms.
Lastly, we can also apply the solutions recommended in the article, such as using AI algorithms to identify possible algorithmic collusion and then performing software-controlled account reviews to search for coordinated effects. Proposed Solutions 1. Proper Format for Social Media Announcements – As mentioned before, adding or deleting a few terms can heavily affect a post's virality. It therefore becomes essential for regulators around the world to lay down strict guidelines for such announcements. This will ensure that no post causes a sudden fluctuation in a stock merely because its discoverability was set higher by the algorithm. Such guidelines can prescribe fixed formats covering parameters such as captions, graphics, and the body of the post itself. 2. Real-time Fact-checking for Fake News – Prompted by several incidents, major social media platforms have now started to monitor their platforms for fake news. But if such posts stay live even for a short period, they can cause huge variation in the stock market because of high-frequency traders. Although this process will become faster with time, at present measures like setting a discoverability threshold for spiking posts (before they have been fact-checked) can also be helpful. Moreover, a probation period
for new accounts can be set so as to avoid bot-generated spam on the platform. Such an approach is followed by Reddit, which does not allow new users to perform certain tasks for the first few days after joining, for example through a simple limit on the maximum number of posts by an account (a practice now adopted by most social media organisations). Following these measures will curb spamming and, ultimately, market manipulation. However, combating the immediate response of high-frequency algorithms remains very difficult, as even a micro-second delay in fact-checking can have a huge after-effect; this is an area that still needs attention. 3. Combined Efforts to Detect Instances and Collect Evidence – As discussed before, the high complexity of AI algorithms makes it very difficult for regulators to monitor and detect such instances; it therefore becomes necessary to use AI-enabled technology for monitoring. Moreover, this needs to be a joint effort in whichever way possible: any suspicious act flagged by a social media platform's algorithm should also be notified immediately to the state regulators. This might seem naïve, since it touches many aspects such as the protection of users' personal data (it being difficult to differentiate between bots and actual human beings). But it has also become clear that such acts are challenging to spot, and even when they are detected, collecting evidence is a nightmare for the regulators. Thus, only a combined effort, if not between the agencies themselves then between the algorithms they use, can make this process much more efficient and curb such acts. 4. Frequently Changing Algorithm Preferences – Lastly, although almost all major social media organisations have started to change their discoverability preferences often, rules can be added for those that do not, so as to make compliance mandatory. 
In other words, algorithms prefer the patterns that result in maximum watch or use time, which can lead specific posts to be preferred over others. This pattern, if known, can be used to trick the algorithm. It therefore becomes essential for algorithms to keep changing such 'preferred patterns' so that manipulating them becomes virtually impossible. This can be introduced as a guideline for all social media organisations so that the problem of specific posts artificially going 'viral' is curbed.
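A minimal sketch of how the probation-period and discoverability-threshold ideas above could look in code (all names, thresholds and windows here are hypothetical, invented purely for illustration; no real platform's rules are implied):

```python
from datetime import timedelta

# Hypothetical parameters -- invented for illustration only.
PROBATION = timedelta(days=7)   # assumed probation window for new accounts
DAILY_POST_CAP = 5              # assumed daily post cap during probation
SPIKE_FACTOR = 10               # "spike" = 10x the account's usual reach

def may_post(account_age, posts_today):
    """Probation rule: brand-new accounts get a hard daily post limit."""
    if account_age < PROBATION:
        return posts_today < DAILY_POST_CAP
    return True

def capped_reach(shares_per_hour, baseline, fact_checked):
    """Discoverability rule: a post spiking far above its account's
    baseline has its reach frozen until it has been fact-checked."""
    if not fact_checked and shares_per_hour > SPIKE_FACTOR * baseline:
        return SPIKE_FACTOR * baseline   # hold the spike at the threshold
    return shares_per_hour

print(may_post(timedelta(days=2), 5))    # False: cap reached in probation
print(capped_reach(500.0, 10.0, False))  # 100.0: spike held for review
```

The design point is that both rules are cheap, deterministic checks that can run before a high-frequency trading algorithm ever sees the post, which is where the micro-second timing problem discussed above would otherwise bite.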

References 
1. Coffee, John C. 2017. Cheating the Algorithm: The New “Pump and Dump” Fraud. The CLS Blue Sky Blog. [Online] Columbia Law School, July 24, 2017. [Cited: March 15, 2021.] https://clsbluesky.law.columbia.edu/2017/07/24/cheating-the-algorithm-the-new-pump-and-dump-fraud/.
2. Cornelius, Doug. 2015. Twitter for Stock Manipulation. Compliance Building. [Online] Compliance Building, November 9, 2015. [Cited: March 8, 2021.] https://www.compliancebuilding.com/2015/11/09/twitter-for-stock-manipulation/.
3. Jayshree P. Upadhyay, Abhijit Ahaskar. 2020. Wary of social media, Sebi to get tech-savvy. LiveMint. [Online] Mint, January 24, 2020. [Cited: March 14, 2021.] https://www.livemint.com/companies/news/sebi-to-enhance-social-media-surveillance-11579766267051.html.
4. OECD. 2017. Algorithms and Collusion: Competition Policy in the Digital Age. s.l. : OECD, 2017.
5. Prince, Kate. 2014. Social Media and the Stock Market: A Lesson in Global Manipulation. Businesses Grow. [Online] [Cited: March 13, 2021.] https://businessesgrow.com/2014/08/14/publicly-traded-companies-wary-facebook-lesson-global-manipulation/.
6. Sucheta Dalal. 2019. Stock Manipulation Is Rampant & Unchecked: Here’s Why. MoneyLife. [Online] MoneyLife, December 25, 2019. [Cited: December 25, 2019.] https://www.moneylife.in/article/stock-manipulation-is-rampant-and-unchecked-heres-why/58990.html.
7. Kretinin, Andrey, Samuel, Jim and Kashyap, Rajiv. 2018. When the Going Gets Tough, The Tweets Get Going! An Exploratory Analysis of Tweets Sentiments in The Stock Market. American Journal of Management, Vol. 18, No. 5, 2018.



11 The Emerging Trends of Chinese Companies’ Usage of AI & the Ethical Trends

Nisarg Bhardwaj1

1 Research Intern (former), Indian Society of Artificial Intelligence and Law; research@isail.in

Abstract. This is a Case Study Paper submitted for the Indian Strategy on AI and Law Programme.

Introduction There has been an AI boom in the last couple of decades, one that has swept the whole world with it. This AI wave is evidenced by research directed at commercial applications, driven primarily by significant enhancements in computing power, policy formation by various countries, big and frequent investments, and the continually increasing presence of AI in users' lives. The AI wave has hit China on an even bigger scale: over 4,000 AI companies existed there in 2019, according to a report by Deloitte, making it the second largest hub of AI companies in the world. (Deloitte, 2019) In recent years, Beijing has laid out several plans aimed at turning China into a world leader in numerous technological areas, such as a 15-year blueprint termed “China Standards 2035”, whose objective is to outline plans for setting global standards for future technologies. In 2017, China declared its ambition to become a global leader in the field of AI by the year 2030, dubbed the 2030 vision. In 2015, Beijing inaugurated the “Made in China 2025” plan for the purpose of dominating global high-tech manufacturing. (Cher, 2020) Recent Developments in AI Ethics in China The Chinese State Council, the chief administrative body in China, in July 2017 released the ‘New Generation Artificial Intelligence Development Plan’ (AIDP), a unified document that outlines China’s AI policy objectives. The all-embracing focus of this policy, according to the State, is to make China the world centre of AI innovation by 2030 and to make AI the centrepiece of industrial upgrading and economic transformation. (Webster, et al., 2017)



The Plan shows that the Chinese state understands the importance of AI across a broad range of sectors, including defence and social welfare, and the need to develop standards and ethical norms for its use. All in all, the Plan outlines the strategy China will adopt in the coming years. (Webster, et al., 2017) Government organizations, semi-government organizations and even private companies were, after the release of the AIDP, slow to come up with ethical guidelines in this regard. (The Chinese approach to artificial intelligence: an analysis of policy, 2020) Since late 2019, however, there has been a surge in efforts to define the ethical principles surrounding AI operations. In March 2019, the Chinese Ministry of Science & Technology established the National New Generation Artificial Intelligence Governance Expert Committee. Later, in June 2019, this body came out with eight principles for the governance of AI. These principles aim to “promote the healthy development of a new generation of AI; better coordinate the relationship between development and governance; ensure that AI is safe/secure, reliable, and controllable; promote economically, socially, and ecologically sustainable development; and jointly build a community of common destiny for humanity.” (Laskai, et al., 2019) In line with these principles, a white paper on AI standards was published by the Standardization Administration of the People’s Republic of China, the national-level body responsible for developing technical standards. (The Big Data Security Standards Special Working Group, 2019) The white paper contains a lengthy discussion of the important issues of ethics and safety in relation to AI technology in China and in general, and puts forward three essential principles for the ethical working of AI technologies, to be adopted by the government as well as private sector companies.
The first is the principle of human interest, which states that the ultimate goal of AI has to be human welfare and that each AI technology developed should contribute to the benefit of humans. This principle reflects respect for human rights, the importance of the natural environment, and the need to reduce technological risks and negative impacts on society. It urges that policies and laws be devoted to preparing society for the development and proper reception of AI. In addition, the principle warns against AI systems making ethically biased decisions. For example, where universities use machine learning algorithms to assess admissions, and the historical admissions data used for training (intentionally or not) reflect some bias from previous admission procedures (such as gender discrimination), machine learning may exacerbate these biases during repeated calculations, creating a vicious cycle. If not corrected, such biases will persist in society.
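The vicious cycle described above can be made concrete with a toy sketch (all numbers and the "retraining" rule are hypothetical, invented purely for illustration): a model is retrained each year on the decisions it made the year before, so an initial gap between two applicant groups is never corrected and is instead pushed toward the extremes.

```python
def retrain(admit_rate):
    """Stand-in for ML training: learn to admit each group at roughly
    the rate seen in the historical data, nudged toward past behaviour
    (the rounding term models the system hardening its own decisions)."""
    return 0.5 * admit_rate + 0.5 * round(admit_rate)

# Biased historical admit rates for two hypothetical applicant groups.
rates = {"group_a": 0.6, "group_b": 0.3}
for year in range(10):   # each year's decisions become next year's data
    rates = {g: retrain(r) for g, r in rates.items()}

print(rates)  # group_a climbs toward 1.0, group_b sinks toward 0.0
```

The point is not the specific rule but the feedback: once the system's outputs become its own training data, the initial bias is self-reinforcing.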



Second is the principle of liability, which emphasizes that there should be accountability for actions relating to the development of AI. Accountability is required for both the development and the deployment of AI systems and solutions. According to this principle, a clear system of liability should be established for technology development as well as application, so that AI developers can be held accountable if something goes haywire at the level of the technology, and so that a reasonable system of liability and compensation exists at the application level. The principle of liability also encompasses the principle of transparency. Technology development in the area of AI should follow the principle of equal rights and responsibilities, and transparency in the operations of an AI technology is essential for predicting its future development. Operators should know why and how the technology worked in a certain way, which also helps in establishing and allocating liability. An example is an artificial neural network, an important topic in AI, where one needs to know why it produces a specific output. In addition, transparency of data sources is equally important: even when working with a data set that has no apparent problems, it is still possible to encounter prejudice hidden in the data. The principle of transparency also requires attention to the hazards associated with the collaboration of multiple AI systems when developing technologies. The third principle is the principle of consistency of rights and responsibilities, which imposes a two-way duty.
On one side, the principle asks that necessary business data be properly recorded, that the algorithms applied to the data be supervised, and that commercial applications be subject to reasonable review; on the other side, it states that commercial entities can use reasonable intellectual property protections to protect the parameters of the organization. However, the white paper stated that this principle of ‘consistency of rights and responsibilities’ has not yet been fully implemented in the field of AI applications, either by the business community or by the government in the practice of ethics. The reason, according to the white paper, is that engineers and design teams tend to ignore ethical issues in the development and production of AI products and services. In addition, the AI industry as a whole is not yet accustomed to a workflow that takes into consideration the needs of various stakeholders, and in AI-related industries the protection of trade secrets is not in balance with transparency. (Ding, et al., 2018) Ethical Principles developed by Multi-Stakeholder Groups & Companies Government-affiliated bodies and private companies have also developed their own AI ethics principles. Noteworthy examples of bodies discussing AI ethics in a formal, multi-stakeholder capacity are found in two sets of documents that
contain AI ethics principles. The first example is the Beijing Academy of Artificial Intelligence, a multi-stakeholder research and development body for AI established in November 2018 that includes China’s leading private companies and universities from Beijing. In May 2019, this organization released a document called the ‘Beijing AI Principles’, which is similar to the principles put forward by the AIDP Expert Committee. Although the contents of the Beijing AI Principles closely mirror international principles, they have some distinctive features. (The Chinese approach to artificial intelligence: an analysis of policy, 2020) The principles are discussed below. One of the drafters, Yi Zeng, has said that these principles are proposed as an initiative for the research, development, use, governance and long-term planning of AI, calling for its healthy development to support the construction of a community of common destiny, and the realization of AI beneficial to humankind and nature. (Bruce, 2019) The principles are divided into three categories: the first deals with research and development, the second with the use of AI technology, and the third with the governance of AI systems. (Bruce, 2019) The research and development (R&D) of AI should observe the following principles: 1. Do Good: AI technologies should be designed and developed to promote the progress of society and human civilization, to promote the sustainable development of nature and society, to benefit all humankind and the environment, and to improve the well-being of society and ecology. 2. For Humanity: The R&D of AI should serve humanity and conform to human values as well as the overall interests of humankind.
Values such as human privacy, dignity, freedom, autonomy, and rights should be sufficiently respected. This principle also states that AI technologies should not be used against, to exploit, or to harm human beings. 3. Be Responsible: Researchers and developers of AI should conduct research responsibly, keeping in mind the potential legal, ethical and social impacts and risks of their products, and take concrete actions to reduce and avoid them. 4. Control Risks: Consistent efforts should be made to improve the maturity, robustness, reliability, and controllability of AI systems, so as to ensure the security of the data, the safety and security of the AI system itself, and the safety of the external environment in which the AI system is deployed. 5. Be Ethical: AI R&D should make systems trustworthy by adopting ethical design approaches: measures such as, but not limited to, making the technology as fair as possible, reducing possible discrimination and biases, improving its transparency, explainability, and predictability, and making the technology more traceable, auditable and accountable.
6. Be Diverse and Inclusive: The development of AI should reflect diversity and inclusiveness. AI technologies should be designed to benefit as many people as possible, especially those who would otherwise be easily neglected or underrepresented in AI applications. 7. Open and Share: The avoidance of data and platform monopolies should be encouraged. Platforms should be encouraged to share the benefits of AI development to the greatest extent, and to promote equal development opportunities for different regions and industries. The next set of principles concerns the use of AI: 8. Use Wisely and Properly: People who use AI systems should have the necessary knowledge and ability to make the system operate as designed. They should be aware of the risks and impacts associated with the system, so as to avoid possible misuse and abuse, and to maximize its benefits and minimize its risks. 9. Informed Consent: Steps should be taken to ensure that people using AI have sufficiently informed consent about the impact of the technology on their rights and interests. If unexpected circumstances occur, reasonable data and service revocation mechanisms should be established so that users' own rights and interests are not infringed. 10. Education and Training: Stakeholders of AI systems should be able to receive education and training that helps them understand and adapt to the working of the system, as well as the psychological, emotional and technical impacts of AI development. The next set of principles deals with the governance of AI systems: 11. Optimizing Employment: Governance should take an inclusive attitude towards the potential impact of AI on human employment, and a cautious attitude towards the promotion of AI applications, given the impact AI technology can at times have on people.
Measures to explore human-AI coordination should be encouraged, as should new forms of work that give full play to human advantages and characteristics. 12. Harmony and Cooperation: Different stakeholders should actively cooperate to develop and establish an interdisciplinary, cross-domain, cross-sectoral, cross-organizational, cross-regional, global and comprehensive AI governance ecosystem, which might help to avoid a malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI under the philosophy of "Optimizing Symbiosis". 13. Adaptation and Moderation: Scope should be left for adaptive revisions of AI principles, policies, and regulations so that laws, regulations and policies grow hand in hand with the development of AI. Governance policies of AI
should match its state of development, to ensure that its benefits reach all of society and to avoid hindering its proper utilisation, which might happen if policies do not keep pace with development. 14. Subdivision and Implementation: Attention should also be given to the various fields and scenarios of AI applications before formulating more specific and detailed guidelines, and the implementation of these principles should be actively promoted through the whole life cycle of AI research, development, and application. 15. Long-term Planning: Apart from short-term goals, continuous and consistent research on the potential risks of augmented intelligence, artificial general intelligence (AGI) and superintelligence should be encouraged, and strategic decisions should be made in a way that ensures AI will always be beneficial to society and nature in the future. Another example of private sector groups establishing AI ethical principles is the Joint Pledge on AI Industry Self-Discipline by the Artificial Intelligence Industry Alliance. (Gal, 2020) The Artificial Intelligence Industry Alliance is an alliance formed by a group of Chinese tech companies, including Xiaomi and Megvii, for the purpose of expediting the development of the artificial intelligence sector in Beijing and Zhongguancun and providing “vigorous support to its rapid development.” (CBN Editor, 2020) The pledge released by the group was also backed by the Ministry of Industry and Information Technology. It is largely similar to other global principles and to the principles laid down in the white paper by the AIDP Expert Committee, though it has one point of notable distinction under Article 10: self-discipline and self-governance. Article 10 calls for a style of governance not often seen in China.
It calls on companies to ‘strengthen awareness of corporate social responsibility, integrate ethical principles into all aspects of AI-related activities and implement ethical reviews. Promote industry self-governance, formulate norms of behaviour for practitioners, and progressively build and strengthen industry supervision mechanisms.’ This suggestion of periodic ethical reviews, self-governance and supervision mechanisms points towards a significantly stronger degree of implementation, enforcement and oversight of AI ethical principles, in a form generally considered soft law. (Gal, 2020)

Approach by Private Sector Companies for Governance of AI As for the private sector's contribution to ethical principles for AI technologies, one of the most high-profile ethical frameworks has come from Pony Ma, CEO of the multi-billion-dollar corporation Tencent. Speaking at the World Artificial Intelligence Conference 2019, held in Shanghai, Mr. Ma said: “Global governance and cooperation is essential to the development of artificial
intelligence. The industry must find ways to overcome technological competition, trade disputes and geopolitical conflicts.” Tencent has released a document of ethical principles for the governance of AI called ARCC (Available, Reliable, Comprehensible, Controllable). In the words of Jason Si, Dean of the Tencent Research Institute, “Just as Noah’s Ark preserved the fire of human civilization, the healthy development of AI needs to be guaranteed by the ‘ethical ark’.” According to the ARCC principles: (Tencent Research Institute, 2020) 1. AI should be available – It should be ensured that AI is available to as many people as possible, to achieve inclusive and broadly shared development and avoid a technology gap. AI technologies should respect human values such as human dignity and the rights and freedoms of individuals. Algorithmic fairness should be maintained: AI algorithms should be reasonable, accurate, unbiased and representative, and steps should be taken to identify, solve and remove bias from them. 2. AI should be reliable – AI technologies should be built to be safe and reliable, capable of safeguarding against cyber-attacks and unintended consequences. AI systems should go through rigorous tests of their performance to ensure reasonable outcomes. To gain people's trust, and for people to rely on an AI technology as dependable, its makers should comply with privacy obligations and safeguard against data abuse. 3. AI should be comprehensible – AI systems should be made understandable to the public in order to gain their trust and encourage their use. Measures should be taken to promote algorithmic transparency and algorithmic audits, in order to achieve understandable and explainable AI systems. AI is not a singular technology; it encompasses various different segments and entities.
Efforts should be made to establish different transparency standards for different entities, and technical literacy should also be encouraged. An individual's right to know should be respected, and users should be provided with sufficient information about the working of the respective AI systems. 4. AI should be controllable – This, according to the document, is the most important principle for tech companies creating AI technologies. The system created should at all times remain within the control of its creators, or of humans generally, and should at no moment go beyond that control. This is needed to avoid endangering the interests of the individuals involved in making the technology as well as the people using it. Measures should be taken to ensure that the benefits of an AI system substantially outweigh its controllable risks, and that appropriate measures are taken to safeguard against the
risks by defining the boundaries of the AI system. The document also discussed other measures that governments could take to build people's trust in AI. Ethical principles are just the beginning; steps such as light-touch rules, like social conventions, should come next. After that, mandatory, government-backed rules should be made, and in some cases even criminal liability should be imposed to keep the use of AI systems in check. (Tencent Research Institute, 2020) In China, the major players in technology-related policy and governance debates are the chiefs of the triumvirate of tech companies collectively called BAT: Baidu, Alibaba and Tencent. The chiefs of these companies have a major say in technology-related decisions, and all three have spoken their minds on the governance of AI. The 2018 World Artificial Intelligence Conference (WAIC), which opened in Shanghai with the theme "New Era of Artificial Intelligence", provided the platform for the technological giants of China to deliberate on the governance of AI. The stage at WAIC gathered not only some of the most influential scientists in the field of AI, but also some of China's essential technology entrepreneurs. (CKN, 2018) Tencent's approach has already been discussed above. Another forerunner in promoting the ethical use of AI has been Baidu, a Chinese multinational technology company. Robin Li, the CEO of Baidu and also a national political advisor, supports the development of AI and believes that ethics is very important to its development. (CKN, 2018) Speaking at WAIC, Robin Li said that a true AI company possesses three qualities: AI thinking, AI ability, and the ethics of AI. As for the ethics of AI, its highest principle is to be ‘safe and controllable’.
AI technologies do not have to replace or surpass people; rather, the true value of a good AI system lies in helping people grow. A true AI company, according to him, is an AI company not just technically but also culturally, and an AI system's vision of innovation is to promote equal access to technological capability. Baidu has been in favor of government-backed ethical norms. According to Baidu chief Robin Li, “Only by establishing a sound set of ethical norms in AI and properly dealing with the new relationship between machines and humans, can we reap more benefits from AI.” Li spoke at the plenary meeting of the second session of the 13th Chinese People’s Political Consultative Conference (CPPCC) National Committee. (China Daily, 2019) In 2018, Baidu became the first Chinese company to join a US-led group, the Partnership on AI to Benefit People and Society (PAI), which conducts studies to formulate best industry practices for the use of artificial intelligence technologies. The group includes big US tech companies such as Google (Alphabet), Apple and Facebook. (WARC staff, 2018) However, in 2020, after the deterioration of US-China relations, Baidu left the research group. (Daws, 2020)
The third tech chief is Jack Ma, founder of Alibaba. Speaking at WAIC 2018, he said that the AI technology revolution has great potential and much can be expected from it. It is disruptive in nature: it is not only a technology, but a way of understanding the external world, the future world and the human body, and of redefining our thinking. AI technology will transform the manufacturing industry into a service industry. However, if such technologies are not promoted towards a greener and more inclusive society, disruptive technologies such as AI, blockchain and the Internet of Things (IoT) would be worthless. Alibaba has been wary of the issues associated with emerging AI technologies. (Alibaba clouder, 2017) The ethical issues associated with AI as visualized by Alibaba are: 1. Unemployment – recent developments in the AI technology realm point towards a fully AI-enabled world; such advancements raise the question of whether or not artificial intelligence will create a surge in unemployment. 2. Wealth gap – if the productivity improvement predicted from AI technologies does not translate into salary increases, prosperity for workers and consumers will take a hit, industry competition will be reduced, and the gap of wealth inequality will broaden. 3. Rights of robots – there have been recent international moves towards “electronic personhood”, which would include rights for AI and robots; once AI is integrated with consciousness, the question of how to treat such machines will arise. 4. Car accidents – a very specific ethical issue visualized by Alibaba is what happens when a self-driving car injures a pedestrian. (Alibaba clouder, 2017)

Comparison between China and other countries regarding AI ethical principles and governance technology development A comparison of China's ongoing efforts with those of other countries in developing ethical principles and governance technology for AI is necessary to put China's efforts in perspective. From the discussion above, it can be seen that in China both the government and private sector companies have been vocal about building ethical principles for AI technologies and about promoting AI for the betterment of people and society. As far as academic research and industrial development in relation to AI are concerned, Chinese researchers and AI developers have been actively developing technologies alongside their international peers. (Ethical Principles and Governance Technology Development of AI in, 2020)



Governmental & Institutional Comparison towards AI governance Jurisdictions around the world have released ethical guidelines for the regulation of AI. In the EU, the General Data Protection Regulation (GDPR) came into force in 2018, and in April 2019 the EU-formed High-Level Expert Group on AI released its ethics guidelines for trustworthy AI. (European Comission, 2019) In the US, in late 2020 the White House issued an executive order on maintaining American leadership in AI and promoting the use of trustworthy artificial intelligence in the Federal Government, and directed the National Institute of Standards and Technology (NIST) to draw up a plan to develop ethical and technical standards for reliable, comprehensible and trustworthy AI systems. (White House, 2020) Along with the United States and the EU, China too has been a global forerunner in setting up government bodies for nationwide AI governance and ethical initiatives, as discussed above. The UN is also promoting the use of trustworthy AI, as at the United Nations Educational, Scientific and Cultural Organisation (UNESCO) AI conference held in March 2019, which focused mainly on the integration of AI with human values for sustainable development and a greener future. In spite of these national initiatives, no multinational, multilateral joint action has been taken up by the world leaders in AI. Among private sector companies, western tech giants have made efforts towards sound and ethical governance of artificial intelligence technologies, both domestically and internationally. In 2016, major US tech giants including Alphabet Inc., Amazon, Apple, Microsoft and Facebook came together to form the Partnership on AI (PAI), to devise best industry practices for the creation and use of AI. Google, however, has recently faced problems with AI ethics at Alphabet Inc., its parent company. In December 2020, Alphabet Inc.,
Google’s parent company, fired Timnit Gebru, a celebrated artificial intelligence researcher and one of the most accomplished members of her field. Since December 2020, Google has also fired other prominent members of its ethics committee, the founder of the AI ethics committee, Margaret Mitchell. Google has been the supporter of trustworthy and ethical AI but the recent actions put a question mark over the ethical approach Google is taking towards forming ethical principles for AI. (Silverman, 2021) Academic research and industrial development perspective In this section, we’ll discuss various areas relevant to AI academic research and compare the ethical and industrial development between China and western companies. In the area of data security and privacy, WeBank’s, a Chinese private bank, FATE is one of the major open-source projects for federated AI ecosystem. FATE is the only project in China which supports distributed federal learning among other open-source projects in China, in comparison with Google’s



TensorFlow Federated. Federated learning enhances data security by training a shared model on decentralized data that never leaves its source; only model updates, not the raw data, are exchanged. (Ethical Principles and Governance Technology Development of AI in, 2020) With regard to the safety and robustness of AI, research around the world has tried to address issues such as the vulnerability of deep neural networks (DNNs). Among these efforts, new algorithms developed by Chinese researchers have demonstrated excellent performance in adversarial testing and defense. (Ethical Principles and Governance Technology Development of AI in, 2020) An example of international cooperation in this field is Baidu, which has worked with researchers from the University of Michigan and the University of Illinois at Urbana-Champaign to study the vulnerabilities of DNNs adopted in LiDAR-based autonomous-driving detection systems. (Adversarial Objects Against LiDAR-Based Autonomous, 2019) With regard to transparency and accountability of AI, both Chinese researchers and Western tech companies have made efforts. Researchers from Chinese private-sector companies such as Baidu and Alibaba have repeatedly proposed new interpretation methods and visualization tools for machine learning. Western companies such as Facebook, IBM and Microsoft have lately released AI explainability tools, which help implement general frameworks for AI interpretation. A recent example is AI Explainability 360 by IBM, an open-source toolkit that integrates eight different AI interpretation methods and two evaluation metrics. (Mojsilovic, 2019) However, such efforts have not yet been seen from their Chinese counterparts, and Chinese companies too should build such platforms and open-source software to enhance public trust in emerging AI technologies.
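The federated learning approach mentioned above can be illustrated in miniature. The sketch below is a toy federated-averaging loop under assumed conditions (synthetic data, a one-parameter linear model, hypothetical client setup); production frameworks such as WeBank's FATE or Google's TensorFlow Federated add secure aggregation, encryption and far richer models, but the core idea is the same: clients train locally and share only model parameters, never raw data.

```python
import random

# Hypothetical setup: three clients each hold private data for the model
# y = w * x.  Raw data never leaves a client; only model weights are shared.
random.seed(0)
TRUE_W = 3.0
clients = []
for _ in range(3):
    xs = [random.uniform(-1, 1) for _ in range(50)]
    ys = [TRUE_W * x + random.gauss(0, 0.1) for x in xs]
    clients.append((xs, ys))

def local_update(w, xs, ys, lr=0.1, steps=20):
    """One client's local gradient-descent steps on its own private data."""
    n = len(xs)
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

w_global = 0.0
for _ in range(10):  # communication rounds
    # Each client trains locally, starting from the shared global model...
    local_ws = [local_update(w_global, xs, ys) for xs, ys in clients]
    # ...and the server averages the returned weights (federated averaging).
    # The server never sees the raw xs or ys.
    w_global = sum(local_ws) / len(local_ws)

print(f"learned weight: {w_global:.2f}")  # converges near the true weight 3.0
```

Because only the averaged weights cross the network, the clients' underlying records stay local, which is the privacy property the text attributes to FATE and TensorFlow Federated.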

Conclusions

The Chinese government and private-sector companies have been both vocal and active in the governance of AI technologies. In several instances the Chinese players have outdone their Western counterparts in establishing sincere AI ethical principles, and China has launched nationwide initiatives for AI governance and ethics. However, China's AI ethics initiatives need to be seen in terms of the country's culture, ideology, and public opinion. China is a country where it can be perceived that societal good quite often takes precedence over individual rights, echoing the involvement of Confucian ethics. In this scenario, it will be interesting to see how highly humanistic ethical principles play out in China. For these principles to be implemented more sincerely, greater collaboration at the international level between the national players is necessary.

References

1. Adversarial Objects Against LiDAR-Based Autonomous. Cao, Yulong and Li, Bo. 2019. s.l.: Cornell Review, 2019.
2. Alibaba Clouder. 2017. The 4 ethical issues in AI we're all thinking about. alibabacloud.com. [Online] March 30, 2017. [Cited: April 10, 2021.] https://www.alibabacloud.com/blog/the-4-ethical-issues-in-ai-were-all-thinking-about_72886.
3. Sterling, Bruce. 2019. The Beijing Artificial Intelligence Principles. wired.com. [Online] June 1, 2019. [Cited: April 09, 2021.] https://www.wired.com/beyond-the-beyond/2019/06/beijing-artificial-intelligence-principles/.
4. CBN Editor. 2020. Beijing Officially Unveils Artificial Intelligence Industry Alliance in Zhongguancun. chinabankingnews.com. [Online] September 1, 2020. [Cited: April 10, 2021.] https://www.chinabankingnews.com/2020/09/01/beijing-officially-unveils-artificial-intelligence-alliance-in-zhongguancun/.
5. Cher, Audrey. 2020. ‘Superpower marathon’: U.S. may lead China in tech right now — but Beijing has the strength to catch up. cnbc.com. [Online] May 17, 2020. [Cited: April 08, 2021.] https://www.cnbc.com/2020/05/18/us-china-tech-race-beijing-has-strength-to-catch-up-with-us-lead.html.
6. China Daily. 2019. Baidu CEO wants ethics research in AI strengthened. chinadaily.com.cn. [Online] March 09, 2019. [Cited: April 10, 2021.] https://www.chinadaily.com.cn/a/201903/10/WS5c851998a3106c65c34edc5b.html.
7. CKN. 2018. Shanghai 2018 World Artificial Intelligence Conference kicks off, China’s AI biggest players surfaced. chinaknowledge.com. [Online] September 18, 2018. [Cited: April 10, 2021.] https://www.chinaknowledge.com/News/DetailNews?id=80279.
8. Daws, Ryan. 2020. Baidu ends participation in AI alliance as US-China relations deteriorate. artificialintelligence-news.com. [Online] June 19, 2020. [Cited: April 10, 2021.] https://artificialintelligence-news.com/2020/06/19/baidu-ai-alliance-us-china-relations-deteriorate/.
9. Deloitte. 2019. Global Artificial Intelligence Industry White Paper. s.l.: Deloitte, 2019.
10. Ding, Jeffrey and Triolo, Paul. 2018. Translation: Excerpts from China’s ‘White Paper on Artificial Intelligence Standardization’. newamerica.org. [Online] June 20, 2018. [Cited: April 09, 2021.] https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-excerpts-chinas-white-paper-artificial-intelligence-standardization/.
11. Ethical Principles and Governance Technology Development of AI in China. Wu, Wenjun, Huang, Tiejun and Gong, Ke. 2020. s.l.: Artificial Intelligence Review, 2020, Vol. VI.
12. European Commission. 2019. Ethics guidelines for trustworthy AI. digital-strategy.ec.europa.eu. [Online] April 08, 2019. [Cited: April 11, 2021.] https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
13. Gal, Danit. 2020. Mapping AI ethics discussions in China. nesta.org.uk. [Online] May 18, 2020. [Cited: April 10, 2021.] https://www.nesta.org.uk/report/chinas-approach-to-ai-ethics/mapping-official-ai-ethics-discussions-china/.
14. Laskai, Lorand and Webster, Graham. 2019. Translation: Chinese Expert Group Offers 'Governance Principles' for 'Responsible AI'. newamerica.org. [Online] July 27, 2019. [Cited: April 09, 2021.] https://perma.cc/V9FL-H6J7.
15. Mojsilovic, Aleksandra. 2019. Introducing AI Explainability 360. ibm.com. [Online] August 8, 2019. [Cited: April 11, 2021.] https://www.ibm.com/blogs/research/2019/08/ai-explainability-360/.
16. Silverman, Jacob. 2021. The Sad Implosion of Google’s Ethical A.I. newrepublic.com. [Online] March 9, 2021. [Cited: April 11, 2021.] https://newrepublic.com/article/161629/sad-implosion-googles-ethical-ai.
17. Tencent Research Institute. 2020. “ARCC”: An Ethical Framework for Artificial Intelligence. tisi.org. [Online] April 09, 2020. [Cited: April 10, 2021.] https://www.tisi.org/13747.
18. The Big Data Security Standards Special Working Group. 2019. Artificial Intelligence Security Standardization White Paper. s.l.: China Electronics Standardization Institute, 2019.
19. The Chinese approach to artificial intelligence: an analysis of policy. Roberts, Huw, Cowls, Josh and Morley, Jessica. 2020. s.l.: AI & Society, 2020, Vol. 36.
20. WARC staff. 2018. Baidu is first Chinese firm to join global AI ethics body. warc.com. [Online] October 19, 2018. [Cited: April 10, 2021.] https://www.warc.com/newsandopinion/news/baidu-is-first-chinese-firm-to-join-global-ai-ethics-body/41207.
21. Webster, Graham, et al. 2017. Full Translation: China's 'New Generation Artificial Intelligence Development Plan' (2017). newamerica.org. [Online] August 1, 2017. [Cited: April 08, 2021.] https://www.newamerica.org/cybersecurity-initiative/digichina/blog/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/.
22. White House. 2020. Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government. trumpwhitehouse.archives.gov. [Online] December 3, 2020. [Cited: April 11, 2021.] https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-promoting-use-trustworthy-artificial-intelligence-federal-government/.



12

Artificial Intelligence and Criminology: Prospects and Problems

Shiwang Utkarsh1

1 Research Intern (former), Indian Society of Artificial Intelligence and Law; research@isail.in

Abstract. The confluence of AI and the social sciences carries marvelous potential. Artificial intelligence can be utilized in the field of criminology and other forensic sciences with great efficiency. This work discusses methods that can be used to make an AI-based system develop an understanding of society. A sociological perspective of the criminal justice system is delineated to discuss the scope of AI therein. We can say that crime is an act incoherent with the norms of society. Keeping this idea at the core, the paper describes various possibilities of using AI for crime etiology. The knowledge of societal norms for detecting deviance is central to the methods given. Further, the fallacies and implementation challenges are elucidated to examine the practicality of this approach.

Introduction

The rapidity of advancements in science and technology is a remarkable characteristic of contemporary times. With the advent of AI and machine learning, mankind is making leaps towards a radically different future. The passage of time adds to the complexity of a society; this, in turn, results in an increase in uncertainty, disorder, and entropy. A rise in criminal activity is one of its detrimental symptoms. The discipline of criminology deals at its roots with the nature of crime and its contributing factors. The methods of crime detection, forecasting, and prevention need a technological revamp because of the perplexing nature of present-day crimes. The use of AI-based machines designed to assist criminologists and forensic scientists is not yet a prevalent reality. Methods like facial recognition, biometric identification, determination of crime-prone areas, and the like are being employed at an experimental level. Expanding such practices, along with more advanced techniques, calls for more research and field work. Making an AI understand the nuances of sociology is a prerequisite for developing a system that deals with crime. Concepts of deviance, law, and the criminal justice



system must be integrated into the concerned AI. On top of this, a large amount of social and individual data is needed to operate an artificially intelligent system at such a scale. Only after these demands are met can questions of law, ethics, and morality be tackled. Discussed further are the methods that a deep-learning AI can utilize to aid in the surveillance and well-being of society. The possibilities of crime detection, investigation, and trial are covered as well. The discussion then deals with the fallacies and challenges pertaining to the use of AI in the social and criminal sphere.

Nature of Society and Crime

This thing, what is it in itself, in its own constitution? What is its substance and material? And what its causal nature (or form)? And what is it doing in the world? And how long does it subsist?
- Marcus Aurelius (121-180 AD)91

91 (Long, 1994) Marcus Aurelius was a Roman Emperor and a Stoic philosopher. The quote provides a logical approach to understanding the nature of things; the concepts of society and crime are explained here along its lines.

Criminology begins with the question, “What is the nature of crime in itself?” To answer this, one must observe keenly how society forms and functions. As per the Social Contract Theory, advocated by John Locke, Jean Jacques Rousseau and Immanuel Kant, society is formed by members coming together for mutual benefit (Cudd, 2018), which enhances their collective longevity and quality of life. Above all, a society needs to survive; for this, it requires a basic structure that ensures order and peace within. Anything that may threaten the order and structure of society is deemed unwanted. The usual functioning of society is based on norms, which in turn rest on a multitude of factors like ethics, morality, convenience, and beliefs. These norms give members a general idea of how to function and maintain themselves in society; that is to say, they seek to ensure order and tranquility. What the violation of norms creates is incoherence, and this incoherence disturbs the natural order of society. The problem arises when an individual finds himself in conflict with the norms of society, whether unknowingly or deliberately. What matters here is the reason that led to the conflict. The underlying reason beneath a criminal act or a display of deviant behavior consists of sociological and psychological factors, and this is where the discipline of criminology primarily operates. We can define crime as an act incoherent with the norms of society and thus resulting in a disturbance of order. To ensure that order prevails, society creates a regulating mechanism for itself: the criminal justice system. Laws are



created in accordance with the norms, and their violation is dealt with by the criminal justice system. Laws are based upon the idea of justice; they seek to ensure safety, equity, and reasonableness among those they govern. Law is an instrument of the criminal justice system used to define norms at the base level. Norms are the background data against which anomalies are detected through juxtaposition (Kahneman, 2011): if one is aware of a set of norms, anything that opposes them immediately comes to notice. This implies that knowledge of norms is a prerequisite for an understanding of law and society. In case an anomaly arises, there are two possibilities. The first is that the anomaly is removed or subverted; the second is that it is accommodated into the set. It is imperative that the right choice is made here, as the two possibilities are incompatible. This is where the understanding of justice comes into play. Ideas of morality and statutory law are applied in making this choice, and thereafter the decision to punish or acquit is made. Penalization is the method of subverting the anomaly, whereas acquittal is the consequence of its accommodation. Described above is the nature of crime and a sociological view of the criminal justice system. Present-day society has grown exceedingly complex, creating a pressing need for clear laws and a strong sense of justice. We already have a plethora of statutes and an elaborate judiciary in place; the matter of concern is the relevance of the laws. Society has changed drastically since the laws that govern it were written, and its set of norms is undergoing constant change. The postmodern era is marked by the extreme peculiarity of its events. The concept of deviance and its causes are not yet properly ingrained into the system. Law reform in the wake of a major unfortunate event is not a rare sight, but reform that considers the factors leading to deviance among citizens is far less common.

AI, Norms, Deviance and the Reduction of Crime

At this point we know that norms are imperative for society to maintain itself, and humans know they are required to abide by norms to continue their peaceful existence. The challenge is whether an AI can develop the same, or at least a similar, understanding, and, if it can, what the utility of such an AI would be. These problems are addressed below. Deep learning is the process by which an AI develops itself in a largely self-sufficient manner, using algorithms loosely modeled on the brain's neural networks. Here, the AI exercises the methodology of trial and error at a grand scale. The machine is given a large amount of unlabeled data as input and makes use of logic and memory to identify it; this is how it learns about the quality and quantity of the input. With time and repetition the machine gets more efficient at identifying and labelling data. Thereafter, the data is passed through another algorithm that is part of the same artificial neural network (ANN). Here,



the data is processed to produce desired outcomes through analysis, comparison, and calculation in accordance with the given commands. This type of functioning requires great amounts of data and a strong feedback mechanism. Now, if the AI is given ample data regarding society and its members' behavior, it can produce striking outputs. Using deep learning and specially designed ANNs, the AI will be able to draw conclusions regarding general human behavior. For example, by going through the local travel data of an individual over a month, an AI can determine his favorite destinations and predict his movement, and it can work out similar conclusions, with fair accuracy, for people of the same age group living in the same area. So, the more data the AI is given, qualitatively and quantitatively, the more ways in which it can determine the general behavior of a set of individuals, and with greater accuracy. Once the AI produces observations and conclusions about the set of individuals with consistent accuracy, it can move on to the next step: attaining an understanding of the physiology and psychology of an average individual by using relevant data. This can be done in an “IF-THEN-ELSE” manner. For example: “IF a person drives to a restaurant, THEN he must be hungry; ELSE he must be getting food for some other person.” Such data may include a number of “ELSE” clauses, as there are many possibilities, like going to a restaurant to use the restroom, attending a meeting, or something completely unpredictable. Through deep learning, this type of structured data is formulated by the AI on its own after processing raw input; external feeding is not required. The unpredictable causes can be dealt with by an exception-handling mechanism that lists the unusual causes it comes across, to be referred to as and when required.
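The “IF-THEN-ELSE” structure and the exception-handling list described above can be sketched as follows. Everything here, the rule, the observation fields, and the function name, is a hypothetical illustration rather than a real criminological system:

```python
# A toy version of the "IF-THEN-ELSE" structured data described above,
# together with the exception list for unpredictable causes.

def infer_motivation(observation, exceptions):
    """Return a guessed motivation for an observed action."""
    action = observation.get("action")
    if action == "drives_to_restaurant":
        if observation.get("ordered_for_self", True):
            return "hungry"                      # IF ... THEN
        return "getting_food_for_someone_else"   # ELSE
    # Unpredictable cause: record it for later review instead of guessing.
    exceptions.append(observation)
    return "unknown"

unusual_causes = []  # the exception-handling list

print(infer_motivation({"action": "drives_to_restaurant"}, unusual_causes))
# -> hungry
print(infer_motivation({"action": "uses_restroom_only"}, unusual_causes))
# -> unknown (and the case is stored in unusual_causes for review)
print(len(unusual_causes))
# -> 1
```

In the text's scheme such rules would be induced by the system itself from raw data rather than hand-written; the exception list plays the role of the mechanism that records causes the rules cannot yet explain.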
However, the algorithms may have some physiological and psychological aspects coded into them from the start, with the ANNs working to connect the dots with accuracy and efficiency. The physiological and psychological data can again be used to further refine the structured data and add to its value. For example, if the speed of a person's vehicle is greater than usual, then he may be starving, stressed, or in a hurry; these possibilities describe three motivations, among numerous others, that are physiological, psychological, and circumstantial, respectively. Ultimately, this is one method by which an AI can begin to understand humans. In furtherance of this understanding, the AI can then move on to data concerning a larger number of individuals; this is how it will start to develop an understanding of a specific society. On this basis, the AI will be able to make fairly accurate predictions and conclusions regarding the members and the collective. After attaining this level of intelligence, the AI will require an understanding of complex academic works to further improve itself. Only after this stage can an AI readily recognize the norms of a society. Statistics play a very important role here: curves are a strong method of determining trends, and, fortunately, the efficient use of statistics is an easy task for an



AI given its tremendous processing power.

3.1 Identification of Deviance

Now that the norms are determined and the AI is equipped with knowledge of the specific society, the final task can begin: understanding and detecting deviance. The accumulated data will be used as a background for the detection of anomalies in individual behavior (Kahneman, 2011). Note that the background data will constantly undergo minor changes; as long as the AI maintains the authenticity of this background data, it can efficiently detect deviant tendencies. Once this determination is made, a plethora of possibilities present themselves. The understanding of deviance can aid in crime forecasting and prevention, reformation of the deviant before he indulges in crime, and the suggestion of measures aimed at preventing the origination of deviance and curbing its contributing factors. A second set of tasks follows once the AI successfully detects deviance. These tasks involve human cooperation with the machines and their output. Once an AI gives out a report on suspicious behavior in an area, the authorities are required to make a confirmation. This simple yet imperative intervention will save many people from unnecessary trouble, given the margin of error in the report. Note that this will not burden the officials; rather, it will save considerable resources, as the crime is avoided altogether instead of being dealt with after commission. Moreover, such oversight is a strong requirement, as society is not yet prepared to accept the use of AI at such a grand level. The combined efforts of man and machine will lend legitimacy to the whole process and make it more acceptable to society. It will naturally act as a system of checks and balances, with the final call resting with the wisdom and conscience of man.

3.2 A Subtle Approach

Gathering large amounts of genuinely useful data is a tough task.
Convincing people, even by law, to consent to the use of their data is not easy either. Data on travel, transactions, health, and even shopping is considered sensitive by the public, given the degree of threat that comes with it. So, a surveillance system or any other mechanism that continuously collects and processes such data will encounter strong resistance from citizens. One way around this issue is to delimit the operation of AI-based analysis to a specific sphere rather than a universal one. The government can legislate an act that prescribes the use of such AI only in certain cases. Consider a person suspected of involvement in dubious activities, where the suspicion is based on the usual surveillance systems or the usual investigation process. The law enforcement agencies may then make use of AI-based criminological techniques after obtaining a warrant from a court. This system of warrants will legitimize the use of the personal data of such individuals in cases like this, and it will facilitate the smooth induction of AI for general supervision once such methods become prevalent with time and usage. Using this system of warrants or permits will be more accepted by the



citizens, as the involvement of machines is duly sanctioned by the judiciary rather than used incessantly and authoritatively by the executive. The same approach may be used for investigating suspects after a crime has been committed. This will help in the easy and accurate identification of perpetrators and will save the agencies time and resources. Hence, such methods will speed up trials and add to the efficiency of the criminal justice system because of the stronger evidence produced by the machine. Detecting white-collar crimes, embezzlement, and corruption, and tracking fugitives and prison escapees, will become considerably easier. Not only will it aid in the efficiency of detection, redressal, and recapture, but it will also act as a strong deterrent, given the increased chances of apprehension. Adding to the difficulty and repercussions of a crime, or making it less inviting in other ways, acts as a strong social control and is achievable by the use of technology (Technology and Social Control, 2015).

3.3 Investigation

When the detection of deviant or suspicious behavior becomes easy, a plethora of possibilities arise. Often, law enforcement agencies have a strong sense of suspicion about a person but not enough evidence to make an arrest, so officials resort to close observation of the person's activities to collect proof and build a case. If AI-based criminological systems become a reality, the police can apply for a warrant to collect his data and use the AI. Such a warrant will be easier to obtain than an arrest warrant, given the lesser gravity of the executive action involved. However, it is incumbent on the authorities to protect the data from leakage or exploitation, and the confidentiality of the AI reports must be maintained carefully. The results of such investigations can be quite positive.
If the suspicion of the authorities is correct, it may reveal a person's connections with terrorist outfits, anti-national groups, violent syndicates, and the like. An in-depth AI-based analysis of a person's movement and shopping data alone can reveal a lot about him: what a person buys and where he visits can indicate what he may be planning to do. With real-time alerting systems, the authorities can be informed immediately if a person buys ingredients used in making an IED. Developing machine-learning methods for identifying terrorists and extremists using academic knowledge is feasible. There are certain characteristic themes of a militant-extremist mindset observed in such persons, and they can be detected by assessment models that analyze the likelihood of a person engaging in terrorist activities (Assessing risk for terrorism involvement, 2015). These methods exist conceptually and are logically sound; it is therefore quite possible to reproduce them in a virtual environment and use real-time data to determine risks. The laws of preventive detention will find more application and gain a broader scope. It is fairly possible that Article 22 of the Indian Constitution will be



expanded to include provisions regarding the detention of individuals based on suspicion or threat reports produced by an AI. Obviously, such reports will require the support of a physical investigation in some cases. However, the actual implementation of such a method requires an exhaustive reformation of IT laws, and the right to privacy will require a redefinition. The question of whether a fundamental right can be compromised on the basis of mere suspicion will arise; this is discussed further in the legal challenges section of this paper.

Crime-scene Simulation

A camera powered by augmented reality and AI can be designed for the purpose of re-creating crime scenes. Deep-learning methods can be used to recognize blood-splash patterns, broken or misplaced objects, weapons, fingerprints, and the like, and to collectively analyze the inputs. This can be used to develop a model of the crime scene as it was before the commission. The technique can be further developed to create computer-simulated versions of the actual crime that took place. These tasks are usually performed by police officers or crime-scene specialists and are time-consuming; with the advantage of automation and high processing speed, AI can achieve similar results almost instantaneously.

Possible Fallacies

As the functioning of an intelligent machine depends on the interaction between algorithms and data, fallacies in either will result in an incorrect output. On this basis, technical fallacies can be broadly classified into two categories:
1. Algorithmic Fallacy
2. Input Data Fallacy
Man's reluctance to accept new ideas and big changes leads him to develop a bias against machines; this is discussed as the third fallacy.

4.1 Algorithmic Fallacy

Say the algorithm, or a function thereof, has a bug. This bug will reflect in all the results that the algorithm produces.
As long as the developers or users know of the bug's existence, the problem can be rectified and the corrupt outputs corrected as well. The problem rises to a concerning stage when there is no knowledge whatsoever of a bug in the code, for the output is then considered legitimate and actionable. The gravity of the mishap that could occur in such a case is tremendous: an erroneous code can knock over the first domino in a chain whose last one causes a man to be incarcerated. Let us consider an example of this algorithmic fallacy. Say a function f(x) takes ‘x’ as input data and processes it to produce output ‘y’. If there is a bug in f(x) which causes it to give ‘z’ as a result when ‘x’ is given as input, there exists a



fallacy in the function. (Consider that ‘z’ is the output when ‘w’ is given as an input.) Now, if y and z are easily distinguishable, then detection of the bug is easy; knowing the nature of the input is also required. Otherwise, it will be tough to find the bug in f(x), and the repercussions may be catastrophic. If output y, in the long run, leads to a person being deemed innocent, and z means that the person may be involved in suspicious activity, it is not hard to imagine the gravity of this fallacy. The fallacy in output can be dealt with primarily in two ways. First is the technical solution, which involves constant debugging of code, sandbox testing, regular maintenance checks of algorithms and code structures, and cross-checking by different developers. Also, if the algorithm makes use of concepts from a particular discipline, professionals of the concerned discipline can be made to verify the logic being used. Second is the general solution, which involves the use of man's physical and mental faculties to verify the outputs, detect and examine functional absurdities, and make best use of his wisdom. This, however, will be a tough task and would require a specific set of carefully constructed protocols on top of skills, knowledge, and expertise.

4.2 Input Data Fallacy

The second fallacy may arise if the input data itself is corrupt. Given that identification and rectification of such data lie beyond the scope of the AI's deep-learning curve at that point in time, this may produce results similar to those of an algorithmic fallacy. The only difference is that the function f(x) is correct but the input it receives is not. Considering the above example, if the corrupt input ‘w’ cannot be distinguished from ‘x’ and is processed by f(x), the result will be ‘z’ when ‘y’ was expected; the function may also produce an absurd or unexpected result.
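The f(x) example of the two fallacies can be made concrete. In the hypothetical sketch below, the intended behaviour is squaring, the bug produces the wrong output ‘z’ for part of the input space, and a data-type filter rejects corrupt input before it reaches the function:

```python
# Hypothetical illustration of the two fallacies discussed above.

def f_buggy(x):
    """Intended behaviour: return x squared (the correct output 'y').
    Algorithmic fallacy: for negative inputs the wrong branch runs and
    the corrupt output 'z' (x cubed) is produced instead."""
    if x < 0:
        return x ** 3      # the bug
    return x ** 2

def validated(x):
    """A 'filter' between input and algorithm: reject corrupt data
    (anything that is not a real number) before processing begins."""
    if isinstance(x, bool) or not isinstance(x, (int, float)):
        raise ValueError(f"corrupt input rejected: {x!r}")
    return x

# Sandbox testing exposes the algorithmic fallacy only where we know
# what 'y' should be for a given 'x':
assert f_buggy(2) == 4       # passes; the bug stays invisible here
assert f_buggy(-2) == -8     # corrupt 'z' where 'y' (4) was expected

# Input-data fallacy: without the filter, the string "-2" would reach
# the function silently; with it, the corrupt input never gets that far.
try:
    f_buggy(validated("-2"))
except ValueError as err:
    print(err)               # prints: corrupt input rejected: '-2'
```

This also shows why the detection problem described above is hard: the test with input 2 passes even though the function is broken, so the bug is found only when the tested inputs happen to cover the faulty branch.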
This fallacy can be detected either if the AI learns to identify corrupt input, or if such identification is made by the developer or user and fed into the machine. The repercussions are similar to those mentioned above. The input data fallacy can be rectified in only two ways. The first is that the AI learns to identify corrupt data and deal with it accordingly. The second is to add a separate filter between the input and the algorithm that checks the authenticity of the data. One example of such a “filter” is the data type constraint used by programming languages when receiving inputs. This method ensures that corrupt or invalid data does not even enter the processing stage, and thus saves resources. Scaling up this method of using constraints is a viable tactic and will also help the AI develop an exception handling mechanism for itself.

4.3 Bias Against Machines

The functioning of an AI in the real world is not purely technical. As we are trying to develop a system that simulates a human brain, and which will ultimately compete against an actual human, complications are bound to arise. In any given field, conflicts between an AI and an experienced professional over ideas, outputs and conventions are highly likely. Operating under the same roof and dealing with the same problem may result in incoherence for the reasons discussed below.

Let us say an AI and an experienced professional are dealing with the same problem. As both give out their outputs and conclusions regarding the issue, a discrepancy is observed: the AI and the professional have proposed two different approaches. A choice has to be made about which approach to follow, considering the efficiency and viability of both. The professional argues that his method is better, given its success rate in previous cases; the method given by the AI is one never seen before. But given that the machine was developed for this specific purpose and has also been tried and tested earlier, the choice of whom to follow is a tough one. Assume that the two approaches are far apart and cannot be reconciled into one. In this case, if the problem is of a grand scale and attracts serious consequences, the authorities responsible for making the choice may incline towards the opinion of an experienced human and disregard that of a newly made machine.

This decision could well be the wrong one, though. Say that, in the process of dealing with the problem, the AI developed a more efficient and unique method. As it operates according to this new method, the machine may produce an unconventional output; in the process, the AI has progressed along its learning curve and surpassed the knowledge of the professional. It would be a mistake to disregard this output merely because it is new and against usage or standard operating procedure. This is not an unlikely scenario, as humans have a strong sense of ego and tend to fight for what they have invested their effort in.
Oftentimes it is ignored that effort invested and results produced are not necessarily correlated in direct proportion. Also, given the gravity of the problem, the authorities may be disinclined to incur a risk. Rejecting the output of an AI in this way will result in something undesirable. There is a strong possibility that an ASI (Artificial Specific Intelligence) may become better than humans in its specified field, though not in all fields, because the AI may be considering more factors, and with greater precision, than a human can imagine. Even though the ASI is not more intelligent overall, the fact that it is genuinely proficient in a particular area must not be overlooked. This fallacy encompasses discrepancies of knowledge, man’s reluctance to accept new ideas, his distrust of machines, and his ego. Ultimately it results in the rejection of an efficient solution in favour of a preferred one. This tendency to stick to conventional means and safety is actually counterproductive in cases like this. The situation discussed above is likely to become prevalent, but not absolute or omnipresent. It is entirely possible that a machine may actually be wrong in certain cases and its human counterpart may produce better results. This can especially be the case where a strong sense of ethics, justice and wisdom is required. We can see AI becoming a reality; developing Artificial Wisdom, however, is still a gargantuan challenge.

Implementational Challenges

The induction of AI into the field of criminological studies and practice poses multiple challenges across different levels and areas. Moreover, the consequences that follow once AI has been implemented must be considered as well. An intelligent strategy that tackles all social, legal, and technical hurdles while ensuring smooth functioning after induction is a dire requirement. The major challenges and obstacles that come to light when AI is employed by the criminal justice system are discussed below, though the discussion is not limited to that context and also covers general challenges:

1. Data Usage
2. Institutionalization
3. Legal Framework
4. Social Acceptance
5. Ethical Issues

5.1 Data Usage

It is evident that the usual functioning of an AI requires a continuous flow of large amounts of data. The challenge here presents itself at multiple levels. Firstly, it needs to be determined, by exhaustive testing, whether the quantity of data available is sufficient. Unless enough data is supplied to the AI for analysis, it cannot understand the contemporary norms of the society, and in that case the machine will not be capable of detecting deviance. Secondly, there needs to be an efficient data collection strategy in place. Even if the data is available, collecting it promptly is imperative for the AI to operate; this is needed to ensure the flow of authentic data into the machine. Thirdly, the storage of huge amounts of data poses a major hurdle. Storage units that can hold great quantities of data are not only expensive but occupy considerable physical space as well. These devices must also be fast enough to keep up with the processing speed of the AI so that overall performance is not bottlenecked. The safety of the facilities that hold these units is a grave concern too. Fourthly, there is the most popular concern: data privacy. Not only must the secure storage of data be ensured, but obtaining the consent of persons to give out their data is the greatest challenge. Protecting data from crackers and hackers will be a tough task, as they too have access to advanced methods and technology. On the other hand, unethical use of data by the companies themselves is a strong concern these days. The concern of privacy is a strong and legitimate one; the threats range from misuse by companies and authorities to breach by crackers and hackers.



Furthermore, obtaining consent for data collection and use from the public becomes harder because of this concern.

5.2 Institutionalization

Putting AI into actual practice is not as simple as buying a computer and placing it on a table. The formation of a strong support network of technicians and developers is a prerequisite, given the prevalent unfamiliarity of the public with the use of technology. An institution that oversees the technical functioning and maintenance of the AI is needed, and such institutions may develop in many forms. For example, there may be a company working as a security guard for AI-based systems, whose job is to protect the data in case of a breach and to provide mechanisms to prevent such breaches. Another institution can act like a storehouse, tasked with the secure storage of data; it will be expected not to meddle with the data and to provide it to authorized bodies only. Similarly, a different company can provide a network of technical experts to solve problems for the bodies using AI and also to teach them how to use such systems. A body, governmental or not, may need to avail itself of the services of all or some of these companies to make good use of AI in its field of operation. Such is the level of institutionalization required to ensure the smooth functioning of a world pervaded by AI systems. Another challenge attached to instituting such exhaustive machinery is cost. The expense of buying exotic machines, cumbersome hardware, heavy software, storage units, and servers will be great. The operation of a widespread, interconnected system that functions incessantly will attract heavy electricity costs and installation charges, and maintenance and replacements will only add to all this.

5.3 Legal Framework

The great degree of institutionalization, and the threat, that come along with the implementation of AI demand a strong legal framework. Firstly, a clear set of exhaustive legislations is required. The laws need to be formulated after consulting technicians, criminologists, and other professionals.
To go about this tedious task, the government may choose to form a committee dedicated to supervising the formulation of laws and institutions concerning AI. Balanced government intervention may become a dire need to protect citizens from exploitation by private bodies. When criminologists use AI for studying and forecasting crime, great care needs to be exercised. A law must govern such use so that the private information of individuals is not spied upon and no person faces legal trouble without substantial evidence against him. The phrase commonly used in our law, “reason to believe”, will need more advanced definitions. It may even develop into a doctrine, as the reason to believe that a person is deviant and strongly inclined to commit a crime in the near future must be concrete. It is crucial that these reasons to believe are strong enough to warrant a suspension of the right to privacy. If this aspect of the law is clarified and updated to suit the technological era, it will aid the further delineation and reform of the IT laws so that they may cover AI, privacy, and ethics. The Supreme Court of India has already declared the right to privacy a fundamental right under articles 14, 19, and 21 of the Indian Constitution.92 Furthermore, the Justice Srikrishna Committee, constituted in 2017, recognized the need for a “legal framework relating to personal data” for the citizens of India to ensure growth in the digital landscape (Jindal). The Indian legal system is clear on its stance and understands the need for reform. If a constitutionally valid method can be worked out that allows controlled use of citizens’ data, it will be a huge leap towards the incorporation of AI into our society. The liabilities arising from an unlawful act emerging from the use or misuse of AI need to be clarified as well.

5.4 Social Acceptance

For many people, Artificial Intelligence still means talking robots. People fail to understand the true nature and scope of an AI-based system. It is seen as something completely new, and little do people know that AI is already connected to their lives via the Big Tech companies. Introducing AI into a society whose people have only recently become acquainted with the technology is a challenge. There is a natural sense of denial when an outcome usually given by a human is produced by a machine. When technology begins to eat up jobs, it is viewed as an encroachment, although this “encroachment” is overshadowed by the convenience it provides. With the incorporation of AI into the criminal justice system and elsewhere, the demand for jobs will undergo a qualitative change: as the AI occupies one particular job, it creates many others. For example, a FinTech AI may replace a banker but will create employment for an engineer.
Unless it is ensured that these machines are introduced as a helping hand and not as a substitute for employees, society will be reluctant to accept them. To facilitate the smooth introduction and operation of AI in the social sphere, awareness needs to be spread. The population should be taught about intelligent systems, their use, and the related laws. The presence of such machines in daily life needs to be normalized through various campaigns. The efficiency, functioning and legitimacy of the machines also need to be demonstrated before the public, so that people are not scared of unintended consequences or violations of their private lives.

5.5 Ethical Issues

A discussion about AI cannot be complete without talking about ethics. As the machines simulate human brains, perform human-like activities and, most importantly, possess intelligence, their status is a big concern. Should they be deemed juridical persons with rights, or ordinary machines? Questions like these need to be answered in order to make AI a prevalent reality. In the case of an incorrect output arising from a fallacy, who shall be held liable, given that no person actually committed a mistake knowingly? Intuitively, we can say that the operators and owners of the machines are to be held responsible. Situations like these put a burden upon the shoulders of those in power. There is also the question whether, if an AI is made a juridical person and commits a fault, it will be able to put forth arguments in its defence, and if so, whether those arguments are to be entertained. That is to say, will the principle of natural justice, audi alteram partem, be applied or not? Some doubts will be eradicated with the formation of laws and some through use by society. The questions in this area are endless, and with the advancements made in the research and development of AI, their gravity increases. As scientists try to understand consciousness and to create it in laboratories, more and more interesting questions arise. The greatest question, though, remains whether an AI will ever be given a status equivalent to a human.

92 Justice K. S. Puttaswamy (Retd.) and Anr. vs Union of India and Ors. (2017) 10 SCC 1: one of the arguments made by the appellant was that the government is planning to create a “surveillance state”, which is against the ideals of freedom and personal liberty.

Conclusions

Systems powered by Artificial Intelligence are being researched and developed at a great pace. Mankind is trying to create a pseudo-human entity to help itself. The concepts of intelligence, wisdom, consciousness, and learning are acquiring broader definitions. Science is striving to recreate, or at least simulate, nature to serve man according to his will. The time taken for the evolution of AI is much less than what humans took to develop the same level of intellect. This pace can be attributed to the tremendous amounts of data available, enormous processing power, and, most importantly, the guidance of human beings; the evolutionary process of human beings had none of these. Does this mean the evolution curve of AI progresses exponentially? Can human beings simulate a speedier process of evolution for a being that they created? Will its degree of evolution ever surpass that of humans? With time, this area of research is gaining more information, developing a better understanding, and producing greatly advanced technology. This progress is assisted by new laws being formed, ethical issues being addressed, and the population gaining awareness of the state of affairs concerning the growth of artificial intelligence and machine learning. The pace at which we are moving could lead almost anywhere. If AI develops self-awareness, what will it think of its evolution, of humans, and of itself? Will it consider itself a helper of man and feel a sense of gratitude towards its creator, or will it view mankind as a subpar species and think of itself as a messiah for reform?

References



1. Borum, Randy. 2015. “Assessing Risk for Terrorism Involvement.” Journal of Threat Assessment and Management 2(2). Washington, DC: Association of Threat Assessment Professionals. ISSN 2169-4842.
2. Cudd, Ann and Seena Eftekhari. 2018. “Contractarianism.” The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
3. Jindal, Poorvisha. “Right to Privacy in India: Its Sanctity in India.” iPleaders. [Online; cited February 5, 2021.] https://blog.ipleaders.in/know-theright-to-privacy-in-india-its-sanctity-in-india/amp/#References.
4. Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
5. Long, George. 1994. The Meditations by Marcus Aurelius. The Internet Classics Archive.
6. Marx, Gary T. 2015. “Technology and Social Control.” International Encyclopedia of the Social & Behavioral Sciences, 2nd ed., Vol. 24. Elsevier. ISBN 9780080970875.



{Policy Briefs}


13 Explainable AI in India: Policy Brief

Karan Ahluwalia & Nalin Malhotra
Contributing Researchers, Indian Society of Artificial Intelligence and Law
research@isail.in

Synopsis. This is a Policy Brief submitted for the Indian Strategy on AI and Law Programme.

Introduction

Artificial Intelligence refers to a dedicated field of computer science that attempts to emulate and recreate human intelligence in computer systems. One way to do so is to design code capable of learning through exposure to various sets of training data (categorised, annotated and filtered for noise), allowing the software to analyse the relationships among the constituents of those data sets and thereby to form generalisations about the rules that govern such relationships. Through continued exposure to other, similar data sets, the software tests the validity of generalisations derived from previous data sets against the one it is currently analysing; this allows it to modify, recreate or discard previous generalisations in order to come closer to the actual rules governing those data sets. This form of experiential learning allows the system to build upon its previous knowledge and become more adept at analysing data sets of a similar nature with each successive data set it works on. Theoretically, this is designed to emulate the way humans learn, i.e., by making mistakes and rectifying them.

There are many approaches to this process, two of which are Machine Learning and Deep Learning; the latter is of more significance in relation to the concept of explainable Artificial Intelligence. It functions on a principle similar to the human nervous system: small computational units called “nodes” or “neurons” are created in multiple layers, which combine to form a framework that processes information much as pixels combine on a screen to form an image.93 Each “layer” of nodes processes a very specific aspect of the information presented to it before passing the result to the next layer to pick up where it left off, and so on until the output layer is reached. This layered nature of Deep Learning allows it to learn from, derive from, and process larger quantities of data than was possible with traditional Machine Learning.

The problem with Deep Learning is that once the source code has been created, even the creators of the program find it difficult to explain exactly how the software “thinks, learns and works”. This leads to so-called “Black Boxes”: the user of the program knows only what information can be inputted and what can be expected as the outcome; how exactly the outcome was arrived at is not revealed to them.94 Even when such logical “Black Boxes” are not an issue, as with Machine Learning, developers are unlikely ever to make the source code of their Artificial Intelligence software public, so the same problem is encountered: we cannot answer the question “how did the machine reach this conclusion?” This understanding might not be of much significance for an Artificial Intelligence that makes food recommendations, but it certainly assumes importance when the decisions determine prison sentences,95 employee performance,96 etc.

“Explainability” of Artificial Intelligence has been defined as “the collection of features of the interpretable domain, that have contributed for a given example to produce a decision (e.g., classification or regression)”.97 The EU has recognised this issue and included a “right to explanation” in its General Data Protection Regulation (GDPR).98 In fields such as medicine, defence and the justice system, it is important not only to arrive at correct decisions, but also to arrive at them via understandable and acceptable reasoning.
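The layered flow of information described above can be sketched in a few lines of plain Python; the network shape and weights below are arbitrary illustrations, not a trained model. Each layer transforms its input and hands the result to the next, until the output layer is reached.

```python
def relu(v):
    # A common activation: keep positive signals, zero out the rest.
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    """One layer of 'neurons': each output is a weighted sum of all inputs."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Pass data through each layer in turn, as described in the text."""
    for weights, biases in layers[:-1]:
        x = relu(layer(x, weights, biases))
    weights, biases = layers[-1]
    return layer(x, weights, biases)  # output layer

# A toy 2 -> 3 -> 1 network with made-up weights.
net = [
    ([[0.5, -0.25], [0.25, 0.75], [-0.5, 0.5]], [0.0, 0.0, 0.0]),  # hidden layer
    ([[1.0, -1.0, 0.5]], [0.25]),                                  # output layer
]
print(forward([1.0, 2.0], net))  # -> [-1.25]
```

A real deep network differs only in scale (many layers, millions of weights learned from data), which is precisely why its reasoning becomes hard to inspect.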
This requirement has called for the creation of “White Box” Artificial Intelligence: software whose logic is understandable by domain experts, so that its reasoning for a particular decision can be deduced.99 In this way, we can analyse the generalisations and rules derived by the software and modify them if they are found to be inappropriate. A recent report has revealed how the software used by US courts to predict the likelihood of recidivism in criminals was severely biased against people from the African-American community: it was 77% more likely to assign them a higher chance of recidivism (and therefore a longer jail sentence) than people of Caucasian descent.100 This bias resulted from the biased training data used to train the software and is a real-life example of the well-known adage “garbage in, garbage out”. Had this software been a “White Box” Artificial Intelligence tool, experts could easily have checked its decision-making process and realised that it had imbibed the racial bias harboured by the judicial system; adjustments could then have been made to mitigate that bias in its decisions. However, since the software is of the “Black Box” kind, there is no real way to understand how the bias came to be, only that it is there.

To counter this problem, certain approaches have been developed. Layer-wise Relevance Propagation (LRP) is one of the simplest techniques, whereby one input to a “Black Box” system is varied while all other variables are kept constant, in order to determine which variable affects the outcome to the greatest extent. Once this has been isolated, the logic being used by the system can be deduced to a fair degree of accuracy.101 There have also been attempts to train Artificial Intelligence to explain its logic in linguistic terms that any person can interpret, thereby gaining an edge over LRP, which only a domain expert can interpret. Any Artificial Intelligence model that can explain itself will pave the way for the use of AI in many hitherto unreachable domains such as medicine, defence and education, and will create trust in the minds of consumers, increasing the ambit of such software and incentivizing developers to delve deeper into its possible applications.

93 https://royalsociety.org/~/media/policy/projects/machine-learning/publications/machinelearning-report.pdf
94 https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fairaccountable-and-transparent-is-crucial
95 https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-islearning-all-our-worst-impulses
96 https://www.courthousenews.com/houston-schools-must-face-teacher-evaluation-lawsuit/
97 https://www.sciencedirect.com/science/article/pii/S1051200417302385
98 https://www.bloomberg.com/news/articles/2018-12-12/artificial-intelligence-has-someexplaining-to-do
99 https://www.nature.com/articles/s42256-019-0048-x

Models for Explainable AI

Explainable AI presents a trade-off between accuracy and interpretability.
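The vary-one-input idea described above can be sketched in plain Python. The model below is a hypothetical stand-in for a black-box system; the sketch perturbs each feature in turn, holds the rest constant, and reports which one moves the output most. The same one-feature sweep, done over a range of values, underlies partial-dependence-style analysis.

```python
def black_box(features):
    # Hypothetical opaque model: we pretend we cannot read this code,
    # only call it. 'income' secretly dominates the score.
    return 3.0 * features["income"] - 0.5 * features["age"] + 0.1 * features["height"]

def sensitivity(model, baseline, delta=1.0):
    """Perturb one feature at a time, hold the rest constant,
    and record how much the output moves."""
    base_out = model(baseline)
    effects = {}
    for name in baseline:
        perturbed = dict(baseline)
        perturbed[name] += delta
        effects[name] = abs(model(perturbed) - base_out)
    return effects

def feature_sweep(model, baseline, name, values):
    """Sweep one feature over a range, ceteris paribus, recording outputs."""
    outs = []
    for v in values:
        point = dict(baseline)
        point[name] = v
        outs.append(model(point))
    return outs

baseline = {"income": 50.0, "age": 30.0, "height": 170.0}
effects = sensitivity(black_box, baseline)
print(max(effects, key=effects.get))  # -> income (the most influential variable)
```

From the per-feature effects alone, an analyst can deduce, to a fair degree of accuracy, roughly how the hidden model weighs its inputs, without ever reading its code.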
Though the accuracy of AI models increases as they become more and more complicated, they also become the aforementioned Black Box models. As a result, developers have formulated certain interpretability models, such as:

• LIME: an acronym for Local Interpretable Model-Agnostic Explanations. LIME explains why the model made a particular assessment of a given situation by perturbing different features, instead of providing an explanation of the entire model.102 This gives a human-readable explanation of the different features used in a model and better insight into it.
• ELI5: an acronym for “Explain Like I’m 5”, this is a Python package used to explain Machine Learning predictions, often used to debug algorithms such as CatBoost, Keras, etc.103
• SHAP: SHapley Additive exPlanations. Like the previous two, this is a model interpreter; it presents the information in a Shapley-value-based, additive format. It is both globally and locally interpretable and can be used for tree-based models as well.104
• SKATER: an open-source Python library used to explain black box models both globally and locally. It is a unified framework that allows explanation of all forms of models with real-world applicability.105

These models use various tools to explain the functioning of less complex AI, such as:106

Feature Importance: a particular feature of the model is isolated and then deleted, to see how important the feature is and whether its deletion increases the errors substantially. If it does increase the errors, it was an important feature in the model.

Partial Dependence Plots: these graphs showcase the effect of a change in one feature of the model with everything else remaining constant, i.e., ceteris paribus. They check the effect at a global rather than a local level.

Individual Conditional Expectation Plots: like partial dependence plots, these also check the effect a change in a feature would have in an isolated setting; however, this is done at a local rather than a global level.

100 https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
101 http://danshiebler.com/2017-04-16-deep-taylor-lrp/
102 https://towardsdatascience.com/explainable-ai-interpretability-of-machine-learning-models412840d58f40

Where is AI being used in India?

1.
Manufacturing: Manufacturing is one of the most ideal and ripe areas for the application of Artificial Intelligence, for a multitude of reasons which include but are not limited to:

• the repetitive nature of actions that can tire out or otherwise render humans complacent, which makes it easy to replace human workers with robotics;
• the requirement of a high degree of uniformity in the final products;
• the requirement of testing and quality control at various levels in the case of mass-produced articles;
• the requirement of skilled and semi-skilled labour, but without the added stresses of trade unions, etc.;
• the requirement that the assembly line be sufficiently flexible so that it can easily be upgraded and modernised to increase efficiency;
• the requirement of production with a small margin of error, etc.

As has been seen in recent decades, robotics and Artificial Intelligence have come into wide use for manufacturing and processing products such as automobiles, petroleum, steel and electronics. Explainable AI is essential in the manufacturing industry: a factory can lose up to 5-20 percent of its manufacturing capacity due to downtime.107 AI that is not shut inside a black box is thus preferable, as its decisions can later be analysed thanks to the openness provided by replacing the metaphorical black box with a glass box. One use of such AI could be predictive maintenance, where the AI predicts, from previous data, which machines will need to be replaced before they break down. As the cost of replacing these machines is very high, it becomes crucial for the AI to be accurate, which can be verified precisely because an explainable AI model is used. For AI to be applicable in the manufacturing sector, it must follow these principles: Explanation, Meaningful, Explanation Accuracy, and Knowledge Limits.108

1. Explanation: the AI would determine a variety of highly costly tasks, as in predictive replacement, where the cost of replacing heavy machinery is high. The results of the AI need to be verifiable, which necessitates asking questions such as: what kind of algorithm is being used, what is the model according to which the AI works, and are there any input parameters that should be considered?
2. Meaningful: the information derived from the use of the “glass box” must be meaningful for the end user. As there can be different end users, the information derived must hold meaning for such different users based on their knowledge and skills. Providing a user who is not a developer with intricate, highly technical information would be of no use, as they would not be able to derive any meaning from it.
3. Explanation Accuracy: the explanation given must be accurate, meaning that the explanation provided for the AI must be the same as that actually used to produce an output. A false or faulty explanation reduces consumer trust and the value of the AI.
4. Knowledge Limits: this keeps the AI-based technology from providing faulty or wrong information; the AI would only show outcomes within the scope set for the system.

103 https://www.analyticsvidhya.com/blog/2020/11/demystifying-model-interpretation-usingeli5/
104 https://towardsdatascience.com/explainable-ai-interpretability-of-machine-learning-models412840d58f40
105 https://towardsdatascience.com/explainable-artificial-intelligence-part-3-hands-on-machinelearning-model-interpretatione8ebe5afc608#:~:text=Skater%20is%20a%20unified%20framework,using%20a%20model%2Dagnostic%20approach.
106 https://www.analyticsvidhya.com/blog/2021/01/explain-how-your-model-works-usingexplainable-ai/
107 https://www.akira.ai/blog/ai-in-manufacturingindustry/#:~:text=Introducing%20Explainable%20AI&text=Explainable%20AI%20in%20manufacturing%20improves,monitoring%2C%20and%20supply%20chain%20optimization.
108 Id.
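The predictive-maintenance use described above can be sketched with a deliberately transparent ("glass box") rule; the machine names, sensor readings and service threshold are all hypothetical, and the point is only that the rule is simple enough for an engineer to verify directly, unlike a black-box model.

```python
# Glass-box predictive maintenance sketch: flag machines whose recent
# vibration readings average above a service threshold.
# All data and the threshold are hypothetical.

SERVICE_THRESHOLD = 7.0  # vibration level (arbitrary units)

machines = {
    "press-1": [5.1, 6.0, 6.9, 7.5, 7.8],   # rising trend, crosses threshold
    "lathe-2": [4.0, 4.1, 3.9, 4.2, 4.0],   # stable, healthy
}

def needs_service(readings, window=3):
    """Flag a machine if the average of its last `window` readings
    exceeds the threshold -- a rule anyone can inspect and challenge."""
    recent = readings[-window:]
    return sum(recent) / len(recent) > SERVICE_THRESHOLD

flagged = [m for m, r in machines.items() if needs_service(r)]
print(flagged)  # -> ['press-1']
```

A production system would replace this threshold rule with a learned model, but the explainability principles above demand that its decisions remain checkable in essentially this way.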



We can take the example of Tata Steel, which has been using “methods such as regression models in machine learning, predictive modelling, neural networks, gradient boosting, among others to generate insight from various data points”.109 On the manufacturing side of the spectrum, the company faced many challenges in areas such as material handling during transit, theft, prolonged exposure of finished products to the elements, high volumes of customer inquiries, etc. To counter this, it partnered with FarEye and developed what is essentially machine learning-based tracking and routing software that gives it real-time visibility of its in-plant and in-transit operations.110 Meanwhile, to increase their efficiency in the increasingly competitive market of raw steel production, Tata Steel and its competitor JSW Steel have both invested in “connected factories” powered by Artificial Intelligence-enabled tools that seek to minimize wastage of resources and significantly increase producers’ profit margins. In doing so, they hope to save about Rs. 300 crores over the next few years.111

2. Retail and F&B: this has been one of the first industries to apply Artificial Intelligence to its day-to-day operations, for the following reasons:

• there is a need to analyse shopping trends and the relative demand for different goods in order to stock articles efficiently;
• retail items are fast-moving goods and must be tracked in real time to ensure that the establishment is not left without stock;
• once customer preferences have been profiled, providing customers with customised shopping recommendations improves their overall shopping experience;
• in the food industry, order prediction by Artificial Intelligence enables restaurants to minimize the wastage of pre-prepared food.
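A minimal sketch of the order-prediction idea above (all sales figures and the buffer factor are hypothetical): estimate today's preparation quantity from the average of past sales on the same weekday, which is the essence of predicting demand from day-of-week sale records.

```python
from statistics import mean

# Hypothetical units sold on past occurrences of each weekday.
past_sales = {
    "Friday": [120, 135, 128],
    "Saturday": [200, 190, 210],
}

def predict_prep(day: str, buffer: float = 1.1) -> int:
    """Prepare the historical same-weekday average, plus a small buffer
    to absorb ordinary variation without large wastage."""
    return round(mean(past_sales[day]) * buffer)

print(predict_prep("Friday"))    # -> 140
print(predict_prep("Saturday"))  # -> 220
```

A real deployment would fold in many more signals (season, promotions, weather), but the trade-off it manages is the same: too low a prediction means waiting times, too high means food wastage.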
McDonald's has been known in the F&B industry to use Artificial Intelligence to predict what kinds of orders will be placed in a restaurant on any given day, based on past sale records for that particular day of the week, so that sufficient quantities of food can be prepared beforehand, thereby minimizing waiting times and food wastage112. Similarly, Reliance Retail has embraced Artificial Intelligence to enable stores to
109 https://analyticsindiamag.com/how-is-ai-being-used-in-the-steel-manufacturing-industry/
110 https://analyticsindiamag.com/how-tata-steel-uses-ai-a-casestudy/#:~:text=With%20FarEye's%20AI%2Dbased%20real,powered%20by%20machine%20learning%20algorithms.
111 https://www.moneycontrol.com/news/business/companies/ai-iot-and-machine-learning-itsdigital-speak-at-tata-steel-jsw-steel-4277971.html
112 https://cio.economictimes.indiatimes.com/news/business-analytics/how-tech-helpedmcdonalds-save-28-lakh-units-of-electricity/71948248



stock assortments of garments based on their previous sale records; customer preferences are analysed in order to provide bespoke shopping recommendations on its online store, thereby increasing the volume of sales through impulse buying113. Watasale, a Kochi-based start-up, has taken this concept a step further by launching India's first fully autonomous retail supermarket store, devoid of any cashiers or lines and completely managed by Artificial Intelligence as far as billing, order recommendations and security are concerned114.
3. Railways: this service has historically formed the backbone of the Indian economy and is the only reasonable means of transportation for most Indians. The Indian Railways have been plagued by two major problems over the years:
• Concerns over safety: the railways' technology and systems have been under the scanner for years due to the large number of train-related accidents that characterised much of train travel in the 2010s.
• Concerns over operational inefficiency: the railways, despite having their own budget until very recently, have always been perceived as a large drain on the exchequer, and there have been calls from across the spectrum to adopt methods and machines that counter the perceived inefficiency of the system.
Artificial Intelligence is being looked at and implemented in the railways in a myriad of ways: analytics in train operations, passenger ticket booking, maintenance of systems, freight operations, and railway assets115. The Railways have also used Artificial Intelligence to create a predictive maintenance system that monitors the condition of all rolling stock and raises alerts for maintenance, reducing the overall incidence of maintenance-related failures in equipment.
4.
Consulting and Auditing: auditing firms have adopted Artificial Intelligence in a big way for the following reasons:
• Many of their tasks involve scanning and perusing data-rich documents in which details can be missed due to human fallibility,
• Further, many such tasks are repetitive and cumbersome, and can therefore introduce error through complacency,
• The stakes involved in their transactions are very high, and there is therefore a need for scrupulous vetting that can only be done by a computer.
113 https://inc42.com/buzz/reliance-retails-fashion-vertical-looks-to-leverage-ai-to-expandfootprint/
114 https://analyticsindiamag.com/india-first-ai-supermarket-watasale-kochi/
115 https://www.deccanherald.com/national/indian-railways-to-use-artificial-intelligence-dataanalytics-to-improve-efficiency-926899.html



For these reasons, the Big 4 auditing firms have funded and developed software that can take over human tasks involved in auditing and perform them with greater accuracy and in less time than a human can, so that the human can be more meaningfully engaged in decision-making rather than clerical work. Deloitte has developed a software called LeasePoint that helps it advise clients on the best possible utilization of utilities present on their estates and properties116. PwC has developed an Artificial Intelligence called GL.ai which, owing to its natural-language processing abilities, is able to decipher complex agreements, contracts and minutes of meetings in order to provide crisp and well-reasoned insights to companies117. EY has developed a new audit technology named EY Helix GL Anomaly Detector (EY Helix GLAD) which helps detect data anomalies in larger datasets by way of machine learning; rather than completely replacing the need for auditors, this technology simply flags anomalous datasets that are later evaluated by the team of auditors118. KPMG has developed Ignite, a software that can perform a multitude of tasks including, but not limited to, call centre analytics, anomalous event prediction for business decisions and checking documents for compliance with relevant laws119.
5. Start-ups and service aggregators: in recent years, many start-ups have cropped up around the country that essentially aggregate all the local providers of a service and present them to a customer. Services can include transportation, food delivery, medical consultations, general delivery and chores, beauty services etc.
• These companies need to capture and analyse customer preferences so as to modify and recommend service providers effectively,
• They also need to track their service-provision partners so as to ensure the safety and well-being of their customers.
For these reasons, applications such as Ola, Uber, Swiggy, Zomato, Practo and Urban Company have adopted Artificial Intelligence-driven analytical tools not only to achieve the aforementioned goals, but also to derive a wide variety of information about customers that can be further used to improve their business models120.

116 https://www2.deloitte.com/us/en/pages/operations/solutions/about-our-real-estate-andfacilities-management-transformation-services.html
117 https://www.pwc.com/gx/en/about/stories-from-across-the-world/harnessing-the-powerof-ai-to-transform-the-detection-of-fraud-and-error.html
118 https://www.ey.com/en_in/better-begins-with-you/how-an-ai-application-can-help-auditorsdetect-fraud
119 https://www.icaew.com/technical/technology/artificial-intelligence/artificial-intelligencearticles/how-artificial-intelligence-will-impact-accounting
120 https://analyticsindiamag.com/report-state-of-artificial-intelligence-in-india-2020/
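The anomaly-flagging approach described for tools such as EY Helix GLAD can be illustrated with a generic statistical sketch: flag unusual entries for human review rather than deciding automatically. The z-score rule and the figures below are assumptions chosen for illustration, not the actual product's method.

```python
# Generic sketch of audit anomaly flagging: entries far from the mean
# (in standard deviations) are flagged for an auditor to review.
# The rule and data are illustrative assumptions only.
from statistics import mean, stdev

def flag_anomalies(amounts, z_threshold=2.0):
    """Return indices of entries whose z-score exceeds the threshold."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > z_threshold]

ledger = [100, 102, 98, 101, 99, 500]  # one suspicious journal entry
print(flag_anomalies(ledger))  # [5]
```

This mirrors the division of labour the text describes: the software surfaces candidates, and the final judgement stays with the human auditor.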



Legal Liability of AI Systems
The question of the legal liability of an AI system is one that is being considered more and more as AI becomes an increasing reality. Could these new technologies be held criminally liable, or does their liability extend only to civil law? Could the end user or the developer of the AI be held liable? Or is the AI directly liable, and if so, how would such liability be enforced? These are only a few of the questions that have been raised.

Criminal Liability of AI
In the discussion of criminal liability, it is first imperative to categorise the kinds of criminal situations that may usually be encountered. Gabriel Hallevy121 has categorised these as follows:
• Situations where the actus reus involves an action, and situations where the actus reus involves only a failure to act.
• Situations that require knowledge to form mens rea; situations that require only negligence, i.e., a reasonable person's knowledge, to form mens rea; and situations of strict liability that require no knowledge to form mens rea.
He propounds three models as possible solutions to the question of liability.
1. Perpetrator via Another: In this model the perpetrator of the offence is an innocent or mentally deficient person who is unable to form the mens rea to commit an offence. If the perpetrator commits an offence upon someone else's instruction, the person on whose instruction the offence was committed is deemed liable for it. For example, in the US cases of Conyers v. State and State v. Fuller, the owner of a dog was held responsible for the dog attacking someone where the owner had instructed the dog to do so.122 It was suggested that, by analogy, the developer of an AI software could be held liable under this model.
2. Natural Probable Consequence: In this model an accomplice to a single action or a series of actions is prosecuted if the natural and most probable consequences of those actions are criminal. Thus, the user of AI-based software could be prosecuted if it is proved that they were aware that the natural and most probable consequence of their actions was criminal. A distinction would be drawn between software able to form the knowledge that its actions would be criminal and software that cannot, where the latter category could not be held liable for crimes that necessitate the formation of mens rea.
121 Hallevy G.: The Criminal Liability of Artificial Intelligence Entities. http://ssrn.com/abstract=1564096 (15 February 2010).
122 Conyers v. State, 367 Md. 571, 790 A.2d 15 (2002); State v. Fuller, 346 S.C. 477, 552 S.E.2d 282 (2001).



3. Direct Liability: This model covers crimes that require the formation of both an actus reus and a mens rea, and would hold the AI system directly liable. Admittedly, establishing an actus reus is much easier than establishing a mens rea. Therefore, cases of strict liability that do not require the formation of mens rea could be held applicable to AI systems. There are also considerations of the defences available to an AI system, its users or its developers that may require further discussion, such as whether a malfunctioning program could claim a defence similar to the defence of insanity, or whether AI systems affected by viruses could claim a defence of intoxication.123

Civil Liability
When considering civil liability for an AI system, the most relevant tort, namely the tort of negligence, practically jumps out for use. The tort of negligence is most often alleged in cases of software defects, AI system malfunction, etc.124 Gerstner has broken this tort into three elements in order to apply it to AI:125
1. There existed a duty of care,
2. There was a breach of that duty of care,
3. The breach caused an injury.
It seems clear that an AI system would have a duty of care; the discussion arises over the extent of this duty, i.e., whether an expert-level system owes the duty of care of an expert or that of an ordinary professional. Could a breach of this duty arise from the fault of an AI system? As per Gerstner, yes; in fact, she gives numerous examples of possible breaches of this duty, such as the omission of a fault that could have been detected and corrected by a developer, inadequate warnings, etc. Whether a breach would cause an injury to the party also depends to an extent on the kind of action taken by the AI system; if the AI system had only made recommendations rather than undertaking an action, then finding a breach would be difficult. There is also the question of whether an AI system should be considered a product or a service. Just as electricity has been considered a product, Gerstner suggests the same for an AI system, as that would require developers to produce a product free from design, manufacturing

123 John Kingston, "Artificial Intelligence and Legal Liability".
124 Tuthill G.S.: Legal Liabilities and Expert Systems, AI Expert (Mar. 1991).
125 Gerstner M.E.: Comment, Liability Issues with Artificial Intelligence Software, 33 Santa Clara L. Rev. 239. http://digitalcommons.law.scu.edu/lawreview/vol33/iss1/7 (1993).



defects, etc.126 Contrary to Gerstner's belief, Cole127 considers it appropriate for AI systems to be treated as a service, and states that while AI systems may be seen as providing reasonable conclusions from larger datasets, there is also a need to follow the rule of absolute liability in Sections 520-524 of the Restatement of Torts128, as in cases of "ultrahazardous material".

Concluding Remarks
The foregoing discussion has shown the need for interpretability models that explain the "black box" AI systems increasingly utilised across various sectors in India. Often these complex AI models are being used even though there is no proper framework for ascertaining liability in cases involving them. The discussion above aimed to show how already accepted legal provisions, and those of the common law, could be utilised in determining the liability of AI systems as a third party.

126 John Kingston, "Artificial Intelligence and Legal Liability".
127 Cole G.S.: Tort Liability for Artificial Intelligence and Expert Systems, 10 Computer L.J. 127 (1990).
128 Restatement (Second) of Torts: Section 552: Information Negligently Supplied for the Guidance of Others. (1977).



14 European Union’s Legislative Proposal on AI Governance: Policy Review Aathira Pillai1 & Rahul Dhingra2 1Research 2Research

Intern, Indian Society of Artificial Intelligence and Law Intern, Indian Society of Artificial Intelligence and Law research@isail.in;

Synopsis. This is a Policy Brief submitted for the Strategic & Civilized AI initiative.

Introduction
Continuous technological innovation is not only producing increasingly sophisticated products, but is also making these software packages more accessible, enabling an increasing number of people and organisations to use and create these technologies. While the global democratisation of technology is a desirable development, the same cannot be said of all technological applications being produced. The proliferation of AI algorithms used by financial companies, corporations, governmental agencies, police, and other organisations is already having an impact on the community. These AI systems can and do make decisions that have profound consequences in people's lives. As a result, the use of such innovations should be governed, or at the very least supervised, to prevent the technology from being misused or abused for detrimental purposes. (Why governments need to regulate AI, 2019) Lawfully controlling AI can ensure that AI safety is incorporated into any future AI development project. This means that every new AI, irrespective of its sophistication or simplicity, will go through a design process that focuses from the outset on lowering non-compliance and failure risks. To assure AI safety, policymakers must include a few must-have principles in regulation, among them the non-weaponisation of AI technology and the culpability of AI proprietors, programmers, or producers for the conduct of their AI technologies.
Executive Summary
In line with this, the European Commission (the "Commission") announced its proposal for an Artificial Intelligence Regulation (the "AI Regulation") on April 21, 2021. The proposal is the outcome of the Commission's years of effort, which



included the release of a "White Paper on Artificial Intelligence." The AI Regulation intends to establish a robust regulatory regime for Artificial Intelligence ("AI") in the EU. The goal is to create a legislative structure that offers the legal certainty required to encourage AI research and investment while simultaneously protecting fundamental rights and ensuring that AI technologies are handled correctly. (European Union: The EU's New Regulation On Artificial Intelligence, 2021) The European Parliament passed numerous AI-related resolutions in October 2020, including ones on ethics, liability, and copyright. In 2021, measures on AI in criminal matters, as well as in education and culture, were adopted to set out a robust and clear legal framework. The proposal suggests a broad regulatory policy for Artificial Intelligence that is both equitable and measurable, and is confined to the minimum requirements needed to address the potential problems associated with AI without inordinately restricting or hampering scientific progress. It creates a set of harmonised criteria for the development, market placement, and use of AI technologies in the Union, with the goal of providing a single AI terminology that is prospectively objective. A board on artificial intelligence is to be constituted with the intention of implementing the proposed regulations under the governance of the member states. This policy brief attempts to present the key provisions of the proposed AI law, their implications for various stakeholders, barriers to AI development, and policy proposals and alternatives.

The European Approach to Reliable AI
The AI Regulation's primary provisions are the adoption of:
• Constraining principles for AI technologies: This provision would impose requirements on owners and consumers of high-risk AI technologies regarding data and its administration, documentation, transparency and information to consumers, human oversight, and robustness, accuracy, and security. Its central element, foreshadowed in last year's White Paper on Artificial Intelligence, is the necessity of an ex-ante compliance check to ensure that high-risk AI technologies fulfil the requirements before they can be put up for sale or made operational. A stipulation for a post-market surveillance system to identify and alleviate difficulties in use is another major advancement.
• Facial recognition row: It should come as no surprise that the first significant area of AI to be subjected to regulatory laws is facial recognition. This technology is extremely intrusive and has the potential



to significantly impact the lives of all citizens in a variety of ways. Throughout the previous decade, AI technologies connected to facial recognition and surveillance have been the source of numerous concerns. These technologies have a more discriminatory potential than others; a dedicated discussion on regulating them is therefore welcome.
• Intolerable risk policy: AI algorithms that pose an obvious danger to people's safety, prosperity, and interests will be prohibited. This encompasses AI systems or programmes that manipulate people's behaviour in order to circumvent consumers' free choice (e.g., toys with voice control promoting risky behaviour in kids) and tools that allow authorities to conduct "social scoring." This will be accomplished by keeping a list of specifically restricted AI systems, developed after categorising technologies as high risk, limited risk, and minimal risk. Any AI technology used in key infrastructure, academic or vocational training, safety components of products, work opportunities, worker management and self-employment, integral private and public services, law enforcement, migration, asylum and border protection management, the judicial system and democratic processes, and so on is considered high risk and subject to stringent requirements. In general, live use of such systems in publicly accessible areas for law enforcement objectives is prohibited. Similarly, AI technologies with explicit transparency obligations, where users should be aware that they are interacting with a machine so that they can make an informed choice to continue or pause, are considered limited-risk technologies. Other harmless AI-enabled games and applications are considered to be of minimal risk, and the proposed legislation does not intervene there.
(Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence, 2021)
• Approach to AI excellence: The Coordinated Plan has been comprehensively updated, and concrete collaborative measures are proposed to guarantee that all activities are aligned with the European Strategy on AI and the European Green Deal, while taking into consideration new problems introduced by the coronavirus outbreak. It presents a vision for accelerating investment in AI, which can aid the recovery. It also seeks to accelerate the implementation of national AI strategies, eliminate divergence, and solve global concerns. "The revised Coordinated Plan will make use of funds from the Digital Europe and Horizon Europe initiatives, as well as the Recovery and Resilience Facility, which has a target of 20% digital expenditure, and Cohesion Policy programmes." (Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence, 2021)
• Fines and penalties: Fines of up to EUR 30 million or up to 6 percent of annual revenue, whichever is greater, are envisaged in cases of non-compliance with the new regulations. The vast amounts indicate the seriousness of the Commission about achieving the goal of governing AI.
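The risk tiers and the penalty ceiling summarised above can be sketched in a few lines; the category lists below are abridged illustrations drawn from this summary, not a legal implementation of the Regulation.

```python
# Illustrative sketch of the proposal's four-tier risk logic and its
# penalty ceiling. Category lists are abridged examples only.

PROHIBITED = {"social_scoring", "manipulative_voice_toys"}
HIGH_RISK = {"critical_infrastructure", "law_enforcement", "employment",
             "migration_control", "education"}
LIMITED_RISK = {"chatbot", "deepfake"}  # transparency duties apply

def risk_tier(use_case):
    """Map an (illustrative) use-case label to a risk tier."""
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"

def max_fine(annual_revenue_eur):
    """Ceiling for the most serious breaches: EUR 30m or 6% of annual
    revenue, whichever is greater."""
    return max(30_000_000, 0.06 * annual_revenue_eur)

print(risk_tier("employment"))   # high
print(max_fine(1_000_000_000))   # 60000000.0 (6% exceeds EUR 30m)
```

The point of the sketch is the structure of the scheme: prohibition, stringent requirements, transparency duties, or no intervention, with the fine ceiling scaling to whichever bound bites harder.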

The AI Regulation's Scope of Enforcement
Application to Consumers and Organizations: The AI Regulation offers a comprehensive legislative scope that encompasses all elements of the creation, marketing, and use of AI systems. It will apply to providers who place AI systems on the market or put AI systems into service, whether they are based in a Member State or in a third country; to users of AI systems in the EU; and to providers and users of AI technologies based in a third country where the system's output is used in the Union. Article 114 of the Treaty on the Functioning of the European Union (TFEU) is the legal basis for the proposal; it allows for the adoption of measures to guarantee the establishment and functioning of the internal market, with a single digital market in view. The harmonised rules set up by the proposal support the proper functioning of the internal market by subjecting AI systems placed on the Union market to due scrutiny. To reduce the dangers to basic rights and safety presented by AI that are not covered by any current legal frameworks, high-risk AI systems must meet rigorous criteria for high-quality data, documentation and traceability, transparency, human oversight, accuracy, and robustness. A regulation under Article 288 TFEU would decrease legal fragmentation and promote the creation of a single market for lawful, safe, and reliable AI systems.
European Artificial Intelligence Board: In terms of administration, the Commission has proposed that the proposed laws be overseen by national competent regulatory and supervisory authorities, while the establishment of a European Artificial Intelligence Board will assist their execution and accelerate the development of AI norms.
The EAIB will help national supervisory authorities and the Commission work together effectively; it will coordinate and contribute to Commission guidelines, and assist national supervisory regulatory bodies and the Commission in ensuring uniform implementation of the law. Furthermore, voluntary codes of conduct for non-high-risk AI are advocated, as well as regulatory sandboxes to enable responsible development. The Board is expected to include members from all AI development stakeholders, and is designed to serve as a focal point for bringing AI developers and regulators together.
National competent bodies: Member States must appoint national competent bodies, as well as a national supervisory agency, to provide help and direction on the AI Regulation. Market monitoring of AI systems by Member State officials is also required. If an authority considers that an AI system poses a hazard to life, security, or basic rights, the body must evaluate the AI software and, if required, take corrective action. The bodies are further required to penalise developers in case of any violations. Violation of the AI rules is punishable by fines of EUR 10m to EUR 30m, or 2% to 6% of global annual revenue, whichever is greater; the amount of the fine levied depends on the type of violation. (European Union: The EU's New Regulation On Artificial Intelligence, 2021) The AI law will be implemented by regulatory agencies, with no complaint system or explicit enforceable rights for individuals. It is uncertain whether Member States will designate data protection supervisory authorities, standards development bodies, or other organisations to fulfil the function of "relevant authority." Notably, the AI law does not mimic the GDPR's "one-stop shop" mechanism, raising worries about consistency and collaboration throughout the 27 Member States.

Challenges & Omissions
The European Union's draft artificial intelligence (AI) regulation, unveiled on April 21, is a severe blow to Silicon Valley's widely held belief that governments would leave emerging technology alone. The proposal outlines a sophisticated legal framework that prohibits some AI applications, heavily regulates high-risk applications, and moderately governs less hazardous AI technologies. The draft legislation also has the potential to discourage new inventions. The following are some of the gaps and omissions in the draft regulation:
• Algorithmic bias remains largely unaddressed: The recitals supporting the rule are replete with allusions to well-documented concerns about the hazards of bias. However, the regulation's text is notably light on the requirement for performing and releasing disparate impact assessments; there is nothing concrete in the works to address the issue of algorithmic prejudice, although the legislation refers to differential impact analysis in various instances. The data governance rule allows AI providers to use information about sensitive attributes like colour, sexuality, and ethnicity to provide "bias monitoring, detection, and correction." According to the robustness rule, some automated technologies must guard against "possibly biased outputs." A system's documentation must incorporate "measures used to evaluate... prejudiced affects," as well as relevant information concerning "anticipated unintended consequences and sources of risk to... basic rights and discrimination." These references are largely ambiguous and do not expressly call for impact studies on ethnic minorities. Furthermore, the paperwork, which logically must include a bias analysis, is not required to be shared with users, the general public, or others who may be impacted by biased algorithms; it is only available upon request to authorities. In contrast, the Act specifically requires system reliability reviews and transparency. (Machines learn that Brussels writes the rules: The EU's new AI regulation, 2021)
• Unharmed Big Tech: Despite being the subject of considerable and growing concern about the usage of AI algorithms, and the focus of the majority of cutting-edge applied research in this field, Big Tech emerges almost unharmed under the new AI regulations. The technologies employed in social networking, search engines, online shopping, app stores, mobile applications, and operating systems are not considered high risk under the legislation. Some algorithms employed in ad tracking or recommendation engines may be banned as deceptive or exploitative practices; however, as previously said, this would require a review by a regulator. The biggest threat to average people's liberties is posed by major tech companies, and the new regulation's incapacity to deal with that threat adds to the anguish of the already beleaguered common man. The demand for AI regulation was mostly driven by uncontrolled AI behemoths such as Facebook and Google, which were functioning freely and frequently exceeding boundaries. If these firms are not brought within the purview of the proposed law, proponents of AI regulation should not consider it a victory; their demands are far from accepted as of now.
• Limited information disclosure: The regulation is scant on information that must be supplied to anyone who would be affected by AI technologies. The legislation demands that people be told when they "interact with" an AI system or when their emotions, sexuality, religion, ethnicity, or sexual preference are "identified" by an automated process. They must be informed when "deepfake" technologies deliberately fabricate or modify material. In certain circumstances, however, this is not the case.
People, for example, do not need to be informed when they are computationally sorted to assess eligibility for public services, finance, training, or employment. Measures that allow merely discretionary disclosure of information may allow opacity to creep back into the domain of AI. Furthermore, the huge tech firms can get around the regulation's weakly outlined criteria.
• Ineffective compliance evaluation: The compliance-evaluation obligation is far less preventive and revealing than it seems. The compliance evaluation is a procedure, not a document, and for the majority of high-risk AI system vendors it is an internal check-off. As a result, there is no independent review available for the public or the regulator to examine. Instead, the public receives a "mark" affixed to the AI system confirming compliance with the regulations. AI system suppliers must create a "declaration of conformity" and "keep it at the disposal of" regulators, although even this declaration of regulatory compliance might be kept under wraps. (Machines learn that Brussels writes the rules: The EU's new AI regulation, 2021)



Evaluation & Suggestions

1. The proposal seeks to have market surveillance authorities control the market and examine compliance with the obligations and regulations for all high-risk AI systems that have previously been placed on the market; ex-post enforcement holds those deploying AI systems accountable. Systems used for remote biometric identification, critical infrastructure safety, and other high-risk purposes are included on the list. With facial recognition systems producing spurious findings and being forbidden in the United States due to racial bias, voice biometrics, a distinct biological identification instrument subject to algorithmic verification, could be a viable complement to improve accuracy.
2. Although the EU's coordinated plan intends a risk-based approach distinguishing AI applications that pose unacceptable risks, assessing the compliance of trustworthy AI remains confusing, especially given the difficulty of implementing EU legislation. The European Artificial Intelligence Board and the European Data Protection Board, as newly constituted bodies with the power to issue directives and administer harmonised laws, with penalties imposed for failure to conduct due diligence, would become arbitrary if their entire power were vested without any governance and implemented according to the whims and fancies of the majority of member states.
3. The analysis of the proposal revealed the necessity for a varied set of norms and standards at the European level. Uncertainty over what number of significant occurrences or AI failures constitutes a serious incident or a breach of fundamental-rights obligations was found to be widespread among deployed AI systems. Though unified legislative action could boost internal market regulation, legal uncertainty remains, and Union involvement alone would not be adequate to ensure the effectiveness of the obligations while divergent national rules safeguard fundamental and human rights.
4.
With the EU's introduction of new harmonized rules for the establishment of a new regulatory framework; in order to institutionalize AI for the promotion of Digital Europe, investment and participation in R&D should be encouraged for accelerating the Commission's resources to develop a trustworthy AI. 5. In order to ensure that overriding public interests such as health, safety, consumer protection, and the preservation of other essential rights are met, this proposal places some restrictions on corporate freedom. These restrictions are fair and limited to the bare minimum required to avoid and mitigate severe safety challenges and potential infringement of basic human rights. To ensure that both corporate and individual interests are secured and safeguarded in the Union Market, evidence - based analysis based on factual findings must be investigated using standard operating procedures.




Conclusions

The European Commission's artificial intelligence regulation plan is perhaps the most recent addition to an ambitious digital legislative agenda announced progressively over the last two years by Brussels. Its Digital Services Act and Digital Markets Act targeted the behaviour of U.S. platform behemoths, which may explain why this AI legislative initiative appears to be aimed elsewhere. The Commission has also openly presented the AI law as a defence of European values against less ethical Chinese AI development. The European Union takes pride in creating legal frameworks whose influence extends beyond its borders, as evidenced by the GDPR. However, whether its AI legislation becomes the dominant worldwide set of norms in a rivalry with the United States and China is far from certain. The Commission's proposal reflects broad analysis and precise decisions on difficult policy issues. For that reason alone, it will be valuable on a global scale, even as it evolves alongside the technology it tries to master. The proposed law is now being considered and debated by the European Parliament and the Council. The Regulation will enter into force 20 days after its publication in the Official Journal and will take effect 24 months from that date, though some provisions may apply sooner. Despite its flaws, the AI Regulation has been widely hailed as the new "GDPR for AI," and it is largely seen as the Commission's thorough and courageous initiative to lead the charge in one of the most rapidly evolving fields of technology since the invention of the Internet. It will be interesting to see what happens in the coming days.

References

1. European Commission. 2021. Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence. Press release. s.l.: European Commission.
2. Corbet, Rob, Colin Rooney, Olivia Mullooly, Ian Duffy, Ciara Anderson, Caoimhe Stafford, Aoife Coll, Alison Peate, Siobhán O'Shea and Rachel Benson. 2021. European Union: The EU's New Regulation On Artificial Intelligence. s.l.: Mondaq.
3. MacCarthy, Mark and Kenneth Propp. 2021. Machines learn that Brussels writes the rules: The EU's new AI regulation. s.l.: Brookings.
4. Joshi, Naveen. 2019. Why governments need to regulate AI. s.l.: Allerin.

