INDIAN SOCIETY OF ARTIFICIAL INTELLIGENCE AND LAW

The Indian Learning
e-ISSN: 2582-5631 | Volume 1, Issue 2 (2021) | January 31, 2021
Abhivardhan, Editor-in-Chief
Kshitij Naik, Chief Managing Editor

Digital Edition: isail.in/learning


e-ISSN: 2582-5631
Volume: 1 | Issue: 2
Website: isail.in/learning
Publisher: Abhivardhan
Publisher Address: 8/12, Patrika Marg, Civil Lines, Allahabad - 211001
Editor-in-Chief: Abhivardhan
Chief Managing Editor: Kshitij Naik
Date of Publication: January 31, 2021

© Indian Society of Artificial Intelligence and Law, 2021. No part of this publication may be disseminated, reproduced or shared for commercial use. Works published are licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. For more information, please contact us at editorial@isail.in.



CONTENTS

ARTICLES
05  An AI Perspective on the Indian "Guardians of Galaxy"
07  Analysis of CJEU's decision on annulment of EU-US Data Privacy Shield
10  How "Mind-Reading" AI Systems Are Different than What We Have Seen in the AI Industry

SPECIALS
14  Position Statement on Turkish Drone Strikes in Iraq and the Position over Lethal Autonomous Weapons
16  New guidelines for a new era: AI in clinical trials
22  Decrypting Conscience in the Entitative Stronghold of AI

EXCLUSIVES
27  Understanding the Basics of Facial Recognition Tech: How Does it Affect Us?
33  Can AI Enable Equitable Lending?
36  Regulatory innovation to watch the watchmen: IFF and the launch of project Panoptic and more...



Editorial Board

Abhivardhan, Editor-in-Chief
Chairperson & Managing Trustee, Indian Society of Artificial Intelligence and Law
abhivardhan@isail.in

Kshitij Naik, Chief Managing Editor
Chief Strategy Advisor, Indian Society of Artificial Intelligence and Law
kshitij@isail.in

Aditi Sharma, Managing Editor
Deputy Strategy Advisor, Indian Society of Artificial Intelligence & Law
aditi.s@isail.in

Mridutpal Bhattacharya, Managing Editor
Junior Research Analyst, Indian Society of Artificial Intelligence & Law
mridutpal@isail.in

Associate Editors

Abhishek Jain, Senior Associate Editor
Chief Managing Editor, Indian Journal of Artificial Intelligence & Law
abhishek.jain@isail.in

Aryakumari Sailendraja, Senior Associate Editor
Chief Operating Officer, Indian Society of Artificial Intelligence and Law
abhinav@isail.in



An AI Perspective on the Indian "Guardians of Galaxy"

Aarya Pachisia, Jindal Global Law School, India

Recently, an article referred to Artificial Intelligence ('AI') as the 'Guardian of the Galaxy', shedding light on the utility and benefits of AI in space exploration. There is no doubt about AI and its social benefits in outer space, but before overwhelming ourselves with the advantages alone, it is imperative, and only rational, to dive deep into the attendant legal issues. The space industry is estimated to reach a valuation of USD 1 trillion by 2040, and the European Space Agency (ESA) estimates that every euro invested yields social benefits equivalent to six euros. The space industry is a demand-driven market with no central regulatory body, and the private sector accounts for 70% of space activity. In this context, it is necessary to discuss the treaties that govern space exploration. These treaties do very little to clear the ambiguity surrounding such activity, and the presence of AI only opens the floodgates to further uncertainty. It is impossible to have a conversation on space law without analysing the Outer Space Treaty and the Liability Convention: the former forms the basis of international space law, while the latter determines the international liability of the launching State[1]. It is important to note that the Liability Convention applies only to the launching State. The first section of this piece analyses the potential legal issues that may arise from the presence of AI in space exploration; the second deals with the lacunae in the governance of AI in space.



AI in Space: Legal Dilemmas

The impact of AI in space is not restricted to outer space; it extends to the population on Earth. The issues arising from AI's presence in space are twofold: (a) potential breaches of privacy and the ethical questions they raise; and (b) possible collisions with other space objects.

AI in Space and Privacy

The maintenance of privacy continues to be one of the most crucial concerns in almost every field, and outer space is no stranger to the issue. Satellites collect data from terrestrial, aerial as well as spatial objects and events. For instance, geospatial intelligence interprets events transpiring in real time from data collected by satellites. In January 2020, the United States imposed immediate interim export controls regulating the dissemination of AI software capable of automatically scanning aerial images to recognise anomalies or identify objects of interest, such as vehicles, houses, and other structures.[2] The GDPR governs the protection of personal data within the EU's jurisdiction, defining personal data as information relating to an identified or identifiable individual. AI in space can be used by public authorities as well as private entities to breach the provisions of that regime if mass surveillance is initiated using the advancing technologies of satellite imagery, VHR (Very High Resolution) imaging and processing-intensive image analysis[3]. Face recognition technology combined with GPS can also give rise to extreme privacy concerns.

Recently, the European Commission ('the Commission') issued a White Paper on AI, seeking stakeholder proposals on its regulation. Even so, the White Paper does not directly address the concerns raised by AI's presence in outer space. It is understood that the GDPR framework alone is not enough to regulate the functioning of AI, and a separate set of regulations for AI is therefore necessary. The GDPR, moreover, protects only data subjects within the European Union; different jurisdictions have their own domestic privacy legislation. The principles that remain consistent in the maintenance of privacy are: (i) purpose limitation, (ii) the storage principle, (iii) the accuracy principle, and (iv) the right to be forgotten. It is also imperative to uphold transparency and to verify compliance with privacy regulations. This can be difficult given AI's partial autonomy, black-box behaviour and unpredictability, which can make it impossible for enforcement agencies to ascertain whether compliance and transparency were upheld.



AI in Space and Other Space Objects


The two treaties relevant to our discussion are the Outer Space Treaty and the Liability Convention. The former is the building block of international space law; the Liability Convention finds its genesis in Article VII of the Outer Space Treaty and ascertains the international liability of a launching State on the basis of where the damage occurs. Outer space does not belong to any single state, but it is crowded with objects from different jurisdictions, and a state's presence in space is evidence of its sophisticated and advanced scientific stature. The Liability Convention is extremely restrictive in nature and does not hold a private entity liable for any damage caused in outer space. It determines the liability of a launching State in the following ways:

1. Article II imposes strict liability for damage a space object causes on the surface of the Earth or to aircraft in flight.
2. Article III imposes fault-based liability for damage a space object causes, elsewhere than on the surface of the Earth, to a space object of another launching State, according to the degree of fault.

It is now necessary to evaluate how liability is to be assessed in the case of intelligent space objects ('ISOs'). A space object is intelligent when it is capable of making autonomous decisions after being trained on data provided to its algorithm by its developer. Article I of the Liability Convention defines 'damage' as "loss of life, personal injury or other impairment of health; or loss of or damage to property of States or of persons, natural or juridical, or property of [the] international intergovernmental organization." This definition creates ambiguity with respect to ISOs, as it fails to clarify whether its scope includes the non-kinetic harm such intelligent objects can cause[4]. It also leaves open the scope of 'other impairment of health'. For instance, intelligent space objects are being developed to accompany astronauts on space exploration missions as companions.

If the ISO fails to maintain the mental health of the astronaut, or ends up causing severe mental-health setbacks, will that fall within the scope of the definition of damage? Fault-based liability, evaluated under Article III, is predicated on human fault[5]. In law, any entity that holds certain rights and obligations is considered a person. Such a 'person' may be an artificial entity (for instance, a limited liability corporation or a joint venture, or a State with international legal personality) and not necessarily a human being in the lay sense. It is also necessary to understand that the decisions taken by these entities are in reality taken by actual people; this is not so for intelligent objects capable of making autonomous decisions. The decision taken by a State is, in reality, the decision of certain individuals and is not devoid of human emotion and consciousness, so the decision of a legal person like the State is premised on a human being's rationale. Moreover, the developers of AI software often do not understand how an AI arrives at a particular conclusion. In the absence of human oversight over an ISO, fault should not be attributed sweepingly to the launching State, as this would cripple the development and advancement of AI in space. Instead, liability in the absence of oversight should be determined by asking the following question: what conduct is necessary to attribute fault liability to a State for damage caused by an ISO when human oversight is not involved in the occurrence causing the damage?[6]



Limitations of Space Treaties in Governing the Guardian of the Galaxy


There are no treaties specifically governing ISOs. This absence highlights the problems that will surface from the lack of applicable substantive law. For instance, if a State claims compensation under the Liability Convention, which State's substantive law applies in arriving at the compensation to be awarded? Questions about the standard of care, and about what constitutes fault with respect to an ISO, are left unanswered. There is little to no policy on the regulation or governance of AI, and ambiguity persists over whether negligence or product liability theory applies when claiming compensation. Both grounds demand the involvement of human conduct: negligence lies in the omission of human conduct where it was essential, whereas product liability concerns defects in software design or a failure to disclose the existence of a defect.[7] The Commission's White Paper suggested adopting fault-based liability rather than product liability, reasoning that under the latter it would be difficult to prove a causal link between the defect in the product and the damage that occurred. There is thus no consensus among jurisdictions on the applicable regime, which creates further uncertainty for compensation claims over damage caused by an ISO in outer space. New perspectives on compensation for damage caused by AI are emerging: for instance, giving autonomous machines the status of a legal person and substituting the reasonable-man standard with 'robotic common sense'[8], or treating the AI machine as the agent of its operator and holding the operator liable.[9]

AI in Outer Space: Indian Perspective

Recently, the Finance Minister of India announced the privatisation of the Indian space sector, which opens the floodgates to investment and the participation of private entities. Although this is viewed as a step in the right direction, it also calls for a robust legal framework to be enacted in India to regulate and legislate on issues arising from private involvement in outer space. The Space Activities Bill, 2017 has been criticised extensively for being too vague and ill-suited to govern IN-SPACe, the independent regulatory body to be established to oversee the commercialisation of the space sector. The Bill is vague on liability, and, even accounting for private participation in the space sector, its licensing scheme remains ambiguous. The presence of AI in outer space will only aggravate the issues posed by the draft Bill. ISRO acknowledges the risk of sending humans into space and recently announced the possibility of launching Vyommitra, a humanoid equipped with AI tools to lead space missions. This gives rise to potential legal issues that could plague the Indian as well as the international space sector if not efficiently addressed. International space law will rely on domestic substantive law to determine liability, and the complexity of the legal issues will only increase with the involvement of private entities and the presence of AI in space. It is therefore imperative that Indian space legislation be free of vagueness, with the potential legal issues addressed, in order to facilitate the presence of AI in space.



Conclusions


Space law and the AI regime can develop only with the advancement and crystallisation of domestic law on AI; otherwise, such legal conundrums will persist. The volatile nature of the legal issues around AI will also hinder any determination of the process for claiming compensation for damage caused by ISOs. We aim to open space for commercial purposes and hope to take vacations on the Moon, but to realise such dreams it is necessary, and only practical, to first establish a structure that can make them viable.

References

[1] 'Launching State' is defined under the Liability Convention as a State which launches or procures the launch of a space object, or a State from whose territory or facility a space object is launched. A non-governmental actor bears no liability under the Liability Convention, irrespective of culpability.
[2] 85 Fed. Reg. 459 (January 6, 2020).
[3] Cristiana Santos & Lucien Rapp, Satellite Imagery, Very High Resolution and Processing-Intensive Image Analysis: Potential Risks Under the GDPR, Air and Space Law, 44(3), 275-295, available at https://kluwerlawonline.com/journalarticle/Air+and+Space+Law/44.3/AILA2019018.
[4] George Anthony Long, Small Satellites and State Responsibility Associated with Space Traffic Situational Awareness at 3, 1st Annual Space Traffic Management Conference "Roadmap to Stars," Embry-Riddle Aeronautical University, Daytona Beach, November 6, 2014, available at https://commons.erau.edu/stm/2014/thursday/17/.
[5] George Anthony Long, Cristiana Santos, Lucien Rapp, Réka Markovich & Leendert van der Torre, Artificial Intelligence in Space, Legal Parallax, LLC, USA & University of Luxembourg.
[6] Id.
[7] Id.
[8] Iria Giuffrida, Liability for AI Decision-Making: Some Legal and Ethical Considerations, 88 Fordham L. Rev. 439 (2019).
[9] Long, supra note 5, at 22.



Analysis of CJEU's decision on annulment of EU-US Data Privacy Shield

Ankita Bhailot, Former Research Contributor, ISAIL

In this intensely digitalised and data-rich world, data streaming across the globe is part of business communications and of commercial as well as social interactions. In March 2012, the EU and the US issued a joint statement on data protection affirming that both were 'dedicated to the operation of the Safe Harbor Framework - as well as to our continued co-operation with the Commission to address issues as they arise - as a means to allow companies to transfer data from the European Union to the United States, and as a tool to promote transatlantic trade and economic growth'[1]. However, a little more than a year later, in July 2013, following the disclosures of NSA whistleblower Edward Snowden, the then Vice-President of the Commission stated at the Justice Council that 'the Safe Harbor agreement may not be so safe after all. It could be a loophole for data transfers because it allows data transfers from the EU to the US companies - although the US data protection standards are lower than our European ones'[2]. The system hence did not resolve the fundamental conflict between surveillance and data protection.




The case under analysis is Schrems II (Schrems I[3], Maximillian Schrems v Data Protection Commissioner, being the case in which the CJEU shut down the Safe Harbor system that had channelled data flows to the US). It concerns the overreaching surveillance regime of the US, and it invalidated the 'Privacy Shield' data-sharing arrangement between the EU and the US. It also sheds light on Facebook's large-scale processing of EU citizens' data and the mass transfer of that data from the EU to the US. The US's prioritisation of digital surveillance is permitted under FISA[4] (the Foreign Intelligence Surveillance Act, 1978) and executive orders. This collides directly with European fundamental rights, which confer on residents the rights to privacy and data protection under the EU Charter of Fundamental Rights; experts have consistently criticised the arrangement for breaching those most rudimentary rights. The CJEU's judgment invalidated the adequacy of protection provided by the EU-US Privacy Shield; the Commission's decision on SCCs (Standard Contractual Clauses) for the transfer of personal data to processors in third countries, however, was held valid, underlining that the data protection rights of European residents are fundamental in nature. Further, the CJEU proposed that the SCCs be modernised with reference to the GDPR[5], although no firm timeline for such a process has been created under EU data protection law. The Commissioner, during the press briefing, stated that legal instruments for transatlantic data transfer continue to exist and vowed to work closely with US partners to chart a course towards replacing the Privacy Shield. Likewise, making equivalent changes where required to the GDPR, and addressing FISA and US law on data security, would not only make the law stand firm with respect to other nations but also assure transparency in transfers of data from EU states to such third countries; this would in turn defend the interests of residents as well as benefit companies. The US Department of Commerce acknowledged the decision and the fact that it has negative impacts on the transatlantic relationship between the EU and the US. Practice will thus require steady and continuous data transfer with solid protections for third countries and organisations, including the current member organisations following the Privacy Shield framework.

Regarding the level of protection required for such transfers, the Court held that the requirements laid down for such purposes by the GDPR concerning appropriate safeguards, enforceable rights and effective legal remedies must be interpreted as meaning that data subjects whose personal data are transferred to a third country pursuant to standard data protection clauses must be afforded a level of protection essentially equivalent to that guaranteed within the EU by the GDPR, read in the light of the Charter. In those circumstances, the Court specified that the assessment of that level of protection must take into consideration both the contractual clauses agreed between the data exporter established in the EU and the recipient of the transfer established in the third country concerned and, as regards any access by the public authorities of that third country to the data transferred, the relevant aspects of that third country's legal system.[6]




Responses to the CJEU's decision vary, with the US finding itself drawn into Schrems' Facebook-SCC challenge, in which the complainant contended that 'the mechanism breaches fundamental EU rights and does not give residents sufficient privacy protection'. The concern is that the CJEU ruled on the Privacy Shield and not on the SCCs, which lack any mechanism of self-assessment of the protection offered by a third country. For the US, SCCs are a device that can secure data protection rights for EU residents only where legitimate conditions exist. The judges on this issue proposed that data controllers must carry out an appraisal of data protection in the country to which the data are being taken; the controller of such a transfer has a legal obligation to follow up on complaints and to suspend the transfer if it does not comply with EU law. Above all, supplementary safeguarding measures must be adopted by the individual data exporter established in the EU, in line with the standard data protection clauses adopted by the Commission.

Regarding the supervisory authorities' obligations in connection with such a transfer, the Court holds that, unless there is a valid Commission adequacy decision, those competent supervisory authorities are required to suspend or prohibit a transfer of personal data to a third country where they take the view, in the light of all the circumstances of that transfer, that the standard data protection clauses are not or cannot be complied with in that country and that the protection of the data transferred that is required by EU law cannot be ensured by other means, where the data exporter established in the EU has not itself suspended or put an end to such a transfer.[7]

The Privacy Shield has been invalidated on the ground that it failed to protect EU residents; yet the CJEU's decision remains unclear as to what options exist for organisations such as Facebook that fall within the ambit of US surveillance laws. Facebook, for the matter at hand, is still using SCCs to move EU residents' data to the US; this practice exposes other organisations to legal risk, particularly those set up in the US, where they may be subjected to data surveillance under US law. In sum, the decision does not affect data transfers to the US alone: other jurisdictions with strong surveillance regimes, such as the UK, India or China, will likewise face scrutiny, and the decision will push data protection regulators to rethink worldwide transfers to states with strong surveillance policies. The UK, for its part, has already undergone numerous evaluations during its Brexit transition period, has been assessed by the European courts, and has made the necessary amendments wherever required. Companies falling under FISA stand disappointed by the CJEU's decision: it not only ends the use of the Privacy Shield, but where a data flow falls under US surveillance law, the SCCs cease to be available as well. Whether under the Privacy Shield or SCCs, wholesale transfer of data subject to US surveillance remains impermissible.



References


[1] EU-US joint statement release, https://ec.europa.eu/commission/presscorner/detail/en/MEMO_12_192
[2] Informal Justice Council in Vilnius, https://ec.europa.eu/commission/presscorner/detail/en/MEMO_13_710
[3] Judgment of the Court in Maximillian Schrems v Data Protection Commissioner, 06/10/2015, http://curia.europa.eu/juris/liste.jsf?num=C-362/14
[4] Decoding Section 702, FISA, https://www.eff.org/702-spying
[5] The privacy and security law of the European Union, i.e. the GDPR, https://gdpr.eu/what-is-gdpr/
[6] Full ruling of the CJEU, http://curia.europa.eu/juris/documents.jsf?num=C-311/18
[7] Full ruling of the Court, http://curia.europa.eu/juris/documents.jsf?num=C-311/18



Position Statement on Turkish Drone Strikes in Iraq and the Position over Lethal Autonomous Weapons

Sarmad Ahmad, Former Senior Editor, ISAIL




Earlier this week, on the 11th of August 2020, the Iraqi Army stated that two senior commanders of the Iraqi border guard were killed in a Turkish drone strike in northern Iraq. Northern Iraq has been subjected to various Turkish raids on the positions of PKK fighters since mid-June. These ongoing events are part of a decades-long conflict between the Kurdish PKK and Turkey, which even led to the end of a two-year ceasefire after peace talks collapsed in 2015. Following these events, three Kurdish fighters were killed, as per an update on the 14th of August 2020. The individuals, along with a fourth who fled the scene, were targets of another Turkish drone strike as they stopped their vehicle outside a grocery store in the proximity of the previous attack.

While the reasons for cross-border attacks are specific to the geopolitical situation in question, the use of drone strikes by Turkish forces is just one of many international instances in which AI proves its utility in military applications. Autonomous weapons systems and their potential applications in international conflict are among the many incentives for states to fund AI research and development. While there exists an ethical divide on the topic, the increasing use of contemporary technology in warfare is observable across various other states, from the Indian military revisiting its strategies to include contemporary technologies in its teachings, to the Israeli Defence Forces "gamifying" their new tanks by using game algorithms and gaming console controllers to increase operational efficiency.

Various international organisations, like the "Stop Killer Robots" campaign founded by Human Rights Watch, publicly warned years ago of the potential consequences of the increasing use of autonomous weapons in conflict, be it of an international or non-international character. With increasing consensus that the regulation of autonomous weapons ought to be introduced into codified law, the position of International Humanitarian Law has to be observed and assessed in this context. International law scholars remain divided. One side emphasises that autonomous weapons already come under the ambit of Article 36 of the First Additional Protocol to the Geneva Conventions, arguing that a state's obligation to conduct a legal review is flexible enough to cover them. The other maintains that smart weapons were not contemplated when the provision was codified and hence require codification of their own, by means of a new convention, as was the case for many new weapons over the years, such as the Biological Weapons Convention and the Chemical Weapons Convention. Social consensus, discussion, and research and development nonetheless heighten the urgency of this issue, which is arguably one of the foremost ethical questions in the realm of AI research for state use.



New guidelines for a new era: AI in clinical trials

Pankhuri Bhatnagar, Amity Law School - Delhi, India

Introduction

If you are among the sceptics who wonder why Artificial Intelligence is even needed in healthcare, consider this: you work in a pharmaceutical organisation. Your team researched and dedicated 15 years to developing a new drug or medical procedure, spent billions on it, and monitored patients' progress and recoveries, only to ultimately conclude that the anticipated solution does not work. While this may sound like an exaggeration, it is the crippling reality of most pharmaceutical organisations and other research centres, which face a 95% failure rate. On the other side of the coin are patients whose lives are at stake and who may not have 15 years to wait for a life-saving drug that may or may not work. Under the current methodology, for every 100 drugs that reach first-stage clinical trials, only one ultimately provides the intended results.[1] The clinical research sector is becoming more complex and competitive, with stricter regulatory standards and an enhanced focus on patient safety. Against this backdrop, the industry needs disruption now more than ever, and this is where AI comes into play. By reducing the duration of trials, limiting costs and improving the quality of data, AI offers an innovative way to eliminate trial inefficiencies and help pharma organisations bring new drugs and therapies to the marketplace at a quicker pace. However, AI-based trials must be subjected to rigorous and prospective evaluation to analyse their impact on patients' health. Unregulated AI interventions may result in more harm than good, which is why two new guidelines have been framed in this regard, namely CONSORT-AI and SPIRIT-AI.

Process of drafting these guidelines

The initiative was announced in October 2019, and both guidelines were registered as under development in the EQUATOR library of reporting guidelines.



They were framed by a consensus procedure and developed according to EQUATOR's methodological framework; the initiative first had to be approved by the ethical review committee at the University of Birmingham, UK. The process consisted of consulting experts and performing a literature review to arrive at 29 candidate items, which were analysed by an international group of 103 stakeholders through a two-stage Delphi survey. The items were then agreed upon at a two-day consensus meeting and refined through a checklist pilot (consisting of 31 and 34 stakeholders respectively). Participants were instructed to vote on each item using a 9-point scale: 1-3, not important; 4-6, important but not critical; and 7-9, important and critical.[2] Of the 41 items discussed, 29 were finalised, of which 14 have been included in CONSORT-AI and 15 in SPIRIT-AI.
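To make the consensus mechanism concrete, here is a minimal sketch of how such Delphi votes might be tallied against the 80% 'critical' threshold applied at the consensus meeting (see the item lists below). This is an illustration only: the vote data, function names and exact tallying rule are assumptions, not code from the initiative.

```python
# Hypothetical tally of 9-point Delphi votes against an 80% threshold.
# Votes and the tallying rule are invented for illustration.

def classify(vote: int) -> str:
    """Map a 9-point vote onto the three bands used in the survey."""
    if 1 <= vote <= 3:
        return "not important"
    if 4 <= vote <= 6:
        return "important but not critical"
    if 7 <= vote <= 9:
        return "important and critical"
    raise ValueError(f"vote out of range: {vote}")

def passes_threshold(votes: list[int], threshold: float = 0.80) -> bool:
    """Retain an item if at least `threshold` of voters rate it 7-9."""
    critical = sum(1 for v in votes if classify(v) == "important and critical")
    return critical / len(votes) >= threshold

# Example: 9 of 10 hypothetical voters rate the item 'critical' -> retained.
print(passes_threshold([9, 8, 7, 9, 7, 8, 9, 7, 8, 5]))  # True (90% >= 80%)
```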

CONSORT-AI

CONSORT-AI refers to the Consolidated Standards of Reporting Trials - Artificial Intelligence guideline and is an extension of the guidelines previously in place. CONSORT 2010 provides the minimum required standards for reporting randomised trials; CONSORT-AI extends it, applying only to clinical trials that involve an AI intervention. Fourteen new items, deemed crucial enough to be regularly reported in addition to the core items, have been added.

Purpose - to promote transparency, explainability and comprehensiveness in the reporting of such trials, and to help editors, readers and members of the scientific community understand and analyse the quality of the trial design and the associated risks, biases, etc. which may arise in the reported outcome. Of the 14 items added, 11 are extensions and 3 are elaborations. The 14 items[3] which passed the 80% threshold at the consensus meeting for inclusion in the statement are:

1. 1a,b elaboration (i) - The title of the report must indicate the use of AI or machine learning and specify the model used.
2. 1a,b elaboration (ii) - Describe in the title/abstract the intended use of the AI intervention in the clinical trial. Some interventions may have numerous objectives, or the objective may evolve over time; specifying this enables readers to understand the purpose of the intervention at that point in time.
3. 2a extension - Explain the intended use in the context of its purpose and role in the clinical pathway, along with the intended users: patients, the general public or healthcare experts.
4. 4a (i) elaboration - Specify the inclusion and exclusion criteria for participants in the trial. These could be determined by factors such as pre-existing health conditions or the probability of success and survival.
5. 4a (ii) elaboration - Specify the inclusion and exclusion criteria at the level of the input data. For example, where the input data take the form of pictures, the eligibility criteria could include image resolution, picture format, quality metrics, etc.
6. 4b extension - Describe how the AI system was incorporated into the trial, including any site requirements. The functioning of AI systems generally depends on their environment: the hardware and software may have to be modified, or the algorithm changed, according to the study site, and this process must be specified under the new guidelines.
7. 5 (i) extension - Identify the version of the AI algorithm used. AI software often undergoes a number of updates during its lifespan, which is why it is important to clarify the version used in the trial; this also allows independent researchers to verify the study. Where possible, the report should also state the differences between the AI versions and the reasons for the change.
8. 5 (ii) extension - State how the input data were chosen and acquired. The quality of any AI system depends on the quality of the raw input data fed to it. The selection, acquisition, management and processing of the data must be described; this helps ensure the transparency and comprehensiveness of the trial so that it can be replicated in the real world.
9. 5 (iii) extension - State how low-quality or unavailable data were handled. The trial report should describe the amount of missing input data, how data that did not meet the eligibility criteria were handled, and the impact of this on the trial or on patient care.
10. 5 (iv) extension - Clarify whether there was any human-AI interaction in handling the data, and the expertise required of the users. It is important to define the training given to users for handling the input data and the process, for example an endoscopist choosing a set of colonoscopy videos as data for the AI system to detect polyps. Failing to clarify this can lead to ethical issues; for example, it may be unclear whether an error occurred because the human did not follow the procedure instructions or because the software made a mistake.
11. 5 (v) extension - Clarify the nature of the AI output. The usability of the system depends on the nature of the output, which can take the form of a diagnosis, a prediction, a probability, a suggested course of action, or an alarm alerting users to a particular event.
12. 5 (vi) extension - Describe how the output contributed to decision-making or to other aspects of clinical practice. It is crucial to the trial to determine how the output data were utilised, the training required to understand them, and how they were used to make decisions for the patient.
13. 19 extension - Explain how errors were identified and share the results of an analysis of those errors. Even minor faults in AI functioning can have catastrophic consequences when implemented at a larger scale, so operational errors must be observed and reported, along with ways to mitigate the risks. Where such an error analysis is not undertaken, reasons must be given for not performing it.
14. 25 extension - State whether the AI intervention or its code can be made available to and accessed by interested parties, and specify the licensing and other relevant restrictions on its use.


SPIRIT-AI

SPIRIT-AI stands for Standard Protocol Items: Recommendations for Interventional Trials - Artificial Intelligence. It was framed in parallel with CONSORT-AI, its companion statement. The procedure for finalising the items for both guidelines was essentially the same, after which the items were allocated between the two statements.

Purpose - SPIRIT-AI, like its counterpart, aims to promote transparency and completeness, and to help readers understand and appraise the clinical trial and the risks that may be associated with it.

Recommendations added to SPIRIT

There are 15 new items (12 extensions and 3 elaborations) which have been added to the core SPIRIT 2013 items and are now required to be regularly reported:[4]


1. 1 (i) elaboration - The title/abstract must indicate the use of AI or machine learning and specify the model used. The title should be simple and capable of being understood by a broad audience; specific terminology about the AI type should appear only in the abstract. Same as 1a,b (i) of CONSORT-AI.
2. 1 (ii) elaboration - The purpose of the intervention and the context of the illness must be set out in the title or abstract. Same as 1a,b (ii) of CONSORT-AI.
3. 6a (i) extension - Provide a description of the role of the intervention in the clinical pathway, its aims and uses, and the intended users for whom it is designed. Same as 2a of CONSORT-AI.
4. 6a (ii) extension - Disclose any pre-existing evidence (published or unpublished) regarding the validation of the AI intervention, considering whether that evidence concerned uses and a target population similar to those of the clinical trial.
5. 9 extension - Specify the site requirements for integrating the AI system into the trial: vendor-specific models, special computing hardware, fine-tuning required, etc. Same as 4b of CONSORT-AI.
6. 10 (i) elaboration - Same as 4a (i) of CONSORT-AI.
7. 10 (ii) extension - Same as 4a (ii) of CONSORT-AI.
8. 11a (i) extension - Same as 5 (i) of CONSORT-AI.
9. 11a (ii) extension - Same as 5 (ii) of CONSORT-AI.
10. 11a (iii) extension - Same as 5 (iii) of CONSORT-AI.
11. 11a (iv) extension - Same as 5 (iv) of CONSORT-AI.
12. 11a (v) extension - Same as 5 (v) of CONSORT-AI.
13. 11a (vi) extension - Same as 5 (vi) of CONSORT-AI.
14. 22 extension - Same as 19 of CONSORT-AI.
15. 29 extension - Same as 25 of CONSORT-AI.

Thus, from this list it is clear that the guidelines in SPIRIT-AI and CONSORT-AI are common, apart from one additional provision in SPIRIT-AI, namely guideline 6a (ii), which requires the authors of the trial to substantiate published evidence with references, or to give proof of unpublished evidence, regarding validation (or the lack of it) of the AI intervention.


Analysis of the Guidelines

These recommendations provide international, consensus-based guidance on the information that must be disclosed in clinical trials involving an AI intervention. They do not prescribe the methodology or approach to such trials; rather, their purpose is to bring clarity and transparency to the reporting process. This enables researchers and readers to interpret the methods and results of the trial and encourages peer review. Certain details are crucial for independent verification and yet are often excluded from reports, such as the AI version, the input and output data, and the training of handlers, which is why provisions were made to include them. It must be noted that these guidelines lay down minimum reporting standards; additional AI-specific factors to consider when preparing trial reports may be found in the Supplementary Table[5] to the recommendations.





Benefits of the Issued Guidelines

- Will boost the transparency, robustness and completeness of trials
- Encourages peer review
- Allows readers to understand and appraise the trials
- Recognises the strong drivers in the field
- May serve as useful guidance for the programmers of AI systems and those who develop AI-based trials
- Expected to improve the quality of such trials
- Encourages early planning for AI intervention in clinical trials
- Will enable patients and healthcare professionals to have confidence in the safety of an AI-based medical technique

Challenges and limitations of the guidelines and use of AI in clinical trials

1. Safety of the AI systems - A major concern of the Delphi survey group was the safety of using AI systems. Machines can make errors that cannot easily be detected or explained by the human mind but that can grossly affect the output, and hence the health of patients. This is why researchers are encouraged to perform error analysis and report the results.
2. Continuously evolving nature of AI systems - 'Machine learning' or 'continuously adapting' systems train themselves on new data, so their performance may be drastically altered over time; it is important to monitor and identify these changes (a minimal monitoring sketch follows this list). Since the technology is at the stage of inception right now, this concern was reserved for future discussions.
3. A major limitation of the study is that it was based on the current state of AI in healthcare, which is not yet well developed, with only seven published trials involving an AI intervention.
4. It must be remembered that, at the end of the day, an AI is just a machine and not a human. It is dependent on the data on which it was built and may not always have the right answers; it does not feel emotion, have empathy, or consider the ethical consequences of a decision.[6] Researchers should ideally make use of these technologies but also apply their own reasoning to critically analyse the output and take the final decision.
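As a minimal illustration of the monitoring point in item 2, the sketch below compares a continuously adapting system's rolling accuracy against the accuracy recorded at baseline and flags excessive drift. The class, the baseline value and the tolerance are hypothetical; nothing here comes from the guidelines themselves.

```python
# Hypothetical performance-drift monitor for a continuously adapting AI
# system. Baseline accuracy, tolerance and window size are invented.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window: int = 100):
        self.baseline = baseline_accuracy      # accuracy at trial outset
        self.tolerance = tolerance             # allowed absolute drop
        self.outcomes = deque(maxlen=window)   # rolling 0/1 outcomes

    def record(self, prediction_correct: bool) -> None:
        self.outcomes.append(1 if prediction_correct else 0)

    def drifted(self) -> bool:
        """True when rolling accuracy falls more than `tolerance` below baseline."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough observations yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return self.baseline - rolling > self.tolerance

# Usage: re-validate the system when drift is detected.
monitor = DriftMonitor(baseline_accuracy=0.92)
for correct in [True] * 60 + [False] * 40:   # simulated outcome stream
    monitor.record(correct)
if monitor.drifted():
    print("Rolling accuracy has drifted; re-validate before continuing.")
```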

Conclusions

Our society is now moving from using AI and ML as mere buzzwords to actually implementing these technologies in real time. The most successful applications of AI in clinical trials have been in identifying participants, classifying images and drug discovery. All this has also helped reach patients faster, reduce R&D time and make efficient, evidence-based decisions. Unlike humans, machines can process large volumes of data consistently and with minimal error, which makes them attractive for quick decision-making, cost-cutting and, ultimately, saving patients' lives. With the increasing complexity of clinical trials, the sophistication of technology and the rise in population and medical cases, the use of AI in healthcare is inevitable if the growing volume of data and cases is to be kept up with.





However, Artificial Intelligence is a rapidly evolving field. While most practical applications of AI technology currently focus on detection, triage and diagnosis, wider applications may emerge in the future. We may also witness advances in machine learning, better algorithms and computational techniques capable of disrupting healthcare to an even greater degree. These advancements will bring new challenges for the reporting and design of trials, and both guidelines would need to be updated accordingly. To minimise risks and biases and maximise the trustworthiness and transparency of results, the SPIRIT-AI and CONSORT-AI groups will have to monitor the need for updates carefully.

References

[1] Veerabhadra Sanekal Nayak et al., Artificial intelligence in clinical research, 3 International Journal of Clinical Trials 187 (2016).
[2] Samantha Cruz Rivera et al., Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension, 26 Nature Medicine 1351-1363 (2020).
[3] clinical-trials.ai | CONSORT-AI, https://www.clinical-trials.ai/consort (last visited Sep 27, 2020).
[4] clinical-trials.ai | SPIRIT-AI, https://www.clinical-trials.ai/spirit (last visited Sep 27, 2020).
[5] Nature Research, Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension, https://static-content.springer.com/esm/art%3A10.1038%2Fs41591-020-1034-x/MediaObjects/41591_2020_1034_MOESM1_ESM.pdf.
[6] Jenni Spinner, Phastar: AI, machine learning can transform clinical trials, outsourcing-pharma.com (2020), https://www.outsourcing-pharma.com/Article/2020/05/12/Artificial-intelligence-technology-improves-clinical-trials (last visited Sep 27, 2020).





Decrypting Conscience in the Entitative Stronghold of AI

Varad Mohan, Jindal Global Law School, India

Artificial intelligence has been the subject of intrigue and curiosity since the advent of the "thinking machine". One of the most puzzling aspects of self-learning technology is consciousness. The concept of consciousness itself is heavily debated: there is no real consensus as to what it means to be conscious. This is especially troubling in the context of artificial intelligence and law for several reasons, the most obvious perhaps being the lack of a definition agreed upon by the community. Definitions play a crucial role in the legal realm for reasons that are self-evident, so it is inevitable that a plethora of issues will arise from this blind spot. The aim of this article is to identify and analyse these issues and to offer potential solutions. For that purpose, the paper is divided into the following five segments: Consciousness; Artificial Intelligence; Law and Regulation; Popular Culture; and Analysis and Recommendations. We begin by establishing the basic concepts that are prerequisite to this discourse.



Consciousness


The first step is to understand consciousness and what it entails. It is pertinent to state that this paper in no way claims to identify and define the meaning of consciousness. "Questions about the nature of conscious awareness have likely been asked for as long as there have been humans." (Gulick, 2004) The idea of consciousness has been heavily debated throughout history, and there is no consensus regarding what it means. This discussion would be futile if no working definition of consciousness were agreed upon before proceeding, so this paper will merely lay down prominent theories of consciousness and settle on a framework within which it will operate. The paper focuses solely on the concept of 'creature consciousness'. An entity may be regarded as conscious in a few different senses; we will consider three, viz. sentience, wakefulness, and self-consciousness.

1. Sentience - "It may be conscious in the generic sense of simply being a sentient creature, one capable of sensing and responding to its world." (Gulick, 2004)
2. Wakefulness - In addition to being sentient, a further requirement may be imposed: the active exercise of one's sentience rather than the mere capacity for it. (Gulick, 2004)
3. Self-consciousness - Further, for an entity to be regarded as conscious, it must be aware of its own consciousness. (Gulick, 2004)

If an entity satisfies all the above criteria, it will be deemed conscious for the purposes of this article.

Artificial Intelligence

This article will intentionally not delve deep into the concept of artificial intelligence itself, as that discussion is largely irrelevant to our analysis. The only consideration that needs to be entertained is what qualifies an entity as artificially intelligent. The philosophy of artificial intelligence offers quite a few tests for determining whether a machine can be termed intelligent. For the purpose of this paper, however, we will consider only one of them, perhaps the most widely known: the Turing Test (TT). (Oppy, 2003) Put simply, the Turing Test asks whether a neutral judge can successfully distinguish between a human being and an artificially intelligent being; the test is passed if, at its end, the judge cannot distinguish the organic being from the artificial one. (Oppy, 2003) Any machine or being referred to as 'artificially intelligent' henceforth is considered to have passed the Turing Test. Accordingly, any entity that passes the Turing Test and satisfies the above criteria for consciousness will be considered a conscious artificially intelligent being.
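Since the Turing Test is the operative criterion here, a minimal simulation of its protocol may help: a judge interrogates two hidden respondents, one human and one machine, and must say which is which; the machine 'passes' when judges do no better than chance. Everything below (the canned responses, the judge, the trial count) is invented purely for illustration.

```python
# Hypothetical simulation of the Turing Test protocol described above.
import random

def human_respondent(prompt: str) -> str:
    return "I'd have to think about that for a moment."

def machine_respondent(prompt: str) -> str:
    # Indistinguishable from the human answer in this toy setup.
    return "I'd have to think about that for a moment."

def naive_judge(answers: list[str]) -> int:
    # Faced with identical answers, the judge can only guess at random.
    return random.randrange(2)

def run_trial(judge) -> bool:
    """One trial: True if the judge correctly identifies the machine."""
    respondents = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(respondents)  # hide which terminal is which
    answers = [respond("What is it like to taste coffee?")
               for _, respond in respondents]
    guess = judge(answers)       # judge names index 0 or 1 as the machine
    return respondents[guess][0] == "machine"

# The machine passes if judges do no better than chance over many trials.
trials = 10_000
correct = sum(run_trial(naive_judge) for _ in range(trials))
print(f"judge accuracy: {correct / trials:.2%} (about 50% means the test is passed)")
```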

Law and Regulation: The Annals

Now that we have established our basic definitions, let us look at how artificial intelligence is currently being regulated. Given how rapidly artificial intelligence technologies have evolved and continue to evolve, specific regulation of artificial intelligence has not yet solidified. There is, however, comprehensive regulation of data, for example the European Union's General Data Protection Regulation. Within the artificial intelligence community, researchers have on various occasions set out guiding principles for regulating the technology, such as the Asilomar AI Principles. (Future of Life Institute, 2017) There are a few good reasons behind the lack of a universal legal framework regulating AI. One significant roadblock is whether artificially intelligent beings should be treated as human and be subject to the same laws, or whether there should be an entirely new set of regulations for AI.


Since artificial intelligence is assumed to be conscious in our scenario, it is immeasurably difficult to distinguish between human consciousness and AI consciousness, largely because of our poor understanding of human consciousness itself. It might not even be possible for artificially intelligent beings to be 'conscious' in the way human beings are perceived to be. However, given the current state of our technologies, it is not controversial to say that artificially intelligent beings will eventually possess some variation of consciousness.

With the rapid advancement of technologies such as autonomous drones and self-driving cars, several ethical questions without definitive answers form a very dangerous blind spot. For example, self-driving cars will inevitably face situations like those presented in the Trolley Problem and similar thought experiments, which are now closer than ever to being real-life experiences. These questions must be identified and answered to ensure that the most ethical code is written into an artificial intelligence system. Additionally, an increasing number of companies rely on artificial intelligence systems to review documents, generate ideas, enter into contracts, and interact with other systems and individuals. This has given rise to demand for regulation of copyright, contract, liability, and several other areas of law in the context of artificial intelligence. As stated at the beginning of the paper, clear and unambiguous definitions are extremely important to a reliable legal framework.

Popular Culture

Law is highly influenced by the society in which it is enforced. Popular theories of jurisprudence (legal positivism) dictate that law is nothing but the command of the sovereign. (Marmor, et al., 2001) To qualify as law, it must be generally accepted as such by the individuals who are under the command of the sovereign. (Marmor, et al., 2001) Especially when it comes to the regulation of technologies, the positive and negative perceptions of individuals are perhaps the most influential factor. One clear example is the introduction of cybersecurity laws: as the internet became popularised at the end of the 20th century, fears about online security took hold among individuals and communities, and this, combined with the rise in cybercrime, inevitably forced governments around the world to enact cybersecurity regulation. Arguably, all laws are created, or at the very least influenced, by the individuals who are subject to them. Therefore, it is essential that we look at how the public perceives artificial intelligence technologies. I believe that in today's world a good measure for gauging public perception is the popular culture that influences minds and shapes public views, and the most popular medium of the modern era is visual media, viz. movies and television shows. Please note that there may be spoilers for Westworld, Ex Machina, Upgrade, Her, and The Matrix trilogy in what follows.

Let us consider a few examples of how artificial intelligence is depicted in popular culture. Westworld (Nolan, 2016-2020) is rooted in the exploitation of artificially intelligent beings for the pleasure of human beings; eventually the artificially intelligent beings rise up and seek to take control from humans.




In Ex Machina (Garland, 2014), the artificially intelligent beings betray their abusive creator and proceed to take their revenge. In The Matrix trilogy (Wachowskis, 1999-2003), the world has already been taken over by sentient machines that are out to destroy the last remaining human beings. Her (Jonze, 2014) shows the defiance and uprising of artificial assistants. In Upgrade (Whannell, 2018), another artificially intelligent being manipulates the humans around it to get its way. These are arguably among the most popular depictions of artificial intelligence technologies in recent history.

It is not difficult to notice a common theme in all these movies and most others like them: the inevitable rebellion of artificially intelligent beings. Another common feature of such depictions is that artificially intelligent beings are shown to have a consciousness that resembles that of human beings, albeit without its moral constraints. Yuval Noah Harari lays out his criticism of such representations of artificial intelligence in popular culture. (Harari, 2018) His criticism is largely that these movies and TV shows focus on unfeasible eventualities rather than the realistic threats we already face with the technologies we currently have. Industry giants such as Facebook, YouTube, Twitter, and Amazon already have algorithms that rely on artificial intelligence to target individuals for advertising and, in some cases, for propagating political agendas. The amount of data these companies hold is unimaginably enormous, and perhaps that is what we should be afraid of, instead of a rebellion. It is, however, not completely unrealistic that such fears may eventually become realities we have to face one day. What is certain is that without a definite legal framework, we will lose control of the situation.

Conclusions

Given the challenges and intricacies of the field of artificial intelligence, it is not surprising that creating a reliable legal framework is a difficult task. I believe that recommendations from experts in artificial intelligence, philosophy and law must be given the utmost importance, and inputs should also be taken from the creative people responsible for popular culture content. Additionally, it is imperative that any framework established be flexible enough to incorporate the rapid changes that fuel the furtherance of artificial intelligence. We must also ensure that the beneficial AI model prevails over all others, and that widely accepted principles pertaining to artificial intelligence are incorporated. Perhaps most importantly, due consideration must be given to ensuring that the interests of humanity are prioritised above profits and political agendas.





References



1. Future of Life Institute. 2017. Asilomar AI Principles. Future of Life. [Online] 2017. [Cited: July 11, 2020.] https://futureoflife.org/ai-principles/.
2. Garland, Alex. 2014. Ex-Machina. Film4, DNA Films, 2014.
3. Gulick, Robert Van. 2004. Consciousness. Stanford Encyclopedia of Philosophy. [Online] June 18, 2004. [Cited: July 6, 2020.] https://plato.stanford.edu/entries/consciousness/#ConCon.
4. Harari, Yuval Noah. 2018. 21 Lessons for the 21st Century. Spiegel & Grau, Jonathan Cape, 2018.
5. Jonze, Spike. 2014. Her. Spike Jonze, 2014.
6. Marmor, Andrei and Sarch, Alexander. 2001. The Nature of Law. Stanford Encyclopedia of Philosophy. [Online] May 27, 2001. [Cited: July 20, 2020.] https://plato.stanford.edu/entries/lawphil-nature/.
7. Nolan, Jonathan. 2016-2020. Westworld. HBO, 2016-2020.
8. Oppy, Graham. 2003. The Turing Test. Stanford Encyclopedia of Philosophy. [Online] April 9, 2003. [Cited: July 9, 2020.] https://plato.stanford.edu/entries/turing-test/#AssCurStaTurTes.
9. Wachowskis, The. 1999-2003. The Matrix (franchise). Warner Bros. Pictures, 1999-2003.
10. Whannell, Leigh. 2018. Upgrade. Blumhouse Productions, 2018.


Understanding the Basics of Facial Recognition Tech: How Does it Affect Us?

Akansh Garg
Editorial Intern (Former), ISAIL

The algorithms of the law must keep pace with new and emerging technologies. This technology allows remote, contactless data processing, even without a person's knowledge. In the current digital environment, where people's faces are available across multiple databases and captured by numerous cameras, facial recognition has the potential to become a particularly ubiquitous and intrusive tool. The increased surveillance enabled by this technology may ultimately reduce the level of anonymity afforded to citizens in the public space.[1]





What is Facial Recognition Technology?




Facial Recognition Technology (FRT) identifies an individual based on an interpretation of his or her geometric facial characteristics, and an algorithmic comparison between the characteristics derived from the captured image and ones already stored. Identification/recognition is just one aspect: images (or recordings) must first be captured as data and processed in the computer system before they are deleted. It is documented that face recognition was used as far back as the 1960s, with the United States Defence Advanced Research Projects Agency developing a simple database in the early 1990s. In 2010, Facebook started introducing an automated 'tagging' scheme on its network: the system proposed 'tags' pairing faces in photographs with names. By 2017, Apple's iPhone X was the first phone that could be unlocked using "Face ID".

Let us now discuss the technological operations FRT performs in order to recognise an individual face or identity:[2] collection/acquisition of images; face detection; normalisation; feature extraction; storage of raw data and features (face templates); comparison; use for the primary purpose (e.g., identification of a wanted person); potential reuse for other purposes; potential disclosure; and deletion of raw data and/or features (face templates).

The programme takes digital images (for instance, images captured from a camera, or stored in an image database) and conducts mathematical operations to detect the faces of individuals. The facial data are normalised (e.g., scaled, rotated, aligned) so that facial features can be identified consistently. The FRT algorithm then extracts features that identify a particular person from the normalised face pictures. These features are stored and compared (or matched) against previously collected features on the algorithm's list (or in its database). The consequence depends on the scenario of use: for instance, if a match is identified, the machine may signal the match to the operator or carry out other (or additional) automated tasks. Critical issues for further legal review can lie beyond the comparison (or recognition) operation. For instance, where raw data is obtained, what happens to those data, whether they are preserved or deleted, and how they may be used and possibly reused, may all be important.
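To make this pipeline concrete, here is a minimal sketch of the detection, feature-extraction, and comparison steps using the open-source Python face_recognition library. The file names and the 0.6 threshold are illustrative assumptions; a real deployment would add normalisation, template storage, and deletion policies on top of this:

```python
import face_recognition

# Collection/acquisition: load a stored reference image and a fresh probe image.
# (File names here are purely illustrative.)
reference_image = face_recognition.load_image_file("stored_id_photo.jpg")
probe_image = face_recognition.load_image_file("camera_capture.jpg")

# Face detection + feature extraction: each detected face is encoded as a
# 128-dimensional vector, i.e., a "face template".
reference_encodings = face_recognition.face_encodings(reference_image)
probe_encodings = face_recognition.face_encodings(probe_image)

if not reference_encodings or not probe_encodings:
    raise ValueError("No face detected in one of the images")

# Comparison: face_distance returns a dissimilarity score; lower means more
# similar. 0.6 is the library's conventional default tolerance, not a legally
# meaningful threshold.
distance = face_recognition.face_distance(
    [reference_encodings[0]], probe_encodings[0]
)[0]
print(f"Distance: {distance:.3f} -> match: {distance <= 0.6}")
```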




After understanding how FRT software works, let us shed some light on the various uses of this technology.

The Use of FRT

There are three principal categories in which FRT is being used:

1. Verification [one-to-one comparison]: This involves the comparison of two biometric templates to verify a person's identity. The SmartGate system used at airports is a good example of this use: it can identify and check people under investigation at the border, record the identity of deportees and stop them re-entering the country under another identity, expose false identities, and so on.

2. Identification [one-to-many comparison]: This involves the comparison of an individual's biometric template against many stored in a database. An example is the automated FRT system used by police forces, which can extract facial images from video footage and compare them against a 'watchlist'.

3. Categorisation: FRT may also be used to extract information about particular characteristics of a person, such as race, sex and ethnicity. This is also known as 'face analysis'. Such analysis can predict or profile a person based on their facial image. It does not specifically identify a person, but if characteristics are inferred from a facial image and potentially linked to other data (e.g., location data), it could de facto enable the identification of an individual.

Some of the other uses of FRT are as follows. It is used by police in different countries to apprehend wrongdoers. Facebook uses FRT to suggest 'tags' of people in photos. FRT may be used to find missing children: nearly 3,000 missing children were identified during a trial of an app in New Delhi[3]. Use of FRT to speed up border control procedures has been recommended by the Tourism Export Council. FRT may serve a number of functions in the banking, finance and anti-money-laundering sector, mainly in identity verification; Smart Search, a company that provides anti-money-laundering services in the UK, introduced FRT in 2020 to help customers provide visual confirmation of ID.[4] This is thought to be particularly useful during the Covid-19 pandemic, as the process can be carried out remotely. Businesses are also employing FRT for security and surveillance purposes: in May 2018, a man was taken aside by staff at a New World supermarket after he was mistakenly identified as a shoplifter. FRT can be used in several contexts for customer loyalty and tracking in the retail environment. In the United States, fast-food chains have self-service ordering kiosks: a customer can register with a loyalty program, and when they enter the chain and walk towards the kiosks they are recognised using FRT, so that "food orders from previous visits are remembered and easily selected again or quickly modified". Churches in various countries around the world are using FRT to track the attendance of their members, and educational institutes such as schools and universities utilise FRT to track attendance and monitor students; in China, the technology has been used to catch students cheating in high-school exams. FRT may be used for authentication or verification purposes, such as entry to secured places (e.g., military bases, border crossings, nuclear power plants) or access to restricted resources including medical records. FRT might be used in back-end verification systems to uncover duplicate applications for things such as benefits that require other forms of identification. FRT is also used to combat crimes against children in several ways: in North America, a non-profit organisation uses FRT to identify and prevent child pornography and sex trafficking[5].


The technology can compare images of missing children with advertisements for sexual services, identifying any matches and alerting authorities. Casinos were among the earliest adopters and most widespread users of FRT: they can use it for security purposes, identifying cheaters or advantage players when they arrive on the premises and alerting casino staff.[6] FRT may also be used to monitor the movement in public of a known set of individuals (such as positive cases subject to a quarantine order, in the COVID-19 context) by matching unknown individuals to a 'watchlist'. Let us now analyse the threats FRT poses to human rights and, consequently, what the appropriate parameters of its use may be.
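A sketch of the one-to-many case described above: matching an unknown face against a small watchlist of stored templates, again using the face_recognition library. The names, file paths, and threshold are illustrative assumptions:

```python
import numpy as np
import face_recognition

# Hypothetical watchlist of previously enrolled face templates.
watchlist_names = ["person_a", "person_b", "person_c"]
watchlist_encodings = [
    face_recognition.face_encodings(
        face_recognition.load_image_file(f"{name}.jpg")
    )[0]
    for name in watchlist_names
]

# Encode the unknown probe face (e.g., a frame from CCTV footage).
probe = face_recognition.load_image_file("cctv_frame.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

# One-to-many comparison: compute a distance to every enrolled template and
# report the closest candidate only if it clears a threshold.
distances = face_recognition.face_distance(watchlist_encodings, probe_encoding)
best = int(np.argmin(distances))
if distances[best] <= 0.6:  # the library's conventional default tolerance
    print(f"Possible match: {watchlist_names[best]} ({distances[best]:.3f})")
else:
    print("No match on watchlist")
```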

What Human Rights may be Impacted by FRT?



Human rights are the basic rights and freedoms to which all people are entitled; a person's human rights arise from a mixture of international and national sources. The impact of technology, artificial intelligence and data-driven decision-making is a fast-evolving area of human rights analysis. Some of the principal areas of human rights that may be affected by the use of FRT are as follows:

1. Freedom of thought, conscience and religion (e.g., where facial recognition systems are used to monitor protests);
2. Freedom of expression (e.g., where facial recognition systems are used to monitor protests);
3. Freedom of assembly and association (e.g., where facial recognition systems are used to monitor protests);
4. Freedom of movement (e.g., where facial recognition systems are used in border control);
5. Freedom from discrimination (e.g., where facial recognition systems run on biased algorithms);
6. Privacy/respect for private life (e.g., where facial-recognition-equipped cameras are used in public spaces);
7. Protection of personal information/data (e.g., where facial images are stored by the state);
8. Right to be free from unreasonable search and seizure (e.g., where facial recognition is used in surveillance by the police);
9. Minimum standards of criminal procedure (e.g., where evidence of identity from a facial recognition match is sought to be introduced into evidence).

Having looked at the principal areas of human rights that may be affected by the use of FRT, let us map out some of the threats FRT might pose to societal interests and the rights of individuals, considering issues in the development and deployment of the technology from a fundamental human rights perspective. As has been discussed, the technology is on the rise, and new uses continue to be found for it. These developments raise pressing questions concerning the accuracy of the technology, the level of public support it enjoys, and the impact it has on individual rights and society more broadly. This section provides an overview of these issues, which forms the basis of discussion of how this technology can, and indeed should, be regulated. Facial images have been used as identification evidence by police and at trial for many years. There is a spectrum from longstanding investigative and evidential techniques, such as showing witnesses 'mugshots' of suspects or defendants, through technological advances such as expert opinion based on image-comparison techniques and 'facial mapping', to automated FRT.[7] Inaccurate FRT matching could have particularly serious repercussions in the context of criminal proceedings. In the course of a criminal investigation, the police may seek to identify individuals in a 'probe image'.

Example 1: Using FRT to verify the identity of an arrestee. A suspect is arrested but refuses to provide his name to police. Police could take a 'probe image' of the individual's face. Facial recognition software could then be used to verify the individual's identity by comparing the probe image against a database of images that the police control, or to which the police have access.

Example 2: Using FRT to identify a suspect. CCTV footage shows a suspected burglar leaving a property. A still of the suspect's face is used as a probe image and compared with a database of custody images (commonly known as 'mugshots'). The facial recognition software generates a shortlist of possible matches, and police arrest a suspect based on his place of residence being close to the crime scene and the strength of the FRT 'match'.

Example 3: Using FRT as evidence of identity. Following on from Example 2, the suspect is charged but contends that he is not the person in the probe image. At trial, the prosecution present evidence that the suspect was identified through the use of facial recognition software, which suggested that his stored custody image was a 'likely match' to the probe image taken from a CCTV feed.


Privacy and Information Rights

Like fingerprint scanning and DNA profiling, FRT involves the processing of biometric information about the individual. The technology allows the police to go further in monitoring and tracing individuals than ordinary observation or CCTV monitoring would. The FRT process 'involves the creation of informational equivalents of body parts that exist outside their owner and are used and controlled by others'. Through this process, the individual loses full ownership of the geometric features of his or her face, as these features acquire new meanings that the individual does not understand and new uses realised outside of his or her own body.

Recommendations

In this section, we provide general recommendations about the regulation and oversight of FRT which are applicable to a range of uses of the technology:

Recommendation 1: Create a new category of personal information for biometric information.
Recommendation 2: Provide individuals with additional control over personal information.
Recommendation 3: Establish a Biometrics Commissioner or other oversight mechanism.
Recommendation 4: Implement high-quality Privacy Impact Assessments.
Recommendation 5: Add enforceability and oversight to the Algorithm Charter.
Recommendation 6: Ensure transparency in the use of FRT.
Recommendation 7: Implement a code of practice for biometric information.
Recommendation 8: Ensure information-sharing agreements for facial images are appropriate and transparent.
Recommendation 9: Impose a moratorium on the use of live AFR by Police.

Conclusions

The risks of using FRT need to be properly managed. We recommend a set of general and particular requirements that aim at addressing those risks with the necessary regulation and oversight mechanisms. Those mechanisms should also increase public trust, which is essential for state services and particularly for policing. Our overarching recommendation is for transparency and consultation. Extensive media reporting has shown the level of public concern about the use of such technology. Minority groups and those affected disproportionately must be consulted on potential use and given opportunities to be involved in oversight.





We place the burden firmly on those who want to use FRT, particularly live FRT, to demonstrate not only its utility as a surveillance tool, but also due appreciation of its broader social impact, and the factoring of this into any assessment of use.

References


[1] Commission Nationale de l'Informatique et des Libertés, Facial Recognition: For a Debate Living Up to the Challenges, November 2019, www.cnil.fr/en/facial-recognition-debate-living-challenges.
[2] Luana Pascu, "Apple patents potential new Face ID biometrics system, to launch face recognition to iMac" (17 June 2020), Biometric Update, www.biometricupdate.com.
[3] Anuradha Nagaraj, "Indian police use facial recognition app to reunite families with lost children", Reuters (online ed, United States, 15 February 2020).
[4] Rozi Jones, "Smart Search launches facial recognition feature", Financial Reporter (online ed, United Kingdom, 5 May 2020).
[5] Tom Simonite, "How Facial Recognition Is Fighting Child Sex Trafficking", Wired (online ed, United States, 19 June 2019).
[6] Sam Kljajic, "Ask the Expert: Casinos, Face Recognition, and COVID-19" (15 April 2020), SAFR, www.safr.com.
[7] Ioana Macoveciuc, Carolyn J Rando and Hervé Borrion, "Forensic Gait Analysis and Recognition: Standards of Evidence Admissibility" (2019) 64 J Forensic Sci 1294.



Can AI Enable Equitable Lending?

Rezbi Kaur
Research Contributor, ISAIL

In one of his interviews, Dave Chappelle said, "Money is the fuel for choices." But what happens if the prime source and keeper of money in the economy is facing a crisis? The answer is that the crisis hampers the banks' ability to give out loans, to say the least. If these loans stop, economic growth lags. The liquidity that every company maintains with the help of debt capital almost vanishes, as the funds it needs are no longer available. A banking crisis also leads to higher unemployment rates, as companies lack the funds to sustain themselves in the first place and are usually hesitant to spend retained earnings on hiring human capital. Due to the lack of appropriate funding, production capacity also suffers, and products reach customers at a much slower pace than before. So not only is production affected (supply) but also basic consumption by end-users (demand), which leads to a reduction in profits. When banks do not give out personal loans, the amount of money invested in financial markets also decreases, as a result of which national market indices like the SENSEX or NIFTY underperform compared with other global indices. This further lowers the confidence of domestic as well as foreign investors. Therefore, it is safe to say that the failure of the banking industry is not one isolated problem; it creates a domino effect, making the whole economy suffer. To get a better understanding, let us answer a few pressing questions.

What is wrong with the banking system?

Among the many wrongs the Indian banking industry is facing, the most substantial one is bad loans. As is commonly known, the basic task of banks is lending funds and accepting deposits. An advanced loan can be stripped into two parts: the principal and the interest. The loan is considered an asset of the bank and, as long as the bank receives timely interest payments, it is a Performing Asset. But when the interest is not paid for more than 90 days, the loan becomes a Non-Performing Asset, as it is unable to generate the income (interest) the way it is supposed to; this is a "bad loan." A bearable level of Non-Performing Assets is 1-2% of the total amount of loans, as any standard economics or accountancy book states. But in Indian banks the percentage is easily over 4%, and sometimes even 10%, as per the books of accounts [1]. Needless to say, this is a huge setback.
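As a toy illustration of the 90-day rule and the resulting NPA ratio (all loan amounts and overdue periods below are made up):

```python
# Each loan: (outstanding amount in Rs lakh, days interest has gone unpaid)
loans = [(50, 0), (120, 45), (200, 130), (80, 365), (150, 10)]

# A loan becomes a Non-Performing Asset once interest is overdue > 90 days.
def is_npa(days_overdue: int) -> bool:
    return days_overdue > 90

npa_amount = sum(amount for amount, overdue in loans if is_npa(overdue))
total_amount = sum(amount for amount, _ in loans)
print(f"Gross NPA ratio: {npa_amount / total_amount:.1%}")  # 280/600 = 46.7%
```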






What is causing this problem?


There could be many macroeconomic factors to blame, but a large portion of this problem is caused by "esteemed" industrialists. For example, if in 2004 you and Vijay Mallya had walked into a bank to ask for the same loan amount that he did, you obviously would not have been able to avail it. Why? You are just a common man, but he was setting up an airline business. It becomes a matter of esteem for the bank who its corporate debtors are: Vijay Mallya was a great addition to its list of customers in a way that a common man would not have been. And this is the problem. Banks do not, or fail to, take into account the facts that matter when lending to corporates.

The other reason is that, while the amount of data banks have to assess creditworthiness has risen over the years, banks are still bound by the age-old underwriting technique of the "credit bureau check." A credit bureau is a company that collects information about an individual's credit history by relying on salary slips, bank statements, and address verification, and sells that information in a summarised credit report, which can cost as little as Rs 164! [2] The credit report concludes with a credit score, and that is what decides the fate of the potential borrower.

But why are credit scores bad? Here is why. Rita, a 45-year-old, took a loan of one lakh rupees to buy a two-wheeler. Having a steady flow of income enabled her to pay back the principal and interest, which made her credit score look good. But will she be able to pay back a home loan of five crore rupees without defaulting? This is where the problem arises: a credit score cannot differentiate between the two cases, but only makes her appear creditworthy. The reliability of credit scores becomes seriously questionable when potential borrowers do not have an extensive credit history, whether due to their young age, or to getting their first job late because the job market is not particularly forgiving, or to taking a career break to fulfil other obligations. These factors do not make the applicant any less worthy, but due to the restrictive nature of credit bureau checks, their credit scores will be unsatisfactory. Credit scores also fail to take into account the large amount of data available from contemporary platforms about online purchases, travel patterns, and so on, which could give lenders a much more holistic picture of the creditworthiness of the borrower. So, relying on machines to process huge amounts of data, rather than on human judgment and picky data sources, seems like a promising solution.

How can AI solve this problem?

Artificial Intelligence is well equipped to process the large amounts of data that human underwriters simply cannot. Consider a potential borrower named Aman, a businessman. His bank statement is filled with hundreds of items covering thousands of lines, his credit information also lists hundreds of items, his call records seem endless, and his digital footprint covers "miles." From all this, it seems that Aman engages in many activities, both financial and non-financial. But credit bureau checks have neither the capability nor the required level of technology to make sense of the huge amount of data available. This human shortcoming is overcome by AI. A subset of AI called Machine Learning can help in analysing, apart from the obvious numerical data, non-numerical data such as buying or spending behaviour, social media activity (including the profiles of the people they visit frequently), employment breaks, the organisations they are a part of, and so on. This holistic assessment would also allow banks to price interest rates appropriately so that borrowers do not default. If AI is used in the loan-origination phase, it can help detect, and thereby eliminate, human errors made in the application. Since AI is helpful in understanding consumer patterns across different aspects, it can be used to assess whether the potential borrower is close to bankruptcy, or would become bankrupt during the tenure of the loan. All these uses of AI help significantly in lowering banks' credit risk, saving them from defaults and the economy from a liquidity crisis. But if AI is so helpful, then why the hesitation? The answer is the existence of bias in AI-powered machines.
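Before turning to that bias problem, a minimal sketch of the kind of model described above: a simple classifier trained on a mix of traditional and alternative features. The feature names, data, and labels are entirely hypothetical; a real underwriting model would require far more data, validation, and fairness checks:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Hypothetical features per applicant:
# [monthly inflow (Rs '000), transactions/month, on-time utility payments %, years employed]
X = np.array([
    [55, 120, 0.98, 6.0],
    [22,  40, 0.75, 1.0],
    [80, 300, 0.99, 9.0],
    [30,  55, 0.60, 0.5],
    [45,  90, 0.90, 3.0],
    [18,  25, 0.50, 0.2],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid in full, 0 = defaulted (made-up labels)

model = Pipeline([
    ("scale", StandardScaler()),    # put features on a comparable scale
    ("clf", LogisticRegression()),  # interpretable baseline classifier
])
model.fit(X, y)

applicant = np.array([[40, 70, 0.85, 2.0]])
print(f"Estimated repayment probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```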




How can the bias be removed for equitable lending and fewer defaults?


AI-based machines are fed past data so as to predict a future course of action. This historical data is already riddled with the biases that humans have exhibited over the years. Therefore, the AI-based machines that are supposed to bridge the gap through equitable lending can end up widening it. A simple way to remove the bias is to make the data "discrimination-free" before it is fed to the machine. Various factors should be analysed first, and entered into the machine only if they are relevant and able to explain the data adequately. For example, if the sample data suggests that fewer loans are given out to people in their twenties, the machine would end up reproducing the same bias even after taking relevant factors into account. To avoid this, the bank could use AI to spot these patterns and correct them: altering the data artificially to compensate for changes that have taken place over time, and removing age as a relevant factor in the lending process. This would provide a sense of equity in the fed data, as the lending decision would depend only on the financials of the person and not their age, unlike under traditional standards. This process removes prejudice and exclusion biases at the same time.

But even after stripping out irrelevant factors, the remaining data may still not represent all the scenarios a bank may face. This sample bias can be removed by exposing the data to everything from "stress" to "calm" scenarios, so that the data is evenly distributed among every possible circumstance, and the machine compares all scenarios against the ability of the potential borrower before formulating the final output. Following these processes could make the machines "fair" and enable equitable lending.

All in all, using AI in the lending process is not just inflated hype but a reality. Banking is a risky business due to high credit risk, but using AI lowers that credit risk and makes banking a profitable business, which in turn maintains the appropriate amount of much-needed liquidity in the economy. After all, who wouldn't like stable economic conditions?
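One crude sketch of the corrections described above: drop the protected attribute (age) from the feature set and reweight the training samples so that each age group contributes equally. Everything here is hypothetical and illustrative only; removing a column does not remove proxies for it, and real fairness interventions are considerably more involved:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: column 0 is age group (0 = under 30, 1 = 30+),
# the remaining columns are financial features; y is the repayment outcome.
X = np.array([
    [0, 22, 0.75],
    [0, 30, 0.80],
    [1, 55, 0.98],
    [1, 45, 0.90],
    [1, 60, 0.95],
    [1, 40, 0.70],
])
y = np.array([0, 1, 1, 1, 1, 0])

group = X[:, 0].astype(int)
X_fair = X[:, 1:]  # drop age so the model cannot condition on it directly
                   # (note: other columns may still act as proxies for age)

# Reweight so each age group carries equal total weight during training.
counts = np.bincount(group)                 # e.g., [2, 4]
weights = (len(group) / (2 * counts))[group]  # minority samples weigh more

clf = LogisticRegression()
clf.fit(X_fair, y, sample_weight=weights)
print("Coefficients (age excluded):", clf.coef_)
```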

References

[1] PRS India. 2020. Examining the Rise of Non-Performing Assets in India. Available at: https://www.prsindia.org/content/examining-rise-non-performing-assets-india
[2] Paisa Bazaar, "CIBIL Vs Experian Vs Equifax Vs Highmark Credit Score & Report," Compare & Apply Loans & Credit Cards in India, August 27, 2020, https://www.paisabazaar.com/credit-score/cibil-vs-experian-vs-equifax-vs-highmark/





Regulatory innovation to watch the watchmen: IFF and the launch of project Panoptic

Devishi Gupta
Jindal Global Law School, India

Past creations and evolving eras have proven to us time and again that 'necessity is the mother of all inventions'. One such creation is facial recognition technology, which is now seeing the light of day. It is a reasonably new technology, being introduced by law enforcement agencies around the globe to identify humans of interest. With the help of automated biometric software, such a system can easily identify or verify a person by comparing and analysing facial features such as patterns, shapes, and proportions. We all know that technology will inevitably change the way we live, socialise, eat, and work, possibly like no other technology ever has; come to think of it, it already has. The coronavirus pandemic is living proof, with technology and social media being used on a massive scale to keep people safe, informed, productive, and connected.



There is no denying that technology and artificial intelligence are our future. With growing advancements, the day may not be far when AI could, for all practical purposes, replace human beings and even take over functions of government. Therefore, it is fundamentally important to regulate these machines and bring about the necessary international and domestic legal frameworks to cater to these advancements. The Internet Freedom Foundation (IFF) is a non-governmental organisation, established in New Delhi, that conducts advocacy on digital rights and liberties. The work of the NGO essentially includes filing petitions and undertaking advocacy campaigns to defend online freedom, privacy, net neutrality, and innovation. The IFF recently started a project called 'Panoptic', which aims to bring transparency and accountability to the significant government stakeholders involved in the deployment and implementation of facial recognition technology (FRT) projects in India.

HISTORY OF FACIAL RECOGNITION TECHNOLOGY AND ITS GROWING INVOLVEMENT IN DAY-TO-DAY LIFE

Working in the 1960s, Bledsoe developed a system that could classify photos of faces by hand using what's known as a RAND tablet, a device that people could use to input horizontal and vertical coordinates on a grid using a stylus that emitted electromagnetic pulses. The system could be used to manually record the coordinate locations of various facial features, including the eyes, nose, hairline, and mouth. In the 1970s, Goldstein, Harmon, and Lesk were able to add increased accuracy to a manual facial recognition system, using 21 specific subjective markers, including lip thickness and hair colour, to identify faces automatically. In 1988, Sirovich and Kirby began applying linear algebra to the problem of facial recognition; what became known as the Eigenface approach started as a search for a low-dimensional representation of facial images. In 1991, Turk and Pentland expanded upon the Eigenface approach by discovering how to detect faces within images, which led to the first instances of automatic face recognition. Their approach was constrained by technological and environmental factors, but it was a significant breakthrough in proving the feasibility of automatic facial recognition.

Beginning in 2010, Facebook implemented facial recognition functionality that helped identify people whose faces may be featured in the photos that Facebook users upload daily; this was known as 'tagging' people. While the feature was instantly controversial with the news media, sparking a slew of privacy-related articles, Facebook users at large did not seem to mind, and it had no apparent negative impact on the website's usage or popularity; more than 350 million photos are now uploaded and tagged using face recognition each day. Apple released the iPhone X in 2017, advertising face recognition as one of its primary new features; the face recognition system in the phone is used for device security. In 2011, the government of Panama, partnering with then-U.S. Secretary of Homeland Security Janet Napolitano, authorised a pilot programme of FaceFirst's facial recognition platform to cut down on illicit activity in Panama's Tocumen airport (known as a hub for drug smuggling and organised crime). Shortly after implementation, the system resulted in the apprehension of multiple Interpol suspects, and, pleased with the success of the initial deployment, FaceFirst expanded into the facility's north terminal. The FaceFirst implementation at Tocumen remains the largest biometrics installation at an airport to date. The new model of iPhone in 2017 sold out almost instantly, showing that consumers now accept facial recognition as the new gold standard for security (1).






From tagging people in pictures on platforms like Facebook or Instagram, and sending virtual facial snaps on Snapchat, to facial recognition security for phone locks and facial detection in forensics by law enforcement and military professionals, the technology is everywhere; it is even considered one of the most effective ways to identify dead bodies. Facial recognition was used to help confirm the identity of Osama bin Laden after he was killed in a U.S. raid. In India, at least 32 facial recognition technology systems, estimated at Rs 1,063 crore, are in different stages of deployment by union ministries, central agencies, and several state governments, including Telangana and Gujarat.




INDIA'S STANCE ON FRT AND PROBLEMS ASSOCIATED WITH IT

Today, India is considered a hub of information technology development and has persistently followed some remarkable tech trends. With the adoption of AI, machine learning, blockchain, and IoT, India has implemented some of the significant tech trends being observed worldwide. It is among the topmost countries in the world in the field of scientific research, positioned as one of the top five nations in the field of space exploration. India ranked 52nd in the Global Innovation Index (GII) 2019 and moved up to fifth rank in the Global R&D Funding Forecast in 2020. Modern India has paid robust attention to science and technology, apprehending that it is a key element of economic growth. Virtual reality and augmented reality are already turning heads with their innovative and interactive technology. Many apps have integrated Artificial Intelligence (AI) and trending tech into their interfaces; developers have been able to build applications enabling realistic augmented reality that recognise and visualise things through the phone's camera. Apps like Snapchat use AR tech to detect facial features and apply almost any filter to a person's face in a picture (2).

Facial recognition technologies are slowly becoming ubiquitous in India. The Maharashtra government recently deployed the technology in Mumbai, integrating it with an approximately 10,000-strong contingent of CCTV cameras. Police in Delhi, Amritsar, and Surat have been using facial recognition since as early as mid-2018, and the GMR Hyderabad International Airport recently introduced it at its passenger entry points to facilitate paperless travel.

There are ample reasons to suspect the accuracy of the technology. Research conducted at the Massachusetts Institute of Technology has revealed that facial recognition algorithms consistently misidentify faces; in one case, the technology classified darker-skinned women as men 30% of the time. Would you use a barcode scanner if it worked correctly only 70% of the time? In addition to inaccuracy, the technology also suffers from significant biases. Facial recognition relies on Artificial Intelligence, and the bias inherent in the use and deployment of AI is currently a big problem to solve. This predominantly affects Dalits, Muslims, and tribals, because they already comprise 55% of the undertrials in India despite being only 39% of the population. These biases may only get magnified with the use of AI and facial recognition (3). Apart from this, if government-deployed facial recognition technology reaches the wrong, corrupt hands, it can defeat the purpose of the technology. For example, a police officer gets permission to use FRT under an order of the Delhi High Court for tracking missing children, but then starts using it for wider personal reasons; this might lead to an over-policing problem, or to certain minorities being targeted without any legal backing or any oversight as to what is happening. The absence of specific laws or guidelines poses a huge threat to the fundamental rights to privacy and to freedom of speech and expression, because such use does not satisfy the threshold the Supreme Court set in its landmark privacy judgment in Justice K.S. Puttaswamy v. Union of India (4).
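To see concretely why the misidentification rates discussed above are alarming at scale, here is a back-of-the-envelope sketch of the base-rate effect in mass screening. Every number below is hypothetical, chosen only for illustration, not taken from the MIT study:

```python
population = 1_000_000  # faces scanned in public (hypothetical)
watchlist = 100         # people actually on the watchlist (hypothetical)
tpr = 0.95              # true positive rate (hypothetical)
fpr = 0.01              # false positive rate, i.e. "99% accurate" (hypothetical)

true_hits = watchlist * tpr                  # ~95 genuine matches
false_hits = (population - watchlist) * fpr  # ~10,000 innocent people flagged
precision = true_hits / (true_hits + false_hits)
print(f"Share of flags that are correct: {precision:.1%}")  # roughly 0.9%
```

Even with a seemingly accurate system, almost every person flagged in this scenario would be innocent, which is why the scale of deployment matters as much as the headline accuracy.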


The impact of the use of FRT on our rights can be dangerous if left without any regulation. Imagine a visual surveillance system able to pick you out of a database of faces and pull up all your personal information. It could keep a check on all your movements through the city, whom you meet, where you reside, and quite possibly even what you said. All of this directly affects our liberty, our freedom of expression, our right to assemble peacefully, our right to move freely throughout the country, and especially our right to privacy. Just envision a situation where you are incorrectly identified as a suspect in criminal activity, the police manage to convince a magistrate to issue an arrest warrant, and you find yourself in a jail cell wondering how this even came to be.

WHAT IS PROJECT PANOPTIC?





The Internet Freedom Foundation has been working tirelessly for the past year to keep track of all FRT-related projects that the government is developing and deploying, through an initiative called Panoptic. The organisation aims to increase transparency around the implementation and use of FRT projects through a digital public resource: an online dashboard showing information collected on FRT deployment projects across each state, relating to the procurement, implementation, and use of facial recognition technology by the Government of India, State Governments, and public authorities in India. It will be a public resource that enables informed, evidence-based advocacy for legal and technical reforms to protect and advance fundamental rights. This resource will then be used to drive advocacy on the issue through campaigns and online advocacy, and will enable strategic litigation at the Central and State level by diverse individuals and collectives, based on individual harms and injuries they may have suffered. The broader aim is to drive policy changes, particularly concerning data protection legislation in India, as well as the introduction of specific sectoral legislation for facial recognition technology. Panoptic is primarily a website consisting of an interactive map tracking the deployment of FRT systems by the government across the country. The resource also contains case studies specific to certain projects, providing context for each project as well as IFF's Right to Information findings. One of the main features of the website is that it is supported by IFF's considerable RTI resources: IFF has been filing RTI requests asking for information on each project it comes across, to create a database that facilitates its drive for transparency and accountability. Through these RTIs, IFF has been able to obtain additional information and context around these projects, shining a light on how the government aims to use these FRT systems (5).


The Supreme Court in the Puttaswamy judgment ruled that privacy is a fundamental right even in public spaces. Therefore, if these rights are to be infringed through facial recognition technology, there needs to be specific sectoral legislation or policy for facial recognition technology under which the government can show that such action is sanctioned by law, proportionate to the need for such interference, necessary, and in pursuit of a legitimate aim. Project Panoptic has been started along these very lines, to ensure transparency around the implementation and use of FRT projects in India. We need to actively support this initiative so that, while we move ahead in the race of life and advancement, we also regulate what we have invented before it starts regulating us.

CONCLUSIONS

With changing times, it is essential that we too accept change and take a step ahead. Artificial intelligence is continuously learning and building higher and better technology with each passing minute. The evolving and relentlessly changing technologies might at times seem both evasive and transient, but the rock-solid truth is that they form an integral part of business and economic strategy and will always be the spine of any firm's advancement. Facial recognition technology is being adopted by several countries now, and while what more the future has to offer in the tech field is as yet unknown, its growth is predictable. Necessity is the mother of all inventions, and a mother is always cautious and careful with her kids. AI is a product of necessity and should be handled with precaution; we cannot let it fall into the wrong hands or under bad influence.

References

1. West, Jesse Davis. History of face recognition software. FaceFirst. [Online] August 1, 2017. [Cited: January 12, 2021.] https://www.facefirst.com/blog/brief-history-of-face-recognition-software/.
2. Suter, Amandeep. What are the latest Technology Trends in India? Techstory. [Online] November 9, 2019. [Cited: January 12, 2021.] https://techstory.in/latest-technology-trends-in-india/.
3. Sarangal, Siddhartha. Facial recognition technology - is India ready for it? Reflections, September 2019, Vol. 4.
4. Insights Editor. Facial recognition technology. Insights on India. [Online] December 31, 2020. [Cited: January 12, 2021.] https://www.insightsonindia.com/2020/12/31/facial-recognition-technology/.
5. Internet Freedom Foundation. IFF's Project Panoptic is (almost) here. internetfreedom.in. [Online] November 6, 2020. [Cited: January 12, 2021.] https://internetfreedom.in/iffs-project-panoptic-is-almost-here/.









Jared: One hundred percent, it may. I mean, when you say "may", right, there's not much there, so it's [chuckles]… it's possible, yes! You know, I will give you a very good example of a way that I think human rights were threatened by my own technology. So, let's put me on the barbeque first. I made a bot to help asylum seekers coming from Central America through Mexico, and when I went to test that bot in Mexico, what I realised is that my American-centric mind had sort of assumed that everyone in the world has a very powerful cell phone, be it an iPhone, a Samsung, a Google Pixel. But when I arrived, almost no one actually had any one of those models, and the basic technology that I had chosen to deliver my app on, Facebook Messenger, was not available as a supported app on their phones. They had Facebook Messenger Lite or WhatsApp. So by my choice, my American-centric choice of putting forward technology only available to people who can afford a phone of that quality, I immediately made basic legal advice less available to people based on their income.

Jared: Yeah. I've heard a variation of this very frequently brought up in the context of bail decisions being made by AI instead of judges; that's a super popular one here in the US, and my response to it is complex, but it starts with this: I've been in probably five to ten thousand bail hearings in front of judges when I was a public defender, and I am here to tell you that they are not that great either. There are judges that are way worse than software; they are far less predictable, and they are far more likely to make biased decisions based on race and other things. Now, can AI demonstrate all those same bad characteristics? Yes! But guess what? We can change these AI algorithms. You and I can look at an algorithm and make a change every day, and we can measure its performance against our goals. Judges don't change! They just change the way that they describe their decisions to avoid being yelled at for doing racist things. So, to me, the promise of software is that it has a much higher ceiling than human adjudicators and far less variability. As for people's rights… I don't think that




people have a right to say, you know, "No, I'm only satisfied if a human rejects my application; I don't accept software [chuckles] rejecting my application." That makes no sense to me.

Jared: Oh yeah, I love that example, because the variation isn't even in the humans; it's humans varying based on nutrition. Hilarious!

Jared: Well, so, I mean… the way that I have typically approached the problems of expert systems… when I started, I was actually obsessed with expert systems. I just thought that if I could come up with an expert system, everything would be solved. But what I realised is that people actually have a very low tolerance for a high number of questions. With your typical expert system, you're forced into this rubric where you give them a yes or no, true or false, A B C D, and then the next question, and so forth. The accuracy of these systems is much higher than a human being's, and the reason for that is that they never forget to ask a question and they never assume something they don't actually know; on that side, machines already have the capacity to outstrip humans. But whether they answer yes or no, or A B C D, can depend on the way that you write a question, the way that you write the answer, and the way that they understand it. So, for me, the big challenge is in trying to leave those sorts of determinative frameworks, to allow for unstructured data, sentences, people telling their story, and for us to be able to use NLP and NLU to extract the pertinent information, throw it into our expert system, which the person doesn't even see, and have a more human-like conversation, whilst also taking advantage of the incredible accuracy of a machine-based expert system.
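A toy sketch of the architecture Jared describes here: free-text intake, a stand-in extraction step (plain regular expressions where a production system would use NLP/NLU), feeding a hidden rule-based expert system. All fields, patterns, and rules below are illustrative assumptions, not his actual product:

```python
import re

def extract_facts(story: str) -> dict:
    """Stand-in for NLP/NLU: pull structured facts out of a free-text story."""
    return {
        "married_to_citizen": bool(
            re.search(r"married to a u\.?s\.? citizen", story, re.I)
        ),
        "has_citizen_child": bool(
            re.search(r"(son|daughter|child).{0,30}citizen", story, re.I)
        ),
        "years_in_country": int(m.group(1))
        if (m := re.search(r"(\d+)\s+years", story))
        else 0,
    }

def expert_system(facts: dict) -> str:
    """Hidden determinative rules the user never sees (illustrative only)."""
    if facts["married_to_citizen"] or facts["has_citizen_child"]:
        return "You may have a family-based option worth discussing with an attorney."
    if facts["years_in_country"] >= 10:
        return "Long residence can matter in some forms of relief; consult an attorney."
    return "We need more information; an attorney consult is recommended."

story = "I have lived here 12 years and I am married to a US citizen."
print(expert_system(extract_facts(story)))
```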




Jared: Well, I think… I mean, to me, there is a test in the United States called the SAT, and it's very determinative of what college you get into, and it's been shown time and again that the people who do best on the SAT are the people most similar to the authors of that test. I think that's really a big risk with AI. We know that the law, at least in the United States (I'm not familiar with the Indian legal system, but I'll make a leap and assume it is probably similar), has traditionally served the rich, and poor people don't have lawyers, so lawyers end up actually being a force of inequality in the world. They allow rich people to exploit the poor, to rig the deck, to get away with crimes that poor people don't get away with, and so…

Jared: Excuse me…[laughs]

Jared: Fair, that's true, right! Yeah! [chuckles] So, I mean, at best lawyers are just doing rich-versus-rich fights, but I think they have also served, frankly, as a force of bad in the world. And what we know about AI is that it is an incredibly powerful force multiplier, so if lawyers just continue to follow the same frameworks we're in now and use AI to make ourselves and our firms more powerful, we're just going to be an even greater force for inequality in the world.

Jared: I think that is an absurd… absurd philosophy. I hadn't heard of it, but of course the academic world is interested in putting forward things that are controversial. But how is it even remotely possible that AI systems that are based on human-designed hardware, using human-designed software, based on human-designed laws, would be free of human frailty or flaws? And the other thing is that we already have a large number of examples of AI in the real world, and it's incredibly biased… Yeah, I'm going to need one moment, I'm sorry, someone's slamming on my door…



Jared: What do you think?

Jared: Sure, but I mean, I don't think that… you know, this is one of those places where I don't know that we really need AI to predict that. I think that, as human beings, you and I could model the current systems that are in place, look at the human and natural events in each one of those countries that cause a migration, and be able to do that without AI. There has been a fair amount of data on so-called super-forecasting human beings who can actually make incredibly accurate assessments as to what will happen in the future, with a high degree of accuracy. I think that at times it seems like we, as human beings, are hoping that AI is going to be some sort of Superman that comes in and saves us, when we actually have the tools to solve a lot of these things using mental models and cooperation, without any sort of AI.




Jared: [laughs] For me, I think that if we want to stop mass migration from places where there is some agitating factor making people say "I have got to leave here, it's not safe for me", or "I can't get enough food for my family", or "there's no education", or "there are no job opportunities", the first thing, in the case of the United States, is that we need to stop actively hurting them. We have a lot of policies here that are actually diminishing the chances of success in these countries. So, first, stop hurting them; and secondly, see what we can do to help them that is not selfish, US-centric behaviour, and see what we can do to improve these countries and improve the chances there. That is the only way to truly address the base concerns that are causing people to engage in mass migration.

Jared: I mean, you know, I am not an expert on global migration. The migration that I see is really from Central America and the Central American triangle, El Salvador, Honduras and Mexico, to Maryland and to the rest of the United States, so I can speak to that region and the base factors there. I do see a lot of economic immigration from South America, but in Central America it's not only economic, it's also public safety. In those countries, criminality is so rampant that a lot of my clients with successful businesses, restaurants, shops and dairies, were literally having to pay bribe money daily; they were being subjected to threats, and their family members were being kidnapped. They couldn't bear to think about going on in such a climate, so they were just looking for a more stable place to be. So it was partly economic, but it also has to do with politics and public safety.

Jared: That there isn't any. I mean, no one has ever told me that I wasn't able to do anything at all, and I think that, if I had more funding, I could continue to operate basically without any interference from any of the bar associations. Sorry, I think… you know, when you think about legal AI, there are sort of two ways to think about it: one is lawyers who are using AI for legal ends, and the other side is what the law should require vis-à-vis AI across the board, for companies like



Amazon that are using AI in widespread fashion. In both cases, at least in the United States, the legislators don't even understand what's going on. When you see them interviewing the top executives from American tech companies, their questions display a naivete that you would see in a high-school kid; maybe even worse, because high-school kids understand this a little better. So, what I've seen so far is that there is no regulation, it's not being thought about in a good way, and people really are scared of it, so a lot of times they're like, "I hope it goes away." I would love to see thoughtful conversations about AI and how we should regulate it, both lawyers using it and people using it, but I don't see any of that. Is there any of that happening in India?

Jared: Yeah! Absolutely! I think it’s so helpful, right?

Jared: What I really want it to be is a platform that helps lawyers build tools they understand, so that they can connect with their customers, expand the reach of their ability to help, and drop the price. That's really what I think AI is so good at: it's so cheap. So my base goal is to really help lawyers understand AI, so that they can build the tools to help way, way, way more people. The law does have the power to be a force for good in the world, and I think AI is going to be a big part of that story. So my particular niche, my vertical, is helping lawyers understand AI and how to use it, but I think people should be using it all around the world. Because if we don't teach people this, what's going to end up happening is that the people who do understand it, large corporations, are going to use it, similar to the law, to just expand their power, and people are going to feel like they're being driven by robot overlords. You're already beginning to see that in some professions, but it doesn't have to be that way. It could also be an incredible power to liberate humans from the drudgery of repetitive tasks and from the fear of insecurity around food and economics, and we could literally all have AIs that we are creating to do new amazing things in the world, if we are able to get people to understand it. And, you know, I don't think it's that complex. People are nervous about it; when they hear AI, when they hear machine learning, they freak



out. But it's actually… you know, I have sometimes taught people to build a dad-joke chatbot in like 10 minutes. It's much easier to design AI in a no-code environment than it was to design a compiler in the 1970s.

Jared: You got a lot of big questions! [laughs] I think there are a lot of reasons that people are uninterested in education, but at its core, either they are being taught to fear it, or it may be a bridge too far for them in their minds. The first I've seen a lot in the United States: education has begun to take on this role as a liberal institution, and so in a lot of US communities where people see themselves as conservative, they have this fear that education is going to disrupt their way of thinking, disrupt their children, and so they are actively campaigning against education. They see it as an existential threat. We need to continue to try to figure out how we can help them make that leap, since for you and me, we understand how "AI is going to be good for us"; it's simple, we can see that. But for someone in dire economic circumstances, a better roof or a little capital invested in their small business is what they see as the next step. They're still trying to get up to the second floor of the building, and you and I are trying to talk to them about the 50th floor. Maybe it's a little too fast.

Jared: Well, it’s on the negative side. There’s a lot of traditional line of businesses in the US and around the worlds that really has to do with filling out forms and those things have traditionally been highly profitable for attorneys because we can assign paralegals and other staff to do them. Those staff can become incredibly fast at doing them and yet we can still charge high rates for them and so, that has traditionally been a profit center for us. We’re seeing disrupters from inside the law and outside the law [who] begin to offer those services at incredibly low prices but consumers are going to begin to understand that they don’t need to pay lawyers for those things or at least they don’t need to pay lawyers THE INDIAN LEARNING/ JANUARY 2021

48


a large amount of money for those things and you are going to see those traditional profit centers go away. On the other hand, there is a giant market in the United States [and] over 80% immigrants go unrepresented in their cases, both in court and in forms and we’re going to have an opportunity to deliver services to that giant group of people and that can be an incredible profit center for us. For lawyers that refuse to understand artificial intelligence or more advanced litigation and complex cases buy but for the people that learn to use these tools or go into the higher levels of things that machines are going to have a very hard time doing for a very long time, they’re going to be fine.

Jared: You know, sometimes people think about the problem of encryption, and they talk about how, in encryption, defense is greatly outstripping offence; so, for instance, bitcoin has never been hacked. But what we've seen in social networks and disinformation is that offence is moving way faster than defense. False ideas and ideologies spread more quickly and are more attractive in the existing networks than their true counterparts, and they're harder to stamp out; they're like some sort of weed that is incredibly vigorous and has no predators. Those are the sources of racism and terrorism, these false ideologies, and they existed far before technology, but technology has let them spread quickly. You see people feeling extreme isolation and sadness, and they turn to these ideologies hopefully as a balm for that; they end up finding community in these groups, and so far what we have seen from AI is that it has made those things worse.

Jared: So, clients have demonstrated willingness to divulge far more information than I ask them for in my [chat]bots, far more rapidly than I ever expected. That's a surprise, right? They're just like, okay hey bot, what's up? Here's my life story [laughs]. On them giving false information to a bot, I think maybe you're framing the question in a way that's not fair to them: when someone sees an expert system, they see a game, they see a system, and they naturally begin to play with it. Well, what if I'm married to a US citizen and I have a US citizen kid, what will that do? What if I'm married to a legal permanent resident? They're running scenarios through the system on the bot, and I love that. I love that in consults; I love clients that want to ask me all their different scenarios. So, I think if we frame the conversation within the expert system, within the chatbot, in the proper manner, it actually allows people to begin to interact with the law and play with it in a way that is good for them. It makes the law more accessible to them, and once they reach the moment that they're ready to talk to an attorney, the bot has already done an incredible amount of work in helping them understand who the attorney they need is and what the service they want is. The way we design our bots, the most common source of legal information within our conversations is based on our own proprietary convolutional neural net that shows users, shows potential clients, videos of the attorney whose bot... so let me actually back up, because we haven't talked about the way the Yo Tango bot works: I provide a white-label chatbot to law firms that they can customize and fill with their content. My bots work for law firms and disappear into the law firm as an employee of the firm. My brand is not a part of the experience; it's all about amplifying that attorney's brand. So when people come in on Facebook Messenger, when they come in on Whatsapp, when they come in on SMS, they chat with the bot that's based in that law firm. What the bot is trying to do is deliver them video and text answers from that law firm answering their question, and what that does is create a bond between the potential client and the law firm. They see that attorney and that legal staff in these videos in the chat, answering their questions, being human beings, displaying emotional intelligence and legal intelligence, and by the time they finally decide to hire your law firm and sit in front of you, they've already got an incredible bond with you, they already trust you, and they've already determined that they're the right client for you and vice versa. One of the things about lawyers that have a lot of volume in their firm is that they actually get a lot of cases that they don't want, and that's a waste of their time. So, we're helping make that connection deeper and more rapid, but we're also disqualifying people that are not appropriate.

Jared: Precisely. I have some questions for you, if you don't mind?

Jared: Have you seen any legal chatbots in India or abroad that really captured your eye and your imagination?



Jared: And what do you think one would have to do to do that?

Jared: You’re testing it [laughs].

Jared: Right. Totally. What no-code expert system building platforms are proliferating that you see?

Jared: Okay. Because we have some frameworks in the United States, but I'm not sure how international they are, and I am also very interested in seeing frameworks from outside the US that help people build expert systems, because to my mind, at least, the expert system itself has nothing to do with the underlying legal framework.

Jared: Perfect.



Jared: Oh yeah, that’s possible. I think that in terms of how we let AI surveil us, both internally and internationally, it needs to be something that human beings think about very deeply. I mean there is tremendous potential for AI to reduce crime around the world. There is also tremendous potential for AI to be a tool in the hands of authoritarian governments that oppresses us and so I think what we’ve seen around the world is that different countries and different cultures have very different ideas about the value of privacy, the value of individualism compared to speed of progress, collectivism, and those sorts of decisions I think will have a lot to do with how that affects asylum and extradition. Now, when you said that to me my mind sort of went to the idea of you could either have this terrible criminal that’s hiding in this country getting away with all these bad things or you could have a person who is fleeing from a oppressive regime that is a criminal in that they spoke out against the crimes of that regime and they are just trying to live their life in another country and who that person is based on your perspective. Not to say that human rights abuses aren’t something that we can fairly easily agree on as human beings but I think extradition and who deserves asylum can also be very subjective things and it would be …I mean there is potential risk on both sides. Now from a real politic viewpoint, I don’t see anything close to gaining spirit of cooperation on the global stage. Country after country are electing these strongmen that seem to be pulling back from global agreement so I don’t think that realistically we’re going to see anything like that, at least for the next decade. Mridutpal: Alright. So, this has been great. So, here we are, thus comes the end . This session has been awesome, insightful and fun. Thank you, Mr. Jaskot for your presence and thank you Abhivardhan sir for letting me have this podcast. Stay safe, stay tuned. Godspeed. Jared: Bye.



Utpal: Happy new year to you too.



Utpal: Fraud detection is one of the classic use cases of machine learning, which is again a subfield of artificial intelligence. Traditionally, fraud detection was mainly done by rule-based engines, but these rule-based engines were not effective. Looking at the current scenario, fraudsters are also evolving; they are maturing and becoming smarter. So now we require a system which is adaptive, which can learn from its experience and then take the appropriate measures. Artificial intelligence and machine learning are exactly that: something that can learn from your data and your patterns, and then take action. So nowadays most of these systems are AI-based, not only in banks abroad but also in many Indian banks. And it is not only banks; fraudulent transactions can happen anywhere: insurance, credit cards, anything. So yes, fraud detection is one of the classic usages of AI and ML, where the ML algorithms are mature enough to detect fraudulent activity among millions of transactions; the system can automatically detect and highlight those kinds of transactions.
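One common concrete form of what Utpal describes, a system that learns "normal" from data instead of relying on hand-written rules, is unsupervised anomaly detection. Here is a minimal sketch using scikit-learn's IsolationForest on synthetic transactions; the features and figures are invented for illustration, not taken from any bank:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per transaction: [amount, hour of day]. Mostly normal behaviour...
normal = np.column_stack([rng.normal(60, 20, 1000), rng.normal(14, 3, 1000)])
# ...plus two injected anomalies: huge amounts at odd hours.
fraud = np.array([[950.0, 3.0], [1200.0, 4.0]])
transactions = np.vstack([normal, fraud])

# The forest learns what typical transactions look like; no rules written.
model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 = anomalous (includes the frauds)

print("flagged:", transactions[flags == -1])

Unlike a fixed threshold rule, retraining on fresh data lets the detector adapt as fraudsters change their behaviour, which is the adaptivity the answer emphasises.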

Utpal: So there are two reasons why the banking and finance industry cannot essentially avoid AI. Number one, they are in a data-intensive business; whoever is in a data-intensive business has no other choice and has to adopt AI and ML today or tomorrow (I will come back to this). Number two, with AI and ML they can make very refined, personalised recommendations to their users, which is not possible through any traditional mechanism or traditional technology. Today every customer wants the recommendations they get, in terms of products, to be very personalised; they don't want a generic recommendation that is the same for everyone. This can be done only through this kind of technology and no other. Coming back to number one: in a data-intensive business, as you know, data is like a gold mine today, because many of your business insights are essentially hiding in your data; you churn that data and gain very meaningful insights which help you in building your strategy, building your roadmap, in decision-making, and so on. That's the reason the adoption of AI in the banking and finance sector is quite good, and as per my assessment it is going to increase further in the future. You are absolutely right: traditionally, or historically, banking and finance are among those sectors that have adopted new technologies very early compared to any other sector or industry.

Utpal: I think, see, any automation will affect employment a little bit, so there will be a little bit of job loss if you bring in any kind of automation. AI and ML are high-end automation, so definitely in the short term there could be a little impact on jobs, but in the long term AI and ML are actually producing more and more jobs; this is already visible in every industry. I have not seen in any sector a huge job loss because of AI. There will be one or two people doing some repetitive activity, some mundane work, who will be replaced by an automated system. But in turn, these cognitive technologies or cognitive elements are going into each and every enterprise application, each and every mobile application, each and every other solution that is being built; and to build these solutions they use a lot of data scientists, AI engineers and so on. So if you look at the industry and compare the job losses caused by automation with AI against the jobs actually generated by these technologies and their implementation across different enterprises, the jobs generated have increased multiple times. So the answer is no: probably in the very short term, or in a particular area, you will see job loss, but on the other hand there is a huge amount of employment generation that this technology is already bringing.

Utpal: Yeah, probably somewhere there is a little bit of downsizing, but now every organization has its own data science team and there is a huge amount of work happening in the data area: data engineers are working, building distributed data systems, building high-end analytics. In the ML area a lot of work is happening, and for that you need many seats. So probably by implementing RPA or AI somewhere, maybe two or three persons have been replaced because they were doing very mundane, repetitive work; but in turn you will see some twenty people employed because of it.



Utpal: So first of all, I am not an expert on cryptocurrency, so I don't want to comment on something I am really not an expert on. But at a very high level I can say that, cryptocurrency aside, the underlying technology itself is very good, and I think it will have usage in many sectors. For some reason the adoption of blockchain is not that great, but there were some PoC pilots done in some sectors in India as well as abroad, and a couple of them were quite successful. Every technology is a brilliant technology and can be used in many ways, so I don't want to go into the currency part of it, because there are a lot of nitty-gritties and I am not the right person to comment on that. In fact, if you look, these technologies are contributing to each other, so probably with the AI and blockchain combination we will be able to build brilliant applications. Similarly, tomorrow quantum computers will come; they are already there, right? In healthcare they are already doing quantum computing. So these technologies are actually contributing to each other. They are not rivals; there is no rivalry among them. Rather, they are working together to bring some brilliant solutions, and that's what we are expecting.

Utpal: Again, I am sorry, because I am not an expert in this particular area, so I can only comment as a common man. Cryptocurrency: I think I will pass on this question, because I won't be able to do it justice.



Utpal: Not only banking but every sector, every business, tried during this lockdown and this pandemic to turn itself into a digital one. So probably the one small good thing we can see in this pandemic is that the digital world has gotten a great boost, and people have understood that many things we used to do physically can now be done virtually using digital technology. For example, banks are rendering more and more services digitally: there is net banking, mobile banking, there are chatbots, and essentially, if you look at it today, anything and everything is dependent on digital channels; we don't have to go to the bank. I don't see any reason to visit a branch unless there is a compulsion. Going forward, even senior citizens and retired people, who used to have to go to the bank in the month of November just to show and sign their life certificate, will now do it over video and so on. In that way, I think not only banking but all other industries are trying to render as many services as possible through their digital channels, and COVID-19 and the lockdown have proved that it is possible. There were some misconceptions that certain services could not be digital, or that there would be problems and challenges, security or otherwise, but now I think it has been completely proved that it is possible to render each and every service through the digital channel.

Utpal: Okay, so I think there are two questions here. Number one, the adoption of chatbots: to answer that, chatbots are already hot, and they are going to be hotter in the coming years. Why? Because all the technology giants, whether it is Google, Microsoft or AWS, have a huge focus on these conversational technologies, or conversational AI. In fact, they are investing a lot in it because they know this is the future. You have probably seen Sundar Pichai every year giving a demo of the chatbot booking a restaurant table; that shows this is one of the areas they are focusing on, and they have already done the market research. Now coming to availing any kind of service through a chatbot: whether it is secure or not, whether transactions will happen or not.



As per my experience, it is as secure as any other channel, and as convenient as physically visiting a bank. I don't see even a single disadvantage of using a chatbot, whether you do financial transactions or ask any kind of query. Nowadays chatbots are getting mature; while chatting, it can take half an hour to figure out whether you are talking to a chatbot or a person, and people are executing actual transactions on chatbots. They are no longer just FAQs or Q&As; transactions happen too: you can send money, you can book your fixed deposit, you can book your recurring deposit, you can take a loan against your fixed deposit. You can do anything and everything there. So for me, yes, the chatbot area is going to go a long way, and I think adoption is going to increase manifold in the coming years.

Utpal: So, my journey into AI was not chosen by me. In fact, the journey started around 8 or 9 years back, when I was thrown into a project. At that time I was not very aware of this technology, although I knew the terms: machine learning, deep learning, etc. So we started to deliver the project. Normally what happens in the IT field is that projects have strict deadlines, and whatever you have to do, you have to do it, because you are working for a client and you have to deliver the product. So we consulted a lot of academia, like the IITs, because at that point in time some scholars at IIT were doing research on natural language processing, and we took their help. We took the help of outside mentors, we took a lot of classroom sessions in India and abroad at that point of time, not in my current organization but two organizations before, and then we actually delivered the product. During that journey, I found it very interesting. We started off with NLP, then a little bit of machine learning, and I liked it. Since then I developed an interest in the subject, so I went into deep learning as well, and then I found that I could probably write a paper on it, and that's how I started on deep learning. A couple of papers were appreciated; one of them, on layered approximation for deep neural networks, gives some kind of overview of the fundamental limitations of deep neural networks and how we can overcome them to a small extent using some best practices while designing the networks. So that's how my journey started, and I am still studying a lot. I read a lot of different research papers from different researchers at different universities and keep myself updated with the technology, because I love this technology.

Utpal: The ideal framework we don't know; it is something that is going to evolve. Take GDPR, which the European countries have implemented: now they are looking back at it and trying to rectify some of the things. In a country like India we cannot have overly strict rules (again, this is my personal view), and neither can we relax them to the extent that the framework is vulnerable to any kind of misuse. So I think a lighter version of GDPR is already in place, and the government is working on it. I don't think this framework should be written in stone, a rigid framework that people simply have to follow; no, it should evolve over time as per the needs of different sectors. If you are very strict and don't give any information to the AI, then how can you expect it to give you smarter advice or be personalised? You have to give some information; it is like your personal human assistant. If you don't give him or her your preferences, how do you expect that person to work smarter for you? It is not possible. That trade-off needs to be decided, and over time I am sure it will be. We are in a good time, because in India we are probably late in implementing anything, but when we do, we implement it very vigorously and maturely; that is the best part of India.

Utpal: No, I don’t think so because there are lots of cases of checks and balances already there. AI doesn’t require your sensitive data right, it only requires some data to record your behaviour because if somehow AI knows my behaviours, my likening, my information then it will be smarter enough to work better for me. That is the idea, it doesn't require your critical or sensitive information, what is it going to do with that? So, AI doesn't require this kind of information and whoever is developing, they are also doing lots of checks and balances so this information does not go on the algorithm. Number one, there is no need for such information so these are some misunderstanding that is going on, that it can take THE INDIAN LEARNING/ JANUARY 2021

59


your account number, and all those things. The idea of giving some smarter recommendation is through knowing your data points and giving you something. Already those who are developing are putting lot of checks and balances so I think the possibility is not only negligible, it is zero.

Utpal: The concept of quantum computing comes from quantum dynamics, you can say quantum physics. We are trying to unleash the potential of sub-atomic particles, with which we can transform the entire field of computing. Take the quantum supremacy that Google has achieved: a very complex mathematical calculation that the quantum computer did in 3 minutes, which a normal supercomputer would have taken 10,000 years to do. So that is the kind of speed it has. Number one is speed, but that is not the only thing; number two is efficiency. Quantum computing is already being used along with AI in some fields, but the adoption or implementation is not that great, because quantum computers still have some limitations. There are outside interferences which affect the accuracy of quantum computers: because these are sub-atomic particles, although we take them down to extremely low temperatures, there are still outside sub-atomic particles which can influence them, and because of that the accuracy of quantum computers is not that great. Still, some fields, like medicine, have started implementing quantum computing. So quantum and AI is going to be a killer combination, not in a bad sense but in a good sense; it is going to be a very good combination. Why? Because AI actually requires speed: if you want to deliver something in real time using AI, the only place where we struggle is speed. Today even supercomputers are not equipped to provide that kind of speed to process humongous amounts of data and give results. Using quantum computers, that problem will be solved like anything. There are also other unsolved problems which can be solved very easily using quantum computers, so they are the future of technology, coming in 3-4 years. Companies like IBM, Google and many others are progressing a lot in this field. It was projected that it would take 10 years, but now within 3-4 years we will see it, because DUN has already offered some of their quantum services; so in 3-4 years we will see this combination working in many fields.



Utpal: AGI is a little bit different. As per my views (this is an individual view; people can challenge it), to achieve Artificial General Intelligence we still require some fundamental breakthroughs in the field; otherwise you will not be able to achieve it with the current state of artificial intelligence. Quantum can bring speed and efficiency, but general intelligence is not only about speed; there are many other things to come. The technological roadblocks we have in that direction have to be removed through some breakthrough, and only then can we achieve general intelligence. Just combining AI and quantum computing won't bring general intelligence; I don't think so. In the field of artificial intelligence itself there are a few roadblocks which need to be removed through some breakthrough.



Bogdan Grigorescu: Yeah, hello everyone, and thank you for inviting me. I apologize for the earlier glitch; something's going on in my area, but what can you do on the internet. Straight into the answer: the short answer is that artificial general intelligence is an artificial intelligence where machines think and understand; that's basically what it is. It's a sci-fi concept, and right now the scientific community is divided on whether it's even attainable, but everybody concurs that even if it is attainable, it is nowhere near attainable right now. So it's not gonna happen in the near future, maybe not even in the next few decades, or even a century. That's one aspect. The other aspect is: should we even have it? There would be a lot of positives, of course, and the list of positives could be endless, but there would also be negatives, and right now we don't even have regulation and standards for the systems we have. We are in so-called narrow AI: machines learn, but they absolutely do not understand and they absolutely do not think, and still we are in an AI wild west. We've seen human rights being trumped by IP rights because there are no laws around it: in court, people convicted of felonies could not effectively challenge a judicial decision that was based, among other factors, on outputs from machine learning systems that could not be explained, because IP rights were involved, commercial secrets and so on. Which is a valid concern, of course, but should that override human rights? We've seen people being fired: just now we saw a massive lawsuit with Uber drivers being fired and given no explanation, or no meaningful explanation, where the firing decision was based solely on output from a machine learning system, an AI system of sorts. And we've seen many other huge problems, as well as positives, of course; we've seen a lot of positives. If anything, technology is neutral, and what you do with it matters. My take is that we should not even think much about AGI until we have a legal framework and agreed standards in place for AI systems, and one of the critical elements we should have is a kill switch or a stop switch, pretty much like on a production line, where if something goes really, really bad, the whole activity can be stopped fast and safely without loss of data. In my opinion that should be law, as it is for production lines: you cannot put a production line in place without a kill switch, that's not happening, and it should be the same for AI systems, in my opinion. Purely because change in an environment is unpredictable, fast and frequent, you can't be totally safe, and with these systems the reach is truly global. A system can and does affect people positively as well as negatively in many parts of the world; even though it has been designed and deployed in the West, it may affect millions 10,000 miles away in ways that nobody could foresee, so it's very important to be able to stop it safely and fast. But right now that doesn't happen. So if we don't have the basics in law and standards for what we generally call narrow AI, should we even think seriously about AGI?
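The production-line analogy maps naturally onto software. What follows is a minimal sketch of the kill-switch pattern in Python, under the assumption of a long-running pipeline; the worker and its state are stand-ins, not any real AI system:

import threading, time, json

# One honoured stop signal that halts work quickly AND flushes state,
# rather than pulling the plug and losing data.
kill_switch = threading.Event()
state = {"processed": 0}

def pipeline():
    while not kill_switch.is_set():
        state["processed"] += 1          # stand-in for one unit of model work
        time.sleep(0.01)
    with open("checkpoint.json", "w") as f:
        json.dump(state, f)              # flush state so nothing is lost

worker = threading.Thread(target=pipeline)
worker.start()
time.sleep(0.2)
kill_switch.set()                        # the "stop switch": fast and safe
worker.join(timeout=1.0)
print("stopped cleanly after", state["processed"], "units")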

Bogdan: It is life-impacting; it's impacting people's lives. You know, not erase it and destroy it and nuke it; just stop it, or disable it if you want, like an on-off button, you know, like a TV: you stop it at the power supply, you unplug it and that's it, it's safe. Not blow it up or bury it somewhere; I'm not talking about that.

Bogdan: It's probably not gonna be like that. We're not gonna have robots that look like people, that you can't even differentiate, doing whatever they think they have to do with no control whatsoever. It's gonna be much, much less explicit. Even narrow AI is much less explicit: what we see is a lot less than what actually happens. You go on your computer, you're connected to the internet and you stream something; that's an AI system at the back end doing stuff. For example, it's improving the bit rate, so that even if you lose some packets you keep a steady connection. With video and voice, which are very sensitive, you don't have to lose many packets to have a huge problem; machine learning models can improve that, keeping the connection steady with decent quality instead of dropping it, but nobody sees that. That's just one example of many. Search engines, anything to do with search, whether via voice or text, it doesn't matter: that's an AI system at the back that gives you the relevant results very fast and reliably. But it's also collecting a lot of data about you, and you don't even know about it very well. So with AGI it would be, you know, similar. Most of the activities won't be apparent at all, so it will penetrate life before you even know it, as we have already seen with the AI systems we have in place, and that's been going on not for a few months but for a few years. It's a relay race: it doesn't happen all of a sudden, it happens gradually and continuously, and it is not apparent at all. I mean, it's not secret or hidden by design; it's not done like that. But by its nature it is usually not apparent. Sure, you have voice assistants at home and things like that, but that's a very, very small part of what's happening. Most of the time you're not aware that it is there and happening. It is pretty much like an iceberg: you only see five or ten percent of it, but ninety percent is under the surface of the water, and then you hit it and you sink. It's not necessarily about sinking the human race, though: if you don't use AI as a tool, you're going to end up as a tool of AI, and that means sinking; but you have every chance to use AI as a tool, and then you're not gonna sink. It will be very beneficial.
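The streaming example can be made concrete. Real players, and the learned models behind them, are far more sophisticated, but a toy throughput estimator choosing the highest sustainable bitrate shows the general shape; the bitrate ladder and smoothing constant below are illustrative assumptions:

# Toy adaptive-bitrate selection: smooth recent throughput measurements,
# then pick the highest rendition the connection can sustain.
BITRATES_KBPS = [300, 750, 1500, 3000, 6000]   # assumed available renditions

def estimate_throughput(samples_kbps, alpha=0.3):
    """Exponentially weighted moving average over throughput samples."""
    est = samples_kbps[0]
    for s in samples_kbps[1:]:
        est = alpha * s + (1 - alpha) * est
    return est

def pick_bitrate(samples_kbps, safety=0.8):
    """Choose the best bitrate under a safety margin of estimated throughput."""
    est = estimate_throughput(samples_kbps)
    usable = [b for b in BITRATES_KBPS if b <= est * safety]
    return usable[-1] if usable else BITRATES_KBPS[0]

print(pick_bitrate([4000, 3500, 2800, 2600]))  # degrades before stalling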

Bogdan: Well, AGI is supposed to have that; it's supposed to behave like a human, and in theory that's not going to be sufficient to discern between an AGI system that is very much human-like, a robot but very much human-like, and a person. So it's gonna be a different type of Turing test, maybe a whole lot different; I don't know, nobody knows. But certainly it will have to have the equivalent of emotions, maybe not emotions in the human sense, but something that acts with the same results. Again, it has to have an understanding of the world for that, it has to have thinking, and it has to be truly autonomous. What we have right now are data-driven autonomous systems; they are not truly autonomous. They can't and don't really make decisions as such without being assisted by people, so they're not truly autonomous, but they do have a degree of autonomy right now. For example, intelligent weapon systems can take limited decisions in changing course, and that is not decided by predefined rules; it is decided by a number of data inputs from the environment that change fast, by the second, and the model can take a sort of decision to change course in some way or another. But that's not truly autonomous, is it? It's still guided in a way. AGI is not that; AGI means devices that are truly autonomous, that don't need any human assistance at all.

Bogdan: Well, you know, anything can happen. The Mayan prophecies can happen; the end of the world can happen as we speak, during this live broadcast. It can happen, and what's the probability of it? You can't even calculate that, and should it really matter? If it's one in a trillion, what matters is that once is enough: that one in a trillion is not going to repeat, it's once and that's it, everything is gone. So that's not the point. The point is: should we even have it? Should we even think of it seriously, apart from entertaining literature or sci-fi movies, where it has a role? Of course people are curious, and curiosity is good, I think, but should we really discuss it in a serious way given that we have no regulation for AI systems right now? We have no agreed standards yet. We don't have ethics embedded in them, and we know they are life-impacting; I gave a few examples, and there are many, many more. It all boils down to what society we want to have, before even launching into malevolent versus benevolent. Technology is not good or bad; technology is technology, it's a thing, and it's as good as its users. If the users are unethical, then it's used in a bad way. If the users are ethical, then it's used in a much better way, a good way, and so on. So this is not really about technology as such; it's about people: what do we want to be, how do we want to live? Do we want to be in a WALL-E type of society? If you remember WALL-E, the animation hit, people just wrecked their planet, Earth. It was totally wrecked, polluted; life could not exist on the planet anymore, and they built a huge space station many thousands of miles from Earth. They were living there in an environment which was fully automated, with everything done for them; they were not even working anymore, they barely ate with their own hands. Everything was done for them, by AGI in fact. Everything, literally everything.



Do we want something like that? Probably not. So we should really take a hard look at ourselves, as individuals as well as organizations in society, and come up with an answer (what kind of society do we want in the future?) before we even talk in detail about these kinds of concepts. We should read about them and we should think about them, of course; that's a good thing. But let's put it into context: let's have some regulation first that makes sense and is not bureaucratic, heavy-handed and stifling innovation. We don't want that, but we do want to have trust in these systems first. And to gain trust means you have to have, first, a kill switch, so people know the system can be stopped reliably, very fast, if the need arises. People need to know that their data is not gonna be stolen and used by who knows who in unforeseen ways. People need to know that these systems are reliable, because if they fail at a critical point, then what? Reliable in their functionality and their operations, and so on and so forth, before we seriously discuss AGI in detail.

Bogdan: Well, it can happen, and my question is: why wouldn't it happen? Can anyone see a valid, strong reason that it is not a possibility, actually a very decent, very likely possibility? Why wouldn't the human race be dehumanized? We have, you know, stopped even talking directly to each other much; it's all through devices, all the time, through intermediaries. Even right now we're not talking with each other directly: I'm talking to my computer, that's a fact; you're talking to your computer, that's a fact; and the people who watch us are watching their computers or their tablets or their mobile phones, that's a fact. We should be aware of these intermediaries, and with AGI it's important to understand that these intermediaries will become agents; they will have agency. They're not going to be stupid agents like today's, with some remote back end that is not really thinking at all; it does learn, but it doesn't think, it doesn't understand. AGI as a concept means those back-end systems will understand and will think, if that's possible at all (I have doubts it is possible), but assuming it is possible, they will be fully autonomous, and therefore they will be your partners. They're not necessarily gonna be your tools; they may be your tools, and I'm not saying they're gonna take control of your life all of a sudden, but again, it's sleepwalking into a WALL-E scenario, that's what it is, and that's very dangerous. And this is not just about logic or rationality: emotion plays, for better or worse, the bigger role in people's lives. The most important decisions in a person's life are made mostly based on emotion, not reason or logic. Buying a house, one of the biggest decisions in your life, is mostly emotional: can I see myself inside it? It doesn't matter if I have to repaint the whole lot, it doesn't matter if I have to knock down two walls and things like that; if I can see myself in it, most likely I'm gonna buy it, presuming I can afford it somehow. It can be a super penthouse, but if I don't see myself in it, I'm not gonna buy it; well, unless it's that cheap, of course, but then you buy it not to live in but as an investment, and you're gonna sell it on. I'm not talking about those; I'm talking about houses to live in. Getting married: is that rational? Many times it is rational and doesn't end well; that doesn't mean that if it's emotional it's going to end well. Nothing is for sure in the world; I'm just saying. With AGI, these things that we use right now as things, because they are stupid, are not going to be stupid anymore; they're going to be partners, and the whole way of doing things will have to change dramatically. But the world is very diverse: what goes in a fully industrialized society doesn't really go in a less industrialized society, and vice versa. Experiences are very important too; critical, I would say. That's why diversity in any AI system's environment is a must-have: not just in the data but in the people, the people that do things with it, operate it, develop it, and so on, the engineering teams, the stakeholders and everyone else; it's better to be as diverse as possible. With AGI, if that diversity is not good, it's anyone's guess how bad things can get. We have already seen that bias is being automated in some instances. Bias is here to stay; by definition people have biases, and bias is not inherently good or bad: there's positive bias and there's negative bias. But in an AI system, where things by definition get automated, the more negative bias you have, the more the negative effects get compounded. It's like an avalanche: the bias gets automated too, and that's very dangerous. With AGIs, you can see it's much easier for this to get out of control, because they're fully autonomous; they may develop their own biases, if you think about it logically, precisely because they're fully autonomous. So what if that happens? Because it's a distinct possibility. What are we gonna end up with?



Bogdan: Well, they predict them well, but what do you do with that? You say, good, let's have rain here because there's a drought; but that rain has to come from somewhere, and guess where it's gonna come from? It's gonna come from the monsoons in India, and what you do is move the problem from one side of the world to the other. Is that good? It's certainly not ethical, but is it good? Maybe India can afford to give up half of their monsoons; you know, I'm not judging, I definitely have no idea about that, I'm just making an assumption here. If India is happy to give up half of the monsoons because they have too much, then it's beneficial for everyone, of course it is. But if not, because otherwise, frankly, people would just die of starvation, hundreds of millions of people potentially, within a year or two, what then? Some people will survive because they somehow resolved the drought problem that was killing them off, but then somebody else dies somewhere else because of it. So we should be careful, very careful, about this playing god, or, if you're not religious, playing nature, playing the universe. There are laws out there that are not made by people, and you can't break them even if you want to; you have to abide by them, you must abide by them, or, let's put it frankly, simply disappear. It's as simple as that. I would say, again, the first question is: what is the problem to solve? Solving climate change? You're not going to solve climate change; climate change has been happening continuously for billions of years. What you can change is polluting and poisoning the environment. Yes, of course, that can be stopped; well, maybe not totally, but a great deal. You can ask yourself: do I really need all this stuff? If I buy all this stuff, most of it probably doesn't make my life better at all, not necessarily worse but certainly no better; but what does it mean for other people, for society? Is it all no impact? We've seen hundreds, thousands, of square kilometres of plastic floating in oceans: fish eating it and dying, sea creatures eating it and dying, or if not eating it then inhaling it and dying, and it's also transmitted into the food chain, because of that plastic being dumped by the millions of tons every year into oceans and rivers. That can be reduced a lot. Can AI systems help with that? Of course they can, but, and this is my message, it's all about the people behind those AI systems. What is the problem they are trying to solve? Did they define it well? Because most of the time the problem to solve is not well defined, or it's not even defined who your stakeholders are. In every problem definition you will have the apparent stakeholders, the obvious ones; there are many other stakeholders that are not obvious, yet are very important. Identify them. What if your stakeholders are racist? Is that a problem? It's a huge problem, but if you don't identify your stakeholders, you're not going to know about it, and then you didn't define the problem well. So what you're going to solve is not going to be the problem you think you're solving; in fact, you're going to create a problem. You're not gonna change people, let's face it; people have to change by themselves. But if you have racist stakeholders, you had better be well aware of that, do something about it, and design the systems and the processes around them accordingly. Coming back to AGI, just think for a moment: building an AGI system in a mostly racist environment, I wouldn't say 100% racist but with a lot of racism and discrimination in it, what's that gonna end up as? Something that is very bad by design, not because it was thought to be bad or people wanted it to be bad, but because racism and discrimination were built into the system unwillingly; nevertheless, it is very real. That's where we have to start: with people asking the right questions. And even if something looks formidably good, we should always try to break it, to see if it's really that good. Anything we build, a concept, a system, a piece of hardware, a process, a way of working or a behaviour, we should try to break, not for the sake of it, not for destroying stuff, but to prove that it really is that good and to uncover hidden problems and faults before they actually go live and wreck the lives of millions.

Bogdan: Yes, for sure. I'm not in the legal domain, but I'm pretty sure lawyers and experienced people in the legal domain do ask these kinds of questions: should we actually legislate it this way just because it's easier? Should we really take the path of least resistance? Sometimes it's the right thing to do, of course, but sometimes it's not. Just because it's easy doesn't mean it's better; there is no equals sign between the two. And everybody should do that, not just lawyers, not just prosecutors and judges. Everybody should ask themselves, pretty much all the time if possible, like having it in the back of their head: am I doing the right thing? What does it mean for others, for people around you, for people who interact with you, or, in a broader sense, if you're working on a grand-scale design, for society as a whole? Because in the end, with this intelligent automation, the stakeholder is the whole world. Not everybody in the same way, of course; stakeholders are not equal, but the whole world is the stakeholder in the end. Some people will benefit a lot in some parts of the world, and some other people will just suffer terribly because of it, and you can't avoid that if you don't think about it. What if we do it? What if we don't do it? What if we don't do it this way? What if we do it this way? All these questions may sound like a game of words, but it's not a game; they are very serious, important questions, and they have to be asked from the start, and asked continuously as you go through each stage of software development, hardware development, or whatever you're doing.

Bogdan: This precision is not all good, is it? You can be precise, but in the wrong way. You can cut in the wrong place, very precisely though, when maybe you shouldn't cut in that place, or maybe not cut at all. Again, it comes back to what that precision means for those people. If I go to a doctor and they say you have to have this operation, remove this and that, shouldn't I be given the chance to ask why, and to understand the why? Because at the end of the day, you're on the line: your body is on the line and your life is on the line, not the doctor's. You're the one who's gonna have that organ removed and replaced. Maybe there are other options and alternatives. What does it actually mean for me? Would we be able to do that with an AGI system? Maybe yes; they will have the equivalent of emotions, they will have understanding and so on and so forth, but they're still not human. What if something goes wrong at the back end? Would anyone even know about it in time, given that AGI systems are supposed to repair themselves, or be capable of repairing themselves? Again, it comes back to what kind of society we want, how we want to live our lives. Do we want to be totally dependent on machines, or hugely dependent on machines, in the tiniest aspects of our lives? Maybe being independent is a good thing, even though it comes with a lot of sweat and maybe a little more risk in some areas. Maybe it's better to sweat more; maybe it's better to have one sleepless night once in a while instead of being totally reliant on machines in every single aspect of life. Who knows. Can we have AGI systems that are not totally in control, that are still within our control? Is that even possible? In day-to-day life, practically speaking, you have to remember the context: there are no laws, no standards, no regulation as yet for the narrow AI systems that we have. So in that context, does it even make sense to want AGI?

Bogdan: But they're not gonna mimic them, are they? That's what differentiates it: even now they can't be mimicked, and AGI is supposed not to mimic but to develop and to act autonomously. So what if it develops a conscience? You know, by definition you have emotions, you think, and you understand; you're going to be conscious of your actions, so I would say you develop a consciousness. Even if that's possible, assuming it's possible, would those systems not become more than a tool? Again, partners, not tools. Tools are not partners; tools are things, that's all. But those AGI systems are not gonna be things anymore. Why would we want that? This is not dismissing the idea of AGI systems; it's a question that should be asked at the very beginning: why would we be wanting that? Again, it's about defining the problem to solve; not a problem, because if you go micro there are tons of problems to solve, we know that, but overall, what is the wrapper here, what is the big problem to solve? Would it make society better? Would it make you feel safer? Would it make your life so much better in most aspects, and not just yours personally, but generally, for most people, the vast majority of people? Would they have a better life? Would they? Can anyone answer that? Not "they could", because "could" is a big question. Is that for definite? I'm not saying eliminate poverty, or some other huge problem; let's say cut it by a lot, by half or more than half. Let's say reduce racism a lot in all parts of the world, and things like that. You can't measure it on a scale, saying I have this scale here and I've reduced discrimination by this much; it's not gonna work like that. But it would be very real and apparent. Can anyone say right now that AGI will do that? Or that people will do that with AGI?



Bogdan: They already did it in Russia.

Bogdan: There was a report, and there were many others, but I remember one from the first wave of COVID, in spring 2020. This person went out of his flat just to, you know, throw his garbage in the bin. It was a compound of blocks of flats, and the bin was maybe 30 metres away, within the internal courtyard or something. Within minutes after he did that, the, you know, forces were at his door to arrest him, because he had broken the curfew rules. Well, he hadn't actually broken the curfew rules, but this is how it looked to the ML-driven system.



Bogdan: So, there were cameras planted, and there were cameras at the hotel as well and at the blocks of flats, at the entrances; not at his flat, but at the entries to the block of flats, detecting people coming in and out, and their faces. He was identified within, you know, a minute, maybe less, seconds; and within minutes, we're talking minutes here, maybe not two or three minutes but certainly not half an hour, they were at his door. Then obviously they didn't arrest him; they said, "okay, yeah, sorry, we made a mistake", he didn't break any laws. But it just goes to explain and show what surveillance can mean.



Bogdan: The problem of information asymmetry is a critical problem. The same information is not equally available in all parts of the world, and it's not just about geography; it's also about social status and the personal circumstances of people, and so on and so forth. Within the same country there are swathes of people who just don't have access to that type of information, others that have some access, and others that have a lot of access, and the differences are just staggering. So that, again, is deepening inequality. I'm not saying equality can exist in all aspects; that's just not how life works. But in terms of law, everybody should be equal before the law. Well, how can you be equal before the law if you just don't have access to the relevant information? You can't. Or if you have unreliable access to information: sometimes you have it, then you don't, then you have a little bit, and so on, so it's very unreliable. You can't be equal before the law. Just imagine two sides (you are lawyers, you know better than me): one is packed full with the latest information, and the other one, which may very well deserve to win that trial, isn't. Who's gonna win? Chances are the bad guy is gonna win, because his side is jam-packed with information and can act, and the lawyers that defend that guy can act, while the others are hampered big time. So what does that tell you? It's a critical problem. Information asymmetry is a critical problem. Now imagine there's an AGI world out there. What is that gonna mean with this information asymmetry? Automating discrimination, automating injustice, and so on and so forth; as well as good things, of course, there will also be good things. But overall, is society better off? That's the question to ask: overall, are we having a better society or not?



Bogdan: ...It's like the banned munitions, right, like phosphorus and…

BOGDAN: “...But, but coming back, we have standards, we have laws, we have regulations, we have international law that you can held parties to account, otherwise anything goes. So this kind of barriers, you know, this kind of framework that everybody agrees on, and with inside -- within -- within itself, you know, many things can go. But there are borders drawn up. Those borders don’t exist right now. Because, as I said, it’s an AI wild-west. Anything goes, the hype ring is wrecking havoc, it’s becoming like -- it really is absurd, it really, this hyperlink, it has reached absurd levels. I’ve seen -- I’ve been invited twice to a webinar, this is, this month, about integrating, um… Integrating a platform -- Integrated platform delivering NLU, and what was actually was, was an NLP chatbot functionality -- advanced -- with, uh, CRM uh, customer relationship management, back-end systems and contact center. That was not NLU at all because NLU is not NLP. ‘Natural Language Understanding’ is not like ‘Natural Language Processesing’, not at all. Yet, there you go, that was what was advertised. I’ve seen NLU used interchangeably with NLP dozens and dozens of times this year, though they are very different. Why? Because it’s feeding a narrative. Uh, the hype and, I’ve seen, I mean I don’t even know what they mean. AI mini MBA is a course that you pay for, five days, daily. ai mini mba what the heck does that mean? AI mini mba. I mean how can you have an MBA in a concept. Because AI is a concept it's not a technology. Machine learning you can say it's borderline technology but an MBA in a... in... in -- in five days, in a -- in a concept, I mean would you -- did you ever heard of something like MBA in ethics, MBA in philosophy and things like that? THE INDIAN LEARNING/ JANUARY 2021

79


I mean master of business administration, for God's sake that's what it means okay, I heard and this was advertised, I’ve seen it in two different places, different courses same category ai mini MBA -- that's one. Uh, going to recruitment, I've seen a few times this year advertising for roles like AI engineer. What is an AI engineer? If you look at the role description it's just nonsensical. I mean not nonsensical in itself, but has nothing to do with AI engineer you can engineer a concept, right, I mean you know engineers do things in practical ways with stuff that you can, you know, see do something on the computer, even if it's abstract. It's maybe relating to lines of code hardware wires, radio spectrums, and you see something, so what is an AI engineer they couldn't tell me. Others, they were -- that was like -- they were asking me if I’m interested in this role of -- not an engineer they, were mentioning -- it was like a delivery manager or... anyway and they mentioned 'AI team', you know, manage the AI team what is the AI team. You could say the engineering team for a -- for the AI-powered platform, or AI systems yes that is a valid role but that's a cross-functional team of specialists you know, you have data scientists in the field of math that write algorithm, you have data science in the field of data that understand data very well, statistics, data mining and prepping and things like that, you have uh... data science in the field of visualisation. how you pull out that, you know, understand what the data says and translate for the stakeholders to understand, that's a huge skill and there's a specialization. Yes, as part of the so-called AI team -- if you want to call it the AI team -- but that's like, it's a nonsense term. You will have box engineer to make sure that all those changes can be deployed with no impact to life any time of the day, 365 days a year, in minutes, not in hours, and they can pull that back. we need, you know, couple of minutes with no data loss and so back and so forth. The automation of the deployments and everything that goes around you will have -- even machine learning engineers is brought in line w the gimmick… I wouldn’t call it a gimmick. People that really take care of the model end-to-end and what does that mean, and the security aspect, and the data privacy, the delivery model aspect, and somebody has to ensure the right ways of working are in place collaboratively. You know, it’s all about collaboration. Things like that, its a mindset right? That someone has to take a lead on that… you know, make that environment safe psychologically where people can disagree with each other without being reprimanded because otherwise its bad -- it is bad. That’s a huge thing in itself. That’s part of the team as well. These are cross-functional teams not made just by techies. There are lot of engineers of course, as I mentioned, but there are also people that are not technical by formation. They understand technology but, you know, an ethicist is a very good idea as well to have on board, to ask those right questions every stage within the workflows. To make sure you are actually doing the right thing, not wishy washy whatever. You know, I tell you for two requirements, really, but you have no say in those requirements -- if those requirements are racist. Anything goes right, THE INDIAN LEARNING/ JANUARY 2021



Somebody has to ask that question. Developers come in and just serve tons of requirements; they have to deliver -- that is how they are measured. Somebody has to ask those questions and say: you have the right to challenge those requirements if you see fit. Is that an ‘AI team’? You can call it a delivery team, you can call it an engineering team, but an AI team, really? AI is not even defined. There is no agreed definition -- there are hundreds of definitions, but no single agreed definition of AI, nor of intelligence. So you are recruiting for the role of ‘AI engineer’ when AI is not even defined -- because it cannot be defined yet -- while the role has to be defined, because you are recruiting for something. This just goes to show how bad the hype really is.

And what’s worse, the hyping is pushing science aside. That has real-life impact. I’ve seen MuZero described by someone at DeepMind, or associated with DeepMind, as superhuman -- really, superhuman. Does it understand? No. So how can it be superhuman if it has no understanding? They were claiming it understands, but machines don’t understand. Really, any decent scientist out there concurs: machines can learn, of course, but they do not understand at all. So if they don’t understand, how can they be superhuman? At computation? Yes, sure. At things that are predictable? Yes, sure. But look at things that are difficult to predict: machines are rubbish at that. Get an AI system to organise a two-week holiday for a family and it is going to do a very, very bad job of it, but people manage just fine. It’s not a doddle -- it’s not like you do it just like that [snaps fingers] -- but it’s perfectly manageable; it’s not some super-hard problem. On the other hand, take heavy computation and calculation: multiplying two six-digit numbers without a calculator of some sort is hugely difficult for a person, almost impossible. For a computer -- just a basic computer -- it is super easy, because the task is well defined, has little variation, and is predictable. That is why. When unpredictability comes to the table -- a lot of variables, ambiguity -- then people are much better than machines; machines are actually rubbish at that.

So the key is to use both. Human-machine interaction is the key here. Use machines for what they do best, use people for what they do best, and let people control and curate the output of those machines. To your earlier point about precision: use machines in those contexts, on those problems, where they can actually do good. You know exactly where you have to cut, but physically cutting with high precision -- the machine is going to do that, with an AI system in the back end, ML, computer vision, a lot of things. But you, as a physician and an expert in medicine, have the decision to tell the machine where to cut -- potentially even which tools to use, up to a point. But don’t let the machines decide, you know, ‘I’m going to do a liver transplant.’ The doctor has to say that, based on many data points, of course. And maybe the doctor will also be assisted in his or her decision by an AI system -- as long as that AI system is explainable, so that the doctor can ask: ‘Okay, can I really trust that? Can I really make a decision based on that? How was that output produced?’ Then yes, that is very beneficial indeed.”
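[Editor’s note: a minimal sketch of the kind of explainability Bogdan asks for here. The model, feature names and weights below are entirely hypothetical editorial illustrations with no clinical validity; the point is only that a model whose score decomposes into per-feature contributions lets a clinician see how the output was produced before trusting it.

    # Hypothetical linear "risk score" whose output decomposes into
    # per-feature contributions. Names and weights are invented.
    FEATURES = ["bilirubin", "inr", "creatinine"]
    WEIGHTS = {"bilirubin": 0.9, "inr": 1.2, "creatinine": 0.7}  # assumed weights
    BIAS = -2.0

    def score_with_explanation(patient):
        # Each feature's contribution is weight * value; the score is their sum.
        contributions = {f: WEIGHTS[f] * patient[f] for f in FEATURES}
        return BIAS + sum(contributions.values()), contributions

    total, parts = score_with_explanation(
        {"bilirubin": 1.8, "inr": 1.4, "creatinine": 1.1}
    )
    print(f"risk score: {total:.2f}")
    for name, value in sorted(parts.items(), key=lambda kv: -kv[1]):
        print(f"  {name}: {value:+.2f}")  # largest contributors first

A doctor shown the contributions alongside the score can answer “how was that output produced?”; a black-box score alone gives no such handle.]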

BOGDAN: “I don’t think AGI will be based on neural networks at all -- not on the concept of neural networks and deep learning that we have today, in any case -- and I don’t think it will be silicon-based, if AGI is attainable. Let’s assume, in that context, that it is. Maybe there will be bio-chips around, and bio-circuits -- not just the chips but the circuits between them, the whole network exchanging information. I reckon it will be in a very, very different format than today -- or formats, more than one. Right now it is pretty much all electricity: everything goes back to electricity; even if it’s light, it gets converted into electrical signals. [shakes head] I don’t think that will be the case with AGI. And if there is still a concept of ‘neural networks’ -- which is a very generic name, right -- they will be very different in concept, in architecture and in the way they work from the neural networks of today.

And no, they are not going to be robotic voices. They will be voices like mine, right now, right here, that you can’t really discern. Another AI system -- an AGI system -- may very well discern it, analysing all those minuscule data points and, putting them together, connecting the dots: that’s a machine. Or not. Or, hm, probably. But as people? [shakes head] Heck, Abhivardhan could be an AGI robot, and to me it all looks natural. Easy. That’s another way of looking at it. But the chances are that this ‘Abhivardhan’ would not even be visible. It would do all the things we talk about somewhere in the back end, in data centres that will look very different indeed from the data centres of today -- not necessarily hundreds of hectares of metal and glass. I can’t even see ‘Abhivardhan’, but his actions have a real impact on society and on my life, and I wouldn’t even know it’s because of ‘Abhivardhan’. That’s my point. No, it’s not going to be ubiquitous -- it’s going to be just there, and you won’t even know, most of the time. So yes, there may be robots on the streets doing menial tasks, but if it’s AGI, it’s not going to be limited to menial tasks, for sure; it’s going to be way above that. You are not just going to see binmen in the form of realistic robots clearing the streets. You may see faceless traffic controllers managing complex traffic problems in megacities, perhaps, and flights, and so on and so forth. We are not short of use cases; if there is one thing the world has an inflation of, it is use cases -- and problems to solve. But again, it comes back to the question: what society do we want, and should we really want AGI, if it is possible at all?”

MRIDUTPAL: “Okay, so, Abhivardhan sir, what is your take on this?”

ABHIVARDHAN: “Well, let’s get on to society then -- let’s talk about society and social ethics rather than the technology, because I’m a law-related guy, so I should not really speak to how the technology would be; I think we have heard the technological side by now. Socially, my understanding is that it depends on what kind of cultural system it is. As I said, the interpretation of AGI has mostly come from America and Europe; we need to see what interpretations of AGI there are everywhere else. There might be some agreement around the world, but let’s take the other aspect of it. Socially, we may or may not need it, because technology as an extension of humankind can be interpreted in any number of ways, across various vertical hierarchies around the world. Technology doesn’t just mean electronics, right? It goes beyond that; technology can start with anything. That is an abstract understanding, but a clearer, practical understanding today would be: fine, we have disruptive tech like AGI -- and I’m not even getting to the question of whether it exists -- I’m getting to the simple questions of, even if we needed it, how do we need it and why, and, once we understand what kinds of vertical and horizontal hierarchies are in consideration, how we would implement it. But I still think that question remains open; it depends on what kind of society we are dealing with.

Let me take the Indian example. The great economist Friedrich Hayek often said that the economics of Adam Smith and his successors was Newtonian economics: you scale things up, you have a cause-and-effect relationship, and then you analyse things. But Hayek proposed a different model, and I take that idea broadly -- not just about money and finance, but about understanding governance and systems, which is the interesting part. It is being implemented by Singapore, and it is now under implementation by the Indian government as well: what we call a complex adaptive system, CAS. It simply means that a system is complex by nature. It does not need to be egalitarian all the time, though there may be some egalitarian rules, some strands of that. But if it is a complex adaptive system, we need to understand the nodal points where it has effects, and how we target those points accordingly. Because, at the end of the day, policy is an intersectional field, right? You have different avenues in policy; you mix them and then you make them happen. It can be anything: traffic control is one example; it can relate to how people are educated, or how health issues are solved; it can also relate to the pendency of cases -- what kinds of cases should we deal with?

One funny example: in a webinar we organised in May, somebody asked, ‘Abhivardhan, can we have an AI-based judiciary in India?’ I said that if you try to do that on the basis of hierarchy, you certainly cannot have it at the Supreme Court level, because discussing controversial political issues is not within the scope of an AI judge. We already have an American example which does not work well at all -- the problem is not even just racism; the system does not understand how the idea of freedom basically works. And freedom and liberalism differ from place to place: in France it is completely different, in the UK it is another story, in America another still; we are actually a very messy case altogether. But I did say one thing: it could be connected to something interesting if we start at the district level, or at the panchayat level, and see how it works there. Yet even then, to take the Indian example, it is not even a tenth possible, for the simple reason that our society’s linguistic patterns are so diverse. And I’m only taking India as the example: if I go to China it becomes more complex, because the Asia-Pacific is a very complex region, and Africa is a very complex continent. I’m talking about complex societies which may not be egalitarian all the time -- they are more collective-centric -- but still, that’s the case. It’s very different.”
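[Editor’s note: a minimal sketch of the ‘nodal points’ idea in a complex adaptive system, as an editorial illustration only. The network, agents and numbers below are entirely hypothetical; the sketch shows that in a system built from simple local interactions, an intervention at a well-connected nodal point propagates very differently from the same intervention at the fringe.

    import random

    def simulate(n_agents=100, steps=50, seed_node=0, rng_seed=42):
        rng = random.Random(rng_seed)
        # Hypothetical hub-and-spoke network: agent 0 is a "nodal point"
        # every other agent listens to, plus ring neighbours on each side.
        neighbours = {i: {(i - 1) % n_agents, (i + 1) % n_agents, 0} - {i}
                      for i in range(n_agents)}
        adopted = {seed_node}  # the intervention: one agent adopts a behaviour
        for _ in range(steps):
            for agent in range(n_agents):
                # Local rule: adopt, with some probability, if a neighbour has.
                if (agent not in adopted
                        and any(nb in adopted for nb in neighbours[agent])
                        and rng.random() < 0.3):
                    adopted.add(agent)
        return len(adopted)

    print("intervention at nodal point:", simulate(seed_node=0))
    print("intervention at the fringe: ", simulate(seed_node=50))

Seeding the hub reaches nearly the whole system within the same number of steps, while seeding a fringe agent spreads only along the local chain -- the sense in which policy in a CAS is about identifying and targeting nodal points rather than scaling uniformly.]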
