The Indian Learning - Volume 2, Issue 1 (2021)


INDIAN SOCIETY OF ARTIFICIAL INTELLIGENCE AND LAW

The Indian Learning
e-ISSN: 2582-5631
Volume 2, Issue 1 (2021)
July 31, 2021
Abhivardhan, Editor-in-Chief
Aditi Sharma, Chief Managing Editor

Digital Edition isail.in/learning


e-ISSN: 2582-5631
Volume: 2
Issue: 1
Website: isail.in/learning
Publisher: Abhivardhan
Publisher Address: 8/12, Patrika Marg, Civil Lines, Allahabad - 211001
Editor-in-Chief: Abhivardhan
Chief Managing Editor: Aditi Sharma
Date of Publication: July 31, 2021
© The Indian Learning, 2021. No part of the publication may be disseminated, reproduced or shared for commercial usage. Works produced are licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. For more information, please contact us at editorial@isail.in.


THE INDIAN LEARNING / JULY 2021


Contents

ARTICLES

SPECIALS

EXCLUSIVES

Intelligent Space Exploration through AI: A Mini Primer

India Should Embrace Non-Fungible Tokens: A Mini Primer

Infosys's NIA: An Enterprise-Grade AI Platform

Mental Health Apps: the AI Psychologists

Sarcastic Content on the Internet and their Detection Using AI Tools

How Do Artificial Intelligence Assistants Interact with People?


The Beijing Consensus on AI and Education and NITI Aayog’s Response

How custom algorithms will shape the future of media buying and more...



Editorial Board

Abhivardhan, Editor-in-Chief
Chairperson & Managing Trustee, Indian Society of Artificial Intelligence and Law
abhivardhan@isail.in

Aditi Sharma, Managing Editor
Deputy Strategy Advisor, Indian Society of Artificial Intelligence & Law
aditi.s@isail.in

Kshitij Naik, Chief Managing Editor
Chief Strategy Advisor, Indian Society of Artificial Intelligence and Law
kshitij@isail.in

Mridutpal Bhattacharya, Managing Editor
Junior Research Analyst, Indian Society of Artificial Intelligence & Law
mridutpal@isail.in

Associate Editors

Abhishek Jain, Senior Associate Editor
Chief Managing Editor, Indian Journal of Artificial Intelligence & Law
abhishek.jain@isail.in

Rishika Pandey, Junior Associate Editor
Contributing Researcher, Indian Society of Artificial Intelligence and Law
rishika@isail.in



The Indian Learning | e-ISSN: 2582-5631 | Volume 2, Issue 1 (2021)

Intelligent Space Exploration through AI: A Mini Primer

Bhavana Nair, Editorial Intern, Indian Society of Artificial Intelligence & Law

The use of AI is growing at an unparalleled rate in the field of space exploration. There are, quite literally, more stars in space than there are grains of sand on Earth, and each of those stars may harbour life or host a potentially habitable planet. Even if all humans were to unite under one umbrella and study each of those stars, we would run out of time. A quicker, safer, and more reliable solution is needed to take care of all the dreary work, and it appears to be none other than artificial intelligence.[1] This article draws on a careful analysis of research articles and internet-based resources, combined with personal opinion, to understand and appreciate the potential of AI in space exploration.

The Past and Present of AI in Space Exploration

AI (artificial intelligence) has long been a comrade of space research organisations such as NASA, the European Space Agency, the CNSA (China National Space Administration) and SpaceX. The shared origin of AI and space exploration is older than anyone might think. Rocket-booster technology developed during World War II enabled the first generation of spaceflight, with artificial satellites and interplanetary probes launched by the Soviet Union and the United States; the journey had already begun by the mid-20th century.[2] Towards the end of the 20th century, SKICAT (Sky Image Cataloguing and Analysis Tool) detected what was beyond human competence, classifying roughly a thousand objects in low resolution during the Second Palomar Sky Survey. Astronomers have since used similar AI systems, convolutional neural networks, to find 56 new gravitational lenses.



With the commencement of the 21st century came another breakthrough in space exploration: the success of EO-1 (Earth Observing-1). The EO-1 satellite was efficacious in obtaining images of natural calamities; even before the ground crew realised an event had occurred, the AI operating aboard it had begun photographing the hazards. It was the first satellite to map active lava flows from space, to assess a facility's methane leakage from space, and to track re-growth in a partially logged Amazon forest from space.[3] In 2017, NASA, together with Google, made a scientific breakthrough by discovering two previously unidentified exoplanets, Kepler-90i and Kepler-80g, with the help of artificial intelligence.

Exploring space generates massive quantities of information that cannot be processed by human intelligence alone. This is where applications of artificial intelligence count. AI can alter the course of space exploration by analysing data and extracting meaning from it. With the results, researchers may find life on new planets. AI can help recognise and monitor trends that humans could not have spotted, and it can identify planets with the right conditions to sustain human life.

The rovers presently exploring the surface of Mars are expected to make decisions without explicit commands from mission control, and AI applications are what make this possible. For example, NASA's Curiosity rover can move on its own, avoiding obstacles along the way and identifying the best route to take.

We receive data from space in the form of pictures; the task is to decipher those images and extract the information required. Here, machine learning helps. The NASA Frontier Development Lab and tech giants such as IBM and Microsoft have joined forces to harness machine learning for detecting solar-storm damage, measuring the atmosphere, and gauging the 'space weather' of a given planet by evaluating its magnetosphere and atmosphere. Machine learning, a branch of artificial intelligence, played a major part in the successful landing of SpaceX's Falcon 9 at Cape Canaveral Air Force Station in 2015, determining the ideal way to land the rocket from real-time data that enabled route prediction. AI applications can also be used to understand the geological composition and historical significance of a planet; beyond that, AI can capture, evaluate, and classify images of it and decide on the next appropriate move. Deep learning, another branch of artificial intelligence, could be used for automatic landing, logical decision-making, and fully automated systems.

The next generation of spacecraft will be more independent, self-sufficient, and autonomous thanks to artificial intelligence. AI will go beyond human limitations to make observations and send information back to Earth. AI applications can augment planetary tracking systems, allow smart data transmission, and eliminate the risk of human error through predictive maintenance.



Why is AI Used in Space Exploration?[4]

It is a widely accepted fact that artificial intelligence is the key to unlocking further developments in almost every arena. With respect to space exploration, there are a number of ways in which artificial intelligence can help.

1. Astronaut Assistants: Virtual assistants can detect hazards during long space flights, such as disturbances in the spacecraft atmosphere, and provide medical assistance to astronauts who fall sick in zero-gravity conditions. AI can help condition humans before a long-distance space journey and can be particularly useful for operations in deep space, or on another planet, when the reporting system fails to communicate. With deep learning applied to speech and facial recognition, AI could also hold a two-way conversation with astronauts and learn from those conversations. One such space-exploration camaraderie between a human and a machine began with CIMON (Crew Interactive Mobile Companion), a compact, football-shaped, AI-endowed robot launched to the ISS on a SpaceX Dragon cargo capsule on 29 June 2018. CIMON can alleviate astronauts' anxiety by completing the tasks they require of it. NASA is also designing Robonaut, a companion for astronauts on board the ISS that will work closely alongside them or undertake tasks too hazardous for humans. Another noteworthy example comes from the Japanese space agency JAXA, which developed an intelligent system, Int-Ball, for the ISS to take pictures of observations in the Japanese module. This autonomous, self-propelled, navigable ball camera uses current drone technology and is intended to support astronauts with on-board challenges and exploration missions.

2. Planning and Designing Missions: Conventionally, new space missions depend on information gathered in previous studies. However, this information may be limited or not readily accessible. There is a need for a smart system that can respond to researchers' queries in real time. Researchers are working on the concept of a Design Engineering Assistant to minimise the time needed for initial mission design, which would otherwise take many hours of human work. "Daphne" is an example of an intelligent assistant for the design of Earth-observation satellite systems. Used by systems engineers in satellite design teams, Daphne supports their work by providing access to pertinent information, including reviews and answers to specific problems.

3. Satellite Data Processing: There have been several crowdsourcing projects aimed at satellite imagery analysis, but only on a limited scale. For detailed analysis, artificial intelligence can prove to be a boon. In recent research, scientists have tested various AI techniques for remote satellite health monitoring. These are effective for analysing data received from satellites to spot problems, assess satellite health and performance, and present a visual picture for strategic planning.

4. Tackling Space Debris: According to the European Space Agency, there are almost 34,000 objects greater than 10 cm in size that pose significant risks to current space infrastructure. Artificial intelligence might help solve this problem. In a recent study, researchers developed a framework for designing collision-avoidance manoeuvres using machine learning (ML) approaches. One recently suggested way to ensure the safety of space flights is to run already-trained networks on board the spacecraft. This allows more versatility in satellite design while minimising the danger of collisions in space.

5. Effective Navigation in Space: The NASA Frontier Development Lab has been working on an AI programme that functions like a GPS in space, making it easier to reach Titan, Mars, or even the Moon. The use of GPS and other GNSS systems in Medium Earth Orbit (MEO), Geostationary Orbit (GEO) and beyond is "an emerging capability," according to Miller, Positioning, Navigation and Timing (PNT) policy chief at NASA's Goddard Space Flight Center.[5]

India's Feat in Space Exploration Using AI

The Indian Space Research Organisation (ISRO) created a solar-powered robotic vehicle named Pragyan to explore the lunar surface as the rover of the Chandrayaan-2 mission. Pragyan carried LIBS (Laser-Induced Breakdown Spectroscopy) from the Laboratory for Electro-Optic Systems, Bengaluru, to identify elements present near the landing site, as well as an APXS (Alpha Particle X-ray Spectrometer) from the Physical Research Laboratory, Ahmedabad, to inspect the composition of the elements identified by LIBS. Artificial intelligence enabled the Chandrayaan-2 rover in numerous ways: the AI-powered Pragyan could communicate with the lander; it contained motion technology designed to help the rover traverse the Moon's surface; its algorithms could help the rover spot signs of water and other minerals on the lunar surface; and AI would have allowed the rover to share pictures for research and experimentation.





Conclusions

It is evident that artificial intelligence has the much-needed potential to facilitate advanced space-exploration programmes and possibly aid in discovering more exoplanets, which would not have been possible with traditional technologies and human intelligence alone.

References



[1] Ronald van Loon, How AI is Transforming Space Exploration, LinkedIn (Feb 8, 2021, 2:30 PM), https://www.linkedin.com/pulse/how-ai-transforming-space-exploration-ronald-van-loon
[2] Artificial Intelligence in Space Exploration – Importance of AI in Space Exploration, AIlabs (Feb 8, 2021, 2:52 PM), https://ailabs.academy/artificial-intelligence-in-space-exploration-importance-of-ai-in-space-exploration/
[3] Sakshi Gupta, AI Applications in Space Exploration: NASA, Chandrayaan-2 and Others, Springboard (Feb 8, 2021, 3:02 PM), https://in.springboard.com/blog/ai-applications-in-space-exploration-nasa-chandrayaan2-and-others/
[4] Ibid.
[5] Five Ways Artificial Intelligence Can Help Space Exploration, The Conversation (Feb 9, 2021, 2:28 PM), https://theconversation.com/five-ways-artificial-intelligence-can-help-space-exploration-153664


Mental Health Apps: the AI Psychologists

Aditi Biswas, Editorial Intern, Indian Society of Artificial Intelligence & Law

As the world has grown more complex, a rise in mental health problems has been inevitable. With it has come a very interesting phenomenon: the development of artificial-intelligence-based mental health applications (MHAs). Accessible to anyone with a smartphone through the various app stores, they usually promise improved mental health through relaxation exercises and stress-management skills, among other features. Behavioural scientists who study organisational behaviour have long considered the idea of automating psychology a non-starter, given how evidently psychology is a humane profession specialising in empathy and intuitive skills that cannot be mimicked by a machine. AI, however, has changed what we consider possible for machines adopting human-like behaviour.

A psychologist's job includes assessing a patient's problems with computer-aided psychological tests, using assessment and evaluative tools to diagnose those problems as a condition, formulating treatments or interventions for those conditions through therapy, and evaluating a summary of all of these parts. Several MHAs are based on cognitive behavioural therapy, which identifies and changes dangerous or destructive thought patterns that negatively influence emotion and behaviour. The most common manifestation of AI in these apps is the chatbot: software that can simulate a conversation with a user in natural language through messaging applications and, sometimes, voice. AI chatbots in MHAs are programmed with therapeutic techniques to assist people with anxiety and depression, but the promise of this technology is tempered by concerns about the apps' efficacy, privacy, safety, and security.

Any good technological advancement has both advantages and disadvantages. The most prominent advantages of MHAs are as follows. They are often preferred over consulting human psychologists, partly due to the stigma attached to mental health. They give their user-patients the convenience of 'medical' assistance with mental health free of cost, or at very low prices, on one's own schedule, right on one's own device. Patient engagement is higher thanks to real-time interaction, usage reminders, and gamified interactions. The use of pictures rather than text, reduced sentence lengths, and inclusive, non-clinical language creates a simpler user interface for patients, especially as their cognitive load is reduced. And since mental illnesses tend to manifest alongside one another, MHAs whose diagnosis methods treat symptoms shared by multiple disorders increase patient engagement and treatment efficacy by reducing the commitment needed to interact with multiple apps.






Features of MHAs that let users increase their emotional self-awareness (ESA), by self-monitoring and periodically reporting their thoughts, behaviours, and actions, have been shown to reduce symptoms of mental illness and improve coping skills. The anonymity that MHAs offer their user-patients is unparalleled: whereas human psychologists are professionally obligated to keep patients' personal information confidential, MHAs don't directly communicate with another person, removing the dilemma of confidentiality altogether. MHAs are also more consistent in their treatment than human psychologists, by virtue of their programming.

The most prominent disadvantages of MHAs are as follows. They frequently lack an underlying evidence base, and a lack of scientific credibility is commonly noted; consequently, their clinical effectiveness is limited. They cultivate an over-reliance on apps in their user-patients. The equity in access that they provide may lead at-risk people to use them instead of seeking professional help, which may prove dangerous. Self-diagnosis can increase anxiety in user-patients. Clinicians are not professionally involved when these applications are developed, ruling out professional intervention in case of potential red flags. And most MHAs focus purely on one condition, disorder, or illness, whereas professional help is a more comprehensive process with a more inclusive treatment plan. While an AI chatbot may give a person a place to access tools, a forum to discuss issues, and a way to track moods and increase mental health literacy, AI is not a replacement for a therapist or other mental health clinician. Ultimately, if AI chatbots and other MHAs are to have a positive impact, they must be regulated, well-informed, peer-reviewed, and evidence-based; and society must avoid techno-fundamentalism in relation to AI for mental health.
There are two kinds of laws under which health professionals fall that may or may not apply to MHAs: medical negligence laws and consumer protection laws, both of which intersect in the healthcare sector. An act or omission (failure to act) by a medical professional that deviates from the accepted medical standard of care is known as medical negligence. Negligence is used as a tool to ascertain fault in a civil case where injuries, losses, or damages occur to a party. Medical professionals owe their patients a certain standard of care, generally accepted to be the level and type of care that a reasonably competent and skilled healthcare professional, with a similar background and in the same medical community, would have provided under the circumstances that led to the negligence. However, a claim or suit can be brought against a medical professional only if the negligence had a detrimental effect on the patient (damages), and if the harm caused to the patient was a foreseeable result of the negligence (legal causation). Medical negligence claims against psychologists involve the same requirements as a regular case of medical negligence, although psychological harm is always more difficult to prove than other forms of harm. An example of medical negligence by a psychologist would be diagnosing a patient who has bipolar disorder, which includes manic as well as depressive episodes, with mere clinical depression, and hence possibly prescribing incorrect medication.
When it comes to health applications, the developers of such apps will not fall under the ambit of medical negligence or medical malpractice, since those laws apply only to a doctor-patient relationship. Consumer protection laws, however, will be applicable in such a scenario: specifically, product liability laws, which provide consumers with legal recourse for injuries suffered from a defective product. A product is required to meet the ordinary expectations of a consumer, so the responsibility lies with manufacturers and sellers to ensure the safety and quality of the product as described. Since an MHA passes as a product, product liability laws are likely to apply to it. There is, however, a grey area in the law when it comes to applications available free of cost.



Also, some healthcare applications add an extra layer of protection for themselves by making their customer-patient-users sign digital consent forms. These forms generally ensure that the patient knows the risks and complications before a treatment begins and is aware of alternative treatment options, providing an additional safeguard against grievance or negligence lawsuits. Furthermore, some applications make these forms 'tamper-proof', allowing no modification after signing. Even though such forms must comply with current law, they can be borderline exploitative of the desperate condition of their user-patients. Additionally, in the case of medical institutions, corporate negligence and vicarious liability may prove effective where medical negligence does not. In particular, the concept of 'negligent credentialing' might come into play if an MHA attached to a hospital uses an AI whose credentials have not been appropriately reviewed, just as a hospital must review the credentials of the doctors and other staff it hires. Owing to the heavy commercialisation of medical practice around the world, consumer protection laws do intersect with medical negligence law. Here too, however, the Consumer Protection Act of India covers all services provided by medical practitioners to patients except those provided free of cost.


Stigma surrounding mental health, not only in India but in the rest of the world as well, keeps people from demanding their fair share as patients or consumers. AI-based MHAs available free of cost completely escape the scope of the laws that generally cover medical institutions, practitioners, or other applications. This is dangerous for patients struggling with mental illness or mental disorders, given the desperate situation in which they have to rely on free applications for medical assistance. Legal safeguards are needed here, because of the risk these situations pose to at-risk individuals and, arguably, to society as a whole. Even for low-cost MHAs, only selective consumer protection laws apply. MHAs fall under the ambit of healthcare applications, and they must therefore be brought within the legal scope of medical negligence law. Finally, for AI-based MHAs to be approved and made readily accessible to all through app stores or the internet in general, there need to be more stringent requirements: involving mental health professionals in the development of these applications, getting them peer-reviewed by a group of professionals, and basing them on evidence from trials with users (with their informed consent) might go a long way toward the advancement and widespread effective use of MHAs.



India Should Embrace Non-Fungible Tokens: A Mini Primer

Oruj Aashna, Editorial Intern, Indian Society of Artificial Intelligence & Law

A non-fungible token (NFT) is an irreplaceable digital asset or token, with each token symbolising a unique item, such as art, a poster, a game, or real estate. The word 'fungible' denotes something replaceable by another identical item; non-fungible tokens are therefore tokens that cannot be replaced with another non-fungible token. Put differently, each token is different, unlike digital or physical currency, which holds identical value when replaced by the same amount. The NFT is not a new concept: it was created around 2012-13. It started gaining popularity in 2019, when the French street artist Pascal Boyart painted a mural inspired by a famous masterpiece of Eugène Delacroix. The French authorities did not approve of the art and decided to paint over Boyart's mural in order to hide the message behind the artwork. Boyart took a picture of the mural and minted it as an NFT. Today, even though the physical mural no longer exists, it lives on as an NFT, and it is among the most popular artworks minted on a digital platform.
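The distinction between fungible and non-fungible value described above can be illustrated with a minimal sketch. The data below is purely hypothetical and only demonstrates the concept: fungible units of equal amount are interchangeable, while each non-fungible token carries a unique identity.

```python
# Fungible value: any unit is interchangeable with any other of equal amount.
# Two 100-rupee balances are equivalent; swapping them changes nothing.
fungible_a = {"currency": "INR", "amount": 100}
fungible_b = {"currency": "INR", "amount": 100}
print(fungible_a == fungible_b)  # interchangeable, so they compare equal

# Non-fungible value: each token carries a unique identifier, so no two
# tokens are equal, even when they reference the same kind of item.
nft_a = {"token_id": 1, "item": "mural photograph"}
nft_b = {"token_id": 2, "item": "mural photograph"}
print(nft_a == nft_b)  # distinct tokens, so they compare unequal
```

This is why an NFT can certify ownership of one specific item, whereas a unit of currency cannot: swapping two identical banknotes is meaningless, but swapping two NFTs exchanges two different things.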

Why NFT?

With the world extending its digital scalability, everything is available online. Most notably, the trading of art and digital artwork has become accessible in one click, with artwork available on every next page of a different website. Thus the ownership and authenticity of a work are often unknown. The digital platform has adversely affected the original creations of artists: with no security and unknown ownership, artists are left with only nominal value. NFTs were designed to tackle this problem of ownership in the digital marketplace. In simple terms, an NFT represents ownership of a unique item: an item, when attached to a token, carries a certification of ownership of that unique item.





What Are Classified as Unique Items?

A unique item could be anything original that holds value in the eyes of people, provided it is original and irreplaceable. Note that an asset or token need not be artistic: an item could be a mere post or a tweet. The most relevant asset projecting the value of "originality" is the one-line tweet by Jack Dorsey, which was sold for $2.9 million in an NFT-based transaction.

How Does NFT Work?

NFTs function and are stored on a blockchain ledger. The blockchain records the token data, meaning the token and its associated ownership are locked in a block. Simply put, when an item or token is stored on the blockchain, the data about its actual ownership is recorded. Whenever the NFT is transacted, sold, or purchased, the blockchain records this on a subsequent block. With every sale or resale of an item, the original artist receives a certain percentage as a royalty, and all of this is recorded in ledger-based accounting. Blockchain thus provides peer-to-peer transactions and consequently prevents reproduction or piracy of original items. Whenever artists decide to put their art up as an NFT, it is assigned a unique token, which is then stored on the blockchain. The NFT is then sold at auction, with prospective buyers bidding against each other; when the price reaches its highest amount, the token is sold and the highest bidder becomes the owner of that particular token.

Since NFTs reside on a public blockchain, anyone holding a cryptocurrency wallet on platforms such as Ethereum, Binance, Flow, Algorand, and others can buy an NFT. NFTs are typically purchased and sold using crypto assets, though some crypto exchange platforms, such as Nifty Gateway, also allow investors to buy them with debit or credit cards.

The Indian Context of NFT

Recently, Indian programmer Vignesh Sundaresan paid $69.3 million for digital art created by the artist Mike Winkelmann, popularly known as Beeple. The token was sold in an auction conducted by the auction house Christie's. With this record-breaking auction, Vignesh, together with Anand Venkateswaran, became the holder of the highest-priced token, opening up hope for millions of investors and crypto-asset platforms in India. Various Indian artists are now stepping into the digital market to offer their artwork as NFTs; the musician Nucleya, for instance, has decided to put an album on the digital market as an NFT. WazirX, apparently India's most popular cryptocurrency exchange platform, launched the country's first NFT marketplace for Indian artists. WazirX will allow artists across India to list their digital assets or non-fungible tokens, including anything artistic or unique, ranging from artwork to a mere tweet. Looking at the Indian audience's growing interest in NFTs, the company's CEO has pledged to eliminate the fees paid to miners to verify transactions, making the platform more lucrative for customers. Alongside its gaining popularity, the asset is also steeped in questions about its longevity and sustainability in the Indian environment, largely because of government intervention in digital-asset transactions (discussed below).
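The ledger mechanics described here, where a unique token is minted, ownership changes are appended block by block, and the original artist earns a royalty on every resale, can be sketched in a few lines of Python. This is a toy model, not any real blockchain: the 10% royalty rate, the class names, and the in-memory "blocks" list are all illustrative assumptions.

```python
from dataclasses import dataclass, field

ROYALTY_PCT = 10  # hypothetical royalty: 10% of every resale goes to the creator


@dataclass
class Nft:
    token_id: int
    creator: str
    owner: str


@dataclass
class ToyLedger:
    """A toy append-only ledger: every event is a new entry, never an edit."""
    nfts: dict = field(default_factory=dict)
    blocks: list = field(default_factory=list)
    next_id: int = 1

    def mint(self, creator: str) -> int:
        """Assign a fresh unique token and record the mint on the ledger."""
        token_id = self.next_id
        self.next_id += 1
        self.nfts[token_id] = Nft(token_id, creator, creator)
        self.blocks.append(("MINT", token_id, creator))
        return token_id

    def sell(self, token_id: int, buyer: str, price: int) -> int:
        """Record a sale; on a resale the original creator earns a royalty."""
        nft = self.nfts[token_id]
        royalty = 0
        if nft.owner != nft.creator:  # resale, not the primary sale
            royalty = price * ROYALTY_PCT // 100
        self.blocks.append(("SALE", token_id, nft.owner, buyer, price, royalty))
        nft.owner = buyer  # ownership transfers; history stays on the ledger
        return royalty
```

A quick run shows the behaviour the article describes: `ledger.mint("artist")` creates token 1; the primary sale to a first buyer pays no royalty; a later resale pays the artist 10% of the new price, and every step remains recorded in `ledger.blocks`.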


Governments' View on NFT As of now, there is no general prohibition on NFTs, which could penalize its movement in India. Although the Government is silent about the legality of NFT, a previous approach of the Government toward cryptocurrency has certainly made NFT a skeptical area. Off late in 2019 government issued a bill called Banning of Cryptocurrency and Regulation of Official Digital Currency Bill, which happens to be the “Bill” for banning private digital currency and regulating public or official digital currency in India. Section 3(1) of the Bill states, “No person shall mine, generate, hold, sell, deal in, issue, transfer, dispose of or use Cryptocurrency in the territory of India”, Does NFT come under the definition laid down in the Bill?

If we examine the definition of cryptocurrency, there is a possibility of NFT being fall under the above description's ambit. Since NFT is a crypto token that signifies both representations of value and a store of value, it is likely to fall under the said definition. NFT being public and non-fungible, there is a hope that it may get exempted from the prohibition or penalty. Moreover, section 3(3) of the bill states, “Nothing in this Act shall apply to the use of Distributed Ledger Technology for creating a network for delivery of any financial or other services or for creating value, without involving any use of cryptocurrency, in any form whatsoever, for making or receiving payment” The above para lays down the exception to it by allowing ledger technology to create a network for the flow of any financial or other services or create value. But there’s still an ambiguity to this Bill. But recently, the opinion of the Government has shifted to a pessimistic point. Our finance minister, Nirmala Sitharaman, made it clear that there will not be a complete ban on cryptocurrency or at least the technology associated with it. In her interview stated that there is a specific window for people to experiment with blockchain and cryptocurrency. The Government acknowledged that existing laws on cryptocurrency are inadequate to deal with the subject. The new legislation will clear the Government's view on cryptocurrency. The question arises: Would NFT be affected by any future laws of banning crypto transactions? There is a possibility of prohibition because the definition of cryptocurrency verifies it as cryptocurrency. Considering the handful of investors on NFT in India, banning the token will be a bad idea. The Government’s concern on cryptocurrency might affect NFT because these tokens are termed as “cryptocurrency” most of the time. The notion needs to be changed because NFTs are more of an asset and not just a currency. 
NFTs can be exempted from classification as a currency for one primary reason: they cannot be interchanged. Because each NFT represents a unique item, it does not function as a medium of exchange.

THE INDIAN LEARNING/JULY 2021 14

The Indian Learning | e-ISSN: 2582-5631 | Volume 2, Issue 1 (2021)

Section 2(1)(a) defines cryptocurrency as “….any information or code or number or token not being part of any Official Digital Currency, generated through cryptographic means or otherwise, providing a digital representation of value which is exchanged with or without consideration, with the promise or representation of having an inherent value in any business activity which may involve risk of loss or an expectation of profits or income, or functions as a store of value or a unit of account and includes its use in any financial transaction or investment, but not limited to, investment schemes”


Ankesh, Editorial Intern, Indian Society of Artificial Intelligence & Law

Microsoft is one of the world's technology leaders and is involved in almost every sector of the information technology market. Azure AI is available on a licence basis, like other Microsoft products, so users can adopt the software according to their needs. At its core, Azure AI is a toolset for data analysis and decision making. In Microsoft's own words, it is "a portfolio of AI services designed for developers and data scientists. Take advantage of the decades of breakthrough research, responsible AI practices, and flexibility that Azure AI offers to build and deploy your own AI solutions." A fundamental step in any market research is knowing the target audience, or which type of customer the product is actually aimed at. Microsoft's Azure is aimed mainly at data scientists and tech enthusiasts who need to use the data they collect in an organised manner. The platform also integrates software such as Jupyter Notebook and Visual Studio so that users can take full advantage of its AI and machine learning capabilities. Microsoft describes Azure Cognitive Services as "a comprehensive family of AI services and cognitive APIs to help you build intelligent apps," and claims to have the "most comprehensive portfolio of domain-specific AI capabilities on the market," although its competitors might disagree with that assessment. Azure Cognitive Services is aimed at developers who want to incorporate machine learning into their applications.


The Capabilities in Microsoft Azure's AI/ML System: A Mini Primer


What is Azure?

Azure's AI services are a group of integrated services designed to get the best possible use out of any data available for processing on the network. These capabilities are also used in applications such as Visual Studio Code for better optimisation: hours of coding can be finished in minutes through templates and AI-powered suggestions available to users. The same capabilities are gradually appearing in Microsoft's productivity suite, such as Word and PowerPoint, as well as in Microsoft's in-house image and video editing platforms. Instead of requiring companies to run all Azure services on the Azure platform, Microsoft also offers several Docker containers that let companies run a subset of Azure Cognitive Services locally, behind their own firewalls. This suits companies with strict data security policies and those handling protection-restricted data. In addition, as the debate over responsible AI grows, Microsoft has released several open-source AI projects, such as Fairlearn, InterpretML, and SmartNoise.

This set of services is collectively called Azure Cognitive Services. As mentioned earlier, Microsoft defines it as a group of services that use artificial intelligence and machine learning capabilities to help developers build better applications for customers. The services cover four main areas of expertise: Decision, which helps you choose or decide the next step; Language, which covers understanding text and translating between languages; Speech; and Vision. Web search and related computer interaction have been given a separate section within Azure Cognitive Services. These services are ready to use and train on the plethora of data they are fed every day, the basic working principle of AI. Machine learning expertise is usually not required to use Azure Cognitive Services, at least not to the degree you would need for Azure Machine Learning. Some Azure Cognitive Services require configuration, but you do not need to understand machine learning to configure them. Almost all Azure Cognitive Services offer a free plan.

The Decision Support of Azure Services

This area comprises four services: an anomaly detector, a content moderator, a metrics advisor, and a personaliser. The Anomaly Detector service integrates anomaly detection into your application, allowing users to quickly identify issues as they arise. No previous experience with machine learning is required. Through the API, Anomaly Detector receives time-series data of any type and selects the best anomaly detection model for that data to ensure high accuracy. You can configure parameters to customise the service according to your business risk profile, and you can run Anomaly Detector anywhere, from the cloud to smart edge devices.
The Content Moderator service is designed for social media, product review websites, and games with user-generated content; it performs image, text, and video moderation, with human review available for low-confidence or nuanced predictions. The Metrics Advisor service builds on the Anomaly Detector service and lets you track your organisation's growth metrics, from sales revenue to manufacturing, in near real time. It also tailors the model to your scenario, provides detailed diagnostic analysis, and alerts you to anomalies.
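The idea behind time-series anomaly detection can be illustrated without the Azure service at all. The sketch below is a deliberately simple local stand-in, not Anomaly Detector's actual model-selection logic: a point is flagged when it deviates from the mean of the preceding window by more than a configurable number of standard deviations, which plays the same role as the sensitivity knob the service exposes for tuning to a business risk profile.

```python
import statistics

def flag_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value sits more than `threshold` standard
    deviations away from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent) or 1e-9   # avoid division by zero
        if abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Steady traffic with one obvious spike at index 7.
traffic = [100, 102, 99, 101, 100, 103, 98, 500, 101, 100]
print(flag_anomalies(traffic))  # → [7]
```

The real service chooses among several detection models per series; this toy version shows only the core intuition of "deviation from recent behaviour".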


These open-source projects have proven quite handy for users and for Microsoft too. According to InfoWorld, this open-source code and its APIs can also be integrated with other Microsoft software, increasing their productivity. "Fairlearn contains mitigation algorithms as well as a Jupyter widget for model assessment, and has been integrated into a Fairness panel in Azure Machine Learning." InterpretML helps you "understand your model's global behaviour, or understand the reasons behind individual predictions, and has been integrated into an Explanation dashboard in Azure Machine Learning." The SmartNoise project, developed in collaboration with OpenDP, aims to make differential privacy usable in practice by providing basic building blocks for working with sensitive data; you can use SmartNoise from a Python notebook by installing the package, importing it, and adding calls where sensitive data is processed. Many frameworks and tools are used in the world of machine learning, deep learning and artificial intelligence. Azure AI directly supports dozens of these, and hundreds more are supported through Azure-managed integrations. Some, like MLflow, are integrated as Python packages; others, like Pachyderm, typically run as containers on Azure Kubernetes Service (AKS).
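The model-assessment idea behind Fairlearn, computing a metric separately for each sensitive group and comparing the results, can be sketched without the library itself. The records below are invented for illustration; Fairlearn's own `MetricFrame` performs this kind of disaggregation over real models and metrics.

```python
# Invented toy predictions: (true_label, predicted_label, sensitive_group)
records = [
    (1, 1, "A"), (0, 0, "A"), (1, 1, "A"), (1, 0, "A"),
    (1, 1, "B"), (0, 1, "B"), (0, 1, "B"), (1, 1, "B"),
]

def accuracy_by_group(rows):
    """Group-wise accuracy: the core of a disaggregated fairness report."""
    totals, correct = {}, {}
    for y_true, y_pred, group in rows:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (y_true == y_pred)
    return {g: correct[g] / totals[g] for g in totals}

per_group = accuracy_by_group(records)
print(per_group)                                          # accuracy per group
print(max(per_group.values()) - min(per_group.values()))  # the accuracy gap
```

The gap between the best- and worst-served groups is the kind of quantity Fairlearn's mitigation algorithms then try to shrink.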


Language Platform in Azure

The Language area of Azure Cognitive Services includes an immersive reader, a language understanding service, a Q&A platform, text analytics, and a language translator. These services can be combined to give the best possible results. The Immersive Reader converts text and images into the most readable form possible for the user. "The Azure Language Understanding service, also called LUIS (the 'I' stands for intelligent), allows you to define intents and entities and map them to words and phrases, then use the language model in your own applications." The question-and-answer service lets users layer questions over existing text data such as FAQs. Text Analytics is an AI service that extracts insights such as sentiment, entities, relationships and key phrases from unstructured text. It can identify key phrases and entities such as people, places, and organisations to surface common themes and trends. A related service, Text Analytics for Health (currently in preview), lets you classify medical terms using pre-trained, domain-specific models. Sentiment analysis and text evaluation in multiple languages can help you better understand customer opinions. The area also includes a translator, which is available as a standalone application for many Windows devices.

The Vision platform of Microsoft's Azure is probably one of its most interesting offerings, with a promising future: these services would sit at the front line if virtual reality takes off. They work in layers, namely computer vision, custom vision, face detection, form recognition, and video indexing. These features analyse images and videos to extract data for downstream services. Custom Vision uses transfer learning to generate a customised image model from a few tagged images rather than the thousands normally needed, and the model continues to improve as more images are added. "The Face service includes face detection that perceives faces and attributes in an image." The video indexing service automatically extracts metadata such as speech, text, faces, speakers, celebrities, emotions, topics, brands and scenes from video and audio files; you can then access this data from your application or framework, or use it to make content more discoverable.

Azure Machine Learning and Other Services

In addition to the major services above, Azure also provides services such as Bot Service and Databricks, part of the range that makes Azure such a comprehensive product. Azure Bot Service is a managed service for creating and operating chat agents for various application areas, with an open-source SDK (derived from Cortana development) for building Q&A bots, virtual assistants, and more. It provides access to natural language capabilities in the Azure cloud and deploys bots across multiple communication channels and messengers. Azure Databricks is part of Microsoft's cloud data warehouse ecosystem: built on Apache Spark, it allows setup, preparation and training on large amounts of data, and is an essential element for working with near-real-time data or high-volume streaming IoT data.
Azure Machine Learning is similar in spirit to Azure Cognitive Services, but it supports more end-to-end model development. The features of the two overlap, though there are clear distinctions. Azure Machine Learning is the core data science cloud service for building, training and deploying machine learning models. It provides an easy-to-use visual interface for combining open-source or off-the-shelf models and transforming data by dragging and dropping components into data pipelines. As with hand-written code in programming languages such as R or Python, detailed model configuration and customisation are also possible. In addition, there are smaller services such as Web Search (currently under Microsoft's Bing) and Data Science Virtual Machines, cloud solutions for creating workstations for data processing and analysis without the need for a fully integrated data storage environment.


The Speech and Vision Platform of Azure

In addition to the services above, the Speech area of Azure Cognitive Services includes speech-to-text, text-to-speech, speech translation, and speaker recognition. Speaker recognition covers two use cases. For identification, it matches the voice of an enrolled speaker from within a group, which is useful when transcribing conversations. For verification, it can use either passphrases or free-form voice input to verify individuals for secure customer engagements. Speech-to-text, and its reverse, can be used to enter information without physically touching a device. "Microsoft describes its Speech to Text service as allowing you to quickly and accurately transcribe audio to text in more than 85 languages and variants." The speech translation service can translate audio from more than 30 languages and tailor the translation to your organisation's specific context.


Conclusions

Azure's AI services are among the best front-end artificial intelligence and machine learning offerings currently on the market. They are easy to access and widely regarded as customer friendly. The services are best suited to the everyday tasks a scientist, data analyst or developer might face, and the range is wide enough that many companies can adopt the product. Cutting-edge research work, however, is less of a fit for Microsoft Azure: it can handle such work, but not especially efficiently.



Ankesh, Editorial Intern, Indian Society of Artificial Intelligence & Law

Sarcasm is a popular form of expression, mostly used humorously. Detecting sarcasm, or as the common complaint goes, dealing with "people who do not understand sarcasm", is far less straightforward. Sarcasm can be defined as "a sharp, bitter, or cutting expression or remark; a bitter gibe or taunt". It is often understood by being aware of the context of a particular subject matter; without that context, it may be impossible to gain such insight. This has been the general understanding of sarcasm, but as technological development advances, sarcasm too has become entangled with technology. Sarcasm is a rhetorical way of expressing dislike or negative feelings through an exaggerated verbal picture, a kind of mock praise that conveys hostility without being explicit. In face-to-face conversation, sarcastic speech is easily identified through the speaker's expressions, gestures, and tone. Recognising sarcasm in text communication, however, is not trivial, because these signals are absent. As Internet usage increases, detecting sarcasm in online communications on social media, discussion forums, and e-commerce websites has become important for public discourse analysis, sentiment analysis, and identifying cyberbullying online.


Sarcastic Content on the Internet and their Detection Using AI Tools


This has aroused a lot of interest in neuropsychology and linguistics too, but the development of computational models that automatically detect sarcasm is still in its infancy. Previous work on detecting sarcasm in text has used lexical (content) and pragmatic (context) cues such as interjections, punctuation, and sentiment shifts, which are important indicators of sarcasm. In such work, features are crafted manually and do not generalise to the informal, figurative language often used in online conversations. As the technology has advanced, we can do much more by leveraging neural networks and deep learning, among other techniques. These can be used to study both "lexical and contextual features", making it easier for researchers to work in a well-organised manner. Artificial intelligence is an important part of this.

Social Media and Sarcasm

Today's world has been transformed by advances in communication technologies such as mobile phones and social media, which have led to an exponential increase in data production. In recent years, people have used social sites like Twitter and Facebook in bulk to collect and share thoughts, opinions, and discoveries, and to engage in various discussions. This data can be analysed for a variety of purposes, including sentiment analysis and evaluating an author's mood. Such information can influence an audience, so it is important to understand the nuances of the authors contributing data to these social sites. Moods can range from confused and provocative to distracted or disgusted. Psychologists study people's various moods and their origins; mood affects an individual's behaviour, which can affect not only their own life but also others'. Mood is related to emotion and focuses primarily on opinions and attitudes, which is why emotions are considered subjective. Some describe emotions as natural reactions of admiration, longing, discomfort, and disgust; sentiment refers to motivation shaped by emotion, evaluation, or observation. The Internet carries many types of data, from short texts such as tweets to long texts such as arguments. Twitter, a popular social site, hosts billions of tweets that provide a great deal of material for understanding sarcasm. Sentiment cues play an important role in the mood-analysis process, and today researchers use them to understand an individual's state of mind. In this article, we look at the work of various researchers in this field, at these cues and their uses, and at the technical details required to develop such models.


The world of the internet is full of different things: education, online marketplaces, even the latest news. Amidst all this, we have developed platforms to share that news, education and more. Social media is one such platform, where people share things according to their interests: pictures they like, news they might enjoy, and so on. This use of social media has produced a distinct category of content, the witty and funny posts often referred to as "memes". Sometimes this content crosses the reasonable limits of what a platform allows, and since artificial intelligence does most of the censoring, the AI may not understand the sarcasm or humour in a piece of content. Twitter, one of the social media giants, faces this issue frequently, as most of its content is more public than that of platforms like Instagram or Facebook. This type of censorship, often known as algorithmic censorship, regularly faces backlash from users after content is removed from their platform. In today's data-driven world, where we rely on the internet for even the smallest things and a majority of the global population is online, far more people are making sarcastic and witty posts, so it becomes important to watch for content that crosses a reasonable line and thereby invites censorship. Censorship itself has acquired a negative connotation over the last few years, but the owner of a platform should be free to decide what is available on it and how its content is used; a reasonable amount of censorship is required to maintain a certain order in our society. Artificial intelligence comes to the rescue of platforms on this issue.
If used properly, AI can easily censor certain words, or certain groups of words or phrases, that it judges demeaning to the platform. But how exactly does this work? There have been different approaches to these studies, and many countries, especially the United States of America and China, have moved quite far ahead in this research; various methods have been developed along the way. These methods, along with their usage, are discussed further in this article.


The Method of Sentiment Analysis

One way to detect sarcasm, or more broadly any emotion, is the method of sentiment analysis. It is also sometimes called "opinion mining", because the approach determines or predicts a person's attitude from the data available about them. It is the process of classifying the emotion of a text as neutral, negative, or positive. With the growing proliferation of social media, sentiment analysis has improved greatly and drawn researchers deeper into the area. A variety of useful information can be extracted through sentiment analysis of social networks: it helps advertising companies measure success and failure, predict consumer behaviour, and even forecast election outcomes (although this last use has been criticised by some as a potential route to unfair elections).
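A minimal sketch of lexicon-based opinion mining might look like the following. The word lists and scoring rule here are invented for illustration; real systems use large curated lexicons and handle negation, intensifiers, and (as this article stresses) sarcasm far more carefully.

```python
# Tiny hand-made lexicon; production systems use curated lexicons
# with thousands of scored entries.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def classify_sentiment(text):
    """Classify text as 'positive', 'negative', or 'neutral' by
    counting matches against the two word lists."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_sentiment("I love this great phone"))        # → positive
print(classify_sentiment("terrible battery, awful screen"))  # → negative
```

Note how a sarcastic post like "great, another delay" would be scored positive by this scheme; that failure mode is exactly why the sarcasm-detection research discussed below goes beyond simple lexicons.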

This method does come with limitations, however, the first being language availability. English lexicons are easy to create, but researchers working in other languages must first build dictionaries for those languages, which is the biggest problem they face. Even so, the approach has attracted a wide range of researchers in the field. Natural language processing helps get the best results from the opinion-mining process, and since a domain-specific corpus gives better results than a domain-independent one, more attention should be paid to domain-specific corpora. One should also mention fake comments and fake blogs that mislead users by posting false opinions on a topic, usually to damage the reputation of the target; this type of spam injects unreliable opinions into various applications.

The CNN Framework

One method, suggested in a paper published on arXiv (hosted by Cornell University), uses a convolutional neural network (CNN) framework. As we know, sarcasm detection may depend on sentiment and other cognitive aspects, so the authors include sentiment and emotion cues in their concept. They also argue that the personality of the opinion holder is an important factor in identifying sarcasm. To account for all these variables, they build separate models for sentiment, emotion, and personality. "The idea is to train each model on its corresponding benchmark dataset and, hence, use such pretrained models together to extract sarcasm-related features from the sarcasm datasets." The paper further notes that a CNN can automatically extract key features from the training data: "It grasps contextual local features from a sentence and, after several convolution operations, it forms a global feature vector out of those local features." CNNs do not require the hand-crafted features used by traditional supervised classifiers; such features are difficult to compute manually and must be carefully engineered to achieve satisfactory results. Instead, CNNs learn a hierarchy of local features that is essential to the context. "The hand-crafted features often ignore such a hierarchy of local features. Features extracted by CNN can therefore be used instead of hand-crafted features, as they carry more useful information." This method clearly uses artificial intelligence to detect the sentiments associated with content, and the framework keeps learning from the behaviour of the content that is posted.
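The convolution-then-pooling step the paper describes can be sketched in a few lines. Everything below (the vocabulary, the random embeddings and filters) is invented for illustration, and a real model would learn these weights by training; the point is only to show how sliding filters over word windows plus max-over-time pooling turns a sentence into a fixed-size feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with random word embeddings (dimension 4).
vocab = ["i", "love", "waiting", "in", "line", "for", "hours"]
emb = {w: rng.normal(size=4) for w in vocab}

def conv1d_features(tokens, filters, width=2):
    """Slide each filter over consecutive word-embedding windows and
    max-pool the responses, yielding one 'local feature' per filter."""
    x = np.stack([emb[t] for t in tokens])          # (seq_len, emb_dim)
    feats = []
    for f in filters:                               # f: (width, emb_dim)
        responses = [np.sum(x[i:i + width] * f)     # dot product per window
                     for i in range(len(tokens) - width + 1)]
        feats.append(max(responses))                # max-over-time pooling
    return np.array(feats)

filters = [rng.normal(size=(2, 4)) for _ in range(3)]   # 3 random filters
vec = conv1d_features("i love waiting in line for hours".split(), filters)
print(vec.shape)  # one pooled feature per filter -> (3,)
```

The fixed-size vector produced this way is what downstream layers (or, in the paper's setup, the combined sentiment/emotion/personality models) consume.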


The term sarcasm commonly refers to taunting someone, or commenting pointedly on their actions, in a context the speaker presumes the listener, or the person reading the post, already knows. Because comments and their context are often opaque to outsiders, this has become a complex problem today. As discussed earlier, sarcasm is now used frequently on social media, in tweets and elsewhere, so sentiment analysis or opinion mining has become an important player, and opinion-analysis applications have become a key means of handling sarcasm. Sarcasm is associated with various linguistic phenomena, such as clear gaps between stated and intended sentiment, or an imbalance in the emotions expressed; the humour arises from the contrast between these positive and negative moods.


The AI competition between the US and China

The Future of Sarcasm Detection with AI

Content censorship or moderation is an essential part of the online world, but for a number of reasons it is difficult to get right at scale. The moderators are, for the most part, bots or AI tools, and while extending this moderation to all forms of communication raises significant concern among users, it is also essential for the safe use of a platform. Detecting sarcasm through different tools and methods may be just the beginning of a long range of AI tools to come, but the arrival of algorithmic censorship brings two new developments in this field. The first is that more and more private communications will come under increasing levels of moderation. Sarcasm detection is, in simple terms, sentiment analysis, and once you master the art of analysing one sentiment, you will have little trouble mastering others. This brings us to the second area of development: increasingly realistic robots. One thing they have lacked over the years is emotion, and tools like these would be a great start on the way to successful robots and AI chatbots. Detection using algorithms may seem to have little military value, but consider how much more time people spend on the internet than years ago, and the growing role of open-source information, such as social media posts, in understanding what is happening in key areas where a military might operate. The future holds a series of interesting developments, not only in detecting sarcasm but in detecting other human emotions too, and discussions of algorithmic censorship likewise concern the full range of human emotions, not sarcasm alone.
Governments come into the picture again: as discussed earlier, the major countries and militaries of the world have been actively funding research to understand human sentiment. We should remind them that these tools must be used in a reasonable manner, and more parliamentarians should come forward with legislation on this subject, aiming at the safe use of these bots and tools.


Both these countries have been making constant advances in AI and in technology as a whole, and both governments have funded university research so that they can better understand what they are dealing with and frame legislation and rules accordingly. Researchers in China say they have created a sarcasm-detection AI that achieved state-of-the-art performance on a dataset drawn from Twitter. The AI uses multimodal learning that combines text and images, since both are often necessary to understand whether a post is sarcastic or inappropriate. The researchers argue that detecting sarcasm can help analyse sentiment and gauge public attitudes toward specific topics. As part of a challenge launched earlier this year, Facebook is using multimodal AI to recognise whether memes violate its terms of use. The researchers focus on the incongruity between text and image and then combine these signals to make predictions; the model also compares hashtags with the text of the tweet to gauge the mood the author wants to convey. "Particularly, the input tokens will give high attention values to the image regions contradicting them, as incongruity is a key character of sarcasm," the paper reads. "As the incongruity might only appear within the text (e.g., a sarcastic text associated with an unrelated image), it is necessary to consider the intra modality incongruity." The US military has likewise funded research into a new AI tool that has proven able to solve a problem traditionally very difficult for computer programmers: detecting human sarcasm. This would allow intelligence officers and agencies to better apply artificial intelligence to analyse trends while filtering out non-serious social media posts.
"Certain words in specific combinations can be a predictable indicator of sarcasm in a social media post, even if there isn't much other context," the University of Central Florida noted in a research paper. In essence, the team taught computer models to look for patterns that signal sarcasm, and combined this with training the program to pick out keywords from sequences that are more likely to indicate it. They trained the model by feeding it large amounts of data and then testing its accuracy. It is not the first time researchers have tried to use machine learning or artificial intelligence to detect sarcasm in short pieces of text, such as social media posts. The method is based on what the researchers call a self-attention architecture: it trains a sophisticated artificial intelligence program called a neural network to give more weight to some words depending on the words that appear around them and the task at hand.
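The "give more weight to some words" step can be illustrated with a toy softmax weighting. The per-token scores below are made up for illustration; in a real self-attention model they would be computed from learned query/key projections, not hand-assigned.

```python
import numpy as np

def softmax(z):
    """Turn arbitrary scores into weights that are positive and sum to 1."""
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

# Hypothetical per-token cue scores for the post "great, another monday":
# a higher score means the model treats the word as a stronger sarcasm
# cue in the context of its neighbours (values invented for illustration).
tokens = ["great,", "another", "monday"]
scores = np.array([2.0, 0.3, 1.1])

weights = softmax(scores)     # attention-style weights, summing to 1
for t, w in zip(tokens, weights):
    print(f"{t:10s} {w:.2f}")
```

Here "great," receives the largest weight, mirroring the paper's observation that certain words in certain combinations are predictable sarcasm indicators.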


Niharika Ravi, Former Research Intern, Indian Society of Artificial Intelligence & Law

Introductory Note

The outcome document of the International Conference on Artificial Intelligence and Education held in Beijing in May 2019, the Beijing Consensus on AI and Education, was constructed with contributions from around 500 international representatives from over 100 member states, UN agencies, academic institutions, civil society and private-sector members, and 50 government ministers and vice-ministers, in reaffirmation of the 2030 Agenda for Sustainable Development, and specifically SDG 4: ensuring inclusive and equitable quality education and promoting life-long learning opportunities for all.

Problem Addressed

The Beijing Conference recognised recent trends in AI that give it profound effects on all walks of life, and addressed the potential to harness the benefits of AI to reshape the core principles of the teaching-learning process.


The Beijing Consensus on AI and Education and NITI Aayog’s Response


Affirmations

The 2015 Qingdao Declaration had already committed to inculcating the use of information and communication technology in education as part of its commitment to SDG 4. The complexity of rapidly developing AI technology was weighed in the collective wisdom of the congregation in Beijing, pushing it to reaffirm UNESCO's humanistic approach to the use of AI and to prioritise the protection of human rights in effective human-machine collaboration for learning, sustainable development, and other goals. The Consensus observed that AI development must be both humane and human-controlled, ethical and equitable, transparent and non-discriminatory, and took a strong stand for the impact of AI on society and people to be monitored throughout value chains.

Goals

- Ensuring equitable access to education and AI
- Ensuring inclusive access to AI and education
- Ensuring lifelong learning opportunities for all
- Preventing the social divide from extending into a digital divide in the use of AI in education
- Abiding by standards of ethics and transparency

Recommendations

Recommendations on AI and Education were made in keeping with the aforementioned affirmations and in tandem with SDG 4, on the following broad fronts:

For Governments and Other Stakeholders in UNESCO's Member States

These recommendations are for the concerned parties to implement in accordance with their legislation, public policies and practices. Whole-government, inter-sectoral, and multi-stakeholder approaches to the planning and governance of AI in education were recommended to these parties, with foundational planning on meeting local challenges to SDG 4 while remaining mindful of the investment this front requires. Considering new models for delivering education to benefit all stakeholders is suggested, but with the caution that human interaction between teachers and students must remain the core of education, i.e. teachers cannot be replaced by machines. Governments are also advised to remain cognizant of AI's potential to enrich the learning process by transforming learning methodologies. These implementations are to be organised by drawing lessons from successful cases and scaling up evidence-based practices. In the same spirit, the integration of AI-related skills into curricula is advised and the development of local AI talent is advocated. The Beijing Consensus on AI and Education places significant stress on the fact that the development of AI education must not deepen the digital divide. In keeping with the letter and spirit of SDG 4, the policy pushes for technological advancement to facilitate learning anytime, anywhere, and potentially for anyone, on a personalised level.
The policy recommends paying attention to the needs of older people, and specifically older women, on this front. On similar lines, it recommends ensuring that AI provides high-quality education to all irrespective of gender, disability, socio-economic status, ethnic or cultural background, or geographical location. The inclusion of those with learning impairments or those studying in a language other than their mother tongue is proposed. Ethical, transparent, and auditable use of education data and algorithms is proposed, while the concerned bodies are urged to be mindful of the lack of systematic studies on AI and education.


These affirmations were connected to the recommendations made in Beijing by a set of common goals.


For International Organisations and Others Active in the Field

The congregation issued financing, partnership, and international cooperation guidelines to the concerned organisations to address the AI divide and disparity, with particular focus on Africa and the Least Developed Countries. This portion of the proposal promotes collective action, global and regional, for the equitable use of AI in education. The call to align international cooperation with national needs, in the context of developing both AI technology and AI professionals, strengthens the recommendation to create multi-stakeholder partnerships that mobilise resources to bridge the AI and digital divide.

For the Director-General of UNESCO

Critical Appraisal

The Beijing Consensus on AI and Education makes a fair attempt to answer the growing need to regulate the use of artificial intelligence. Given that it looks exclusively at the applications of AI to education, the policy stands out: it proposes ideas for tackling challenges in this field on local, national, and international fronts while prioritising the eradication of the digital divide, a hand-me-down of the previous era of ICT. The acknowledgement that there is a dearth of academic research on the implications of introducing AI into the education sector is reinforced by the focus on bridging the digital divide, for the use, and perhaps the misuse and inequitable use, of AI in education is more likely to come at the cost of the underprivileged on all fronts of society, as history has evidenced. The conference that produced the Beijing Consensus was held in May 2019, but the document's applications to the post-COVID era are stark and must be addressed. The sudden shift from physical classrooms to virtual ones propelled the discussion on the use of machine learning and artificial intelligence in classes from experimental IT or science fiction to immediate or potential reality. The multi-stakeholder approach proposed here, coupled with the multiple notes of caution on the use and misuse of AI in education, especially with regard to data security, AI ethics, and data privacy protection, concerns matters that need immediate national and international attention.

NITI Aayog's Response to AI and Education, with Focus on the Beijing Consensus

The NITI Aayog, a policy think tank established by the Government of India in 2015, introduced the national strategy for AI (#AIforAll) in a 2018 discussion paper, identifying five core areas to ensure AI progress in India.
The organisation's CEO said that the paper laid the groundwork for evolving the National Strategy for AI, and improved access to and quality of education was notably one of the five areas mentioned in it. The discussion paper identifies low retention rates and poor learning outcomes as issues that must be tackled on the Indian education front, along with multi-grade and multi-level classrooms, a lack of interactive pedagogy, ineffective remedial instruction and attention for drop-outs, and large teacher vacancies due to the unequal concentration of the teaching population across the country. Low adoption of existing technology was notably a feature on the list, even though EdTech is becoming a global phenomenon according to the paper. The paper proposes a two-pronged solution to these problems: the introduction of adaptive learning tools for customised learning, intelligent and interactive tutoring systems, automated personalisation, and predictive tools to inform pre-emptive action for students predicted to drop out of school, among others. This was before the Beijing Consensus. To fill the research gap illustrated by the Beijing Consensus, the NITI Aayog also sought an investment of Rs. 7,500 crore to boost research on and adoption of AI, with a high-level task force to oversee implementation in sectors including education, and to institute 5 research centres and 20 AI adoption centres. It was noted that such investment could potentially add $957 billion to the Indian GDP by 2035 and boost annual growth by 1.3 percentage points by the same year.


The establishment of an ‘AI for Education’ platform is proposed to promote the use of AI under SDG 4 by providing a comprehensive database of open-source AI-related materials, tools, and policies, among many other recommendations to further international cooperation in this field under UNESCO, taking into account national and local needs.


In February 2020, the think tank launched an AI module for school students in collaboration with NASSCOM, delivered through the Atal Tinkering Labs under the Atal Innovation Mission. The module comprises several videos, experiments, and activities that teach students the fundamentals of AI and prepare them for the digital era, in keeping with the provisions of the Beijing Consensus that advocate AI education. In July 2020, the organisation published a white paper on Responsible AI for All, which can be viewed as a direct result of the Beijing Consensus' stress on AI ethics, data privacy, AI security, and related concerns. The white paper called for the introduction of ethics in AI into mainstream university curricula to encourage the youth to explore unbiased and responsible uses of AI.

Concluding Observations

The Indian Government's think tank NITI Aayog had already formulated the basis of its national-level strategy on AI in 2018, prior to the Beijing Consensus. However, the actions taken by the think tank since the Beijing Consensus reveal that the NITI Aayog is, indeed, following the path laid down in that policy proposal. The absence of the NITI Aayog's acknowledgement of the digital divide, and of a strategy to bridge it, is glaring, especially after the Consensus' stress on it, and especially in its 2020 publications, by which time the pandemic had made the digital divide more evident than ever before. On the other hand, the think tank has made significant strides in addressing research in EdTech, and specifically the incorporation of AI into education across its many policies, which is worthy of mention and praise in its commitment to the Beijing Consensus and to SDG 4, the foundation stone of this discourse on the incorporation of advancing technology in education.

Sources

The Beijing Consensus on Artificial Intelligence and Education.
NITI Aayog, “National Strategy for Artificial Intelligence #AIforAll,” June 2018, http://www.niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf.
NITI Aayog, “Towards Responsible AI for All,” July 2020, https://niti.gov.in/sites/default/files/2020-11/Towards_Responsible_AIforAll_Part1.pdf.
The Economic Times, “Niti Aayog proposes Rs 7,500 crore plan for Artificial Intelligence push,” May 20, 2019, https://economictimes.indiatimes.com/news/economy/policy/niti-aayog-proposes-rs-7500-crore-plan-for-artificial-intelligence-push/articleshow/69403255.cms?from=mdr.


In November 2020, NITI Aayog proposed to set up an oversight body to play an enabling role across different aspects of AI and published a paper titled ‘Enforcement Mechanisms for Responsible #AIforAll.’ Research and education were among the spheres of influence that the body is set to have. This paper was issued as the second part of the aforementioned 2018 national strategy paper, and comments from stakeholders were invited on both.


Aeron Thomas, Editorial Intern, Indian Society of Artificial Intelligence & Law

Infosys is a multinational IT company, the second-largest in India after Tata Consultancy Services. It was founded in Pune, registered in 1981, and is headquartered in Bangalore. It mainly provides business consulting, information technology and outsourcing services, and its revenue reached well over 10 billion dollars in 2017. A company as big and successful as Infosys is expected to stay at the top of its game and keep abreast of technological advancements and inventions. In this era of artificial intelligence, Infosys has pioneered an AI platform, Infosys Nia. Infosys Nia is an enterprise-grade AI platform, designed for business and IT, that simplifies the AI adoption journey. It supports the end-to-end enterprise AI journey, from data management, digitisation of documents and images, and model development to operationalising models. The firm's initiative aims to remove the obstacles faced by industries that are unable to move from AI experimentation to production, by enterprises that struggle to derive insights from their documents, and by enterprises that find it difficult to manage their siloed data assets. The target market is customers who are struggling with the continuous shift in advancing technologies and firms unable to improve the efficiency of their production for lack of technology adoption.

What is Nia?

Infosys's Nia is a next-generation platform built to tackle breakthrough business problems such as forecasting revenues, forecasting the kinds of products that need to be built, understanding customer behaviour, deeply understanding the content of contracts and legal documents, and understanding compliance and fraud.
Such applications are much in demand as businesses boom and many look to enhance the technological efficiency and productivity of their operations. Infosys Nia is also furnished with tools that let it amass, ingest, and process as much information as it can, empowering companies to keep using past knowledge even as they grow and as their core systems undergo changes. Nia also helps them save resources, particularly with regard to the workforce and finances. Infosys Nia aids firms by making affordable offers to businesses struggling on the technological front: it opens the chance to use AI to automate repetitive tasks and responsibilities, allowing firms to become increasingly profitable. More than that, it empowers staff to be productive in their respective jobs.


Infosys's NIA: An Enterprise-Grade AI Platform


Data Management

Infosys Nia is powered by different technologies that allow it to generate insights quickly from various sources. It provides end-to-end support for complex data workflows, including data extraction, transformation and loading, and combining data from other sources to power further ML experimentation. Constraints in accommodating data with a variety of schema or storage types can be solved by implementing Lite versions with a light infrastructure footprint.

Machine Learning

Conversational AI

The Infosys Nia Chatbot brings conversational artificial intelligence features to existing and new enterprise applications. It leverages existing enterprise channels and newer ones, such as social, mobile and devices, to provide on-demand access to enterprise knowledge with ease. The new Nia Chatbot is delivered as an end-to-end offering with flexible deployment options. It expands the scope of automation beyond IT simplification and optimisation by helping interested clients build smart, efficient conversational user interfaces on their core business systems, which helps with time management and closes any communication gaps that might exist.

Deep Learning

Nia's deep learning capability drastically reduces the manual effort and lead time required to fine-tune models for specific problems, thereby addressing a prominent bottleneck in the AI lifecycle. Its key aspects are unsupervised feature learning and deep learning algorithms that can automatically learn feature representations from unlabelled data. Nia Deep Learning possesses state-of-the-art neural architectures that can learn good feature representations from large volumes of unlabelled data and can also be fine-tuned for supervised machine learning. It also addresses the inconsistency caused by the limitations of manual human effort in handling natural signals such as human speech, natural sound and language, and natural images and visual scenes. Deep learning also has enormous potential in many domains, such as healthcare, science, business and government.
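The phrase "automatically learn feature representations from unlabelled data" is easier to grasp with a toy example. The sketch below is not Infosys code; it learns a single feature direction from unlabelled 2-D points via power iteration on their covariance matrix (one-component PCA), a minimal cousin of the representation learning described above.

```python
def learn_feature_direction(points, iters=50):
    """Unsupervised feature learning in miniature: find the direction of
    greatest variance in unlabelled 2-D data (one-component PCA)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centred = [(p[0] - mx, p[1] - my) for p in points]
    # 2x2 covariance matrix of the centred data
    cxx = sum(x * x for x, _ in centred) / n
    cxy = sum(x * y for x, y in centred) / n
    cyy = sum(y * y for _, y in centred) / n
    v = (1.0, 0.0)
    for _ in range(iters):  # power iteration converges to the top eigenvector
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# Points lying on the line y = 2x: the learned feature is that direction,
# discovered without any labels.
direction = learn_feature_direction([(i, 2 * i) for i in range(-3, 4)])
```

Deep unsupervised methods such as autoencoders generalise this idea: instead of one linear direction, they learn many non-linear features, which can then be fine-tuned with labels.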
Nia Vision

Nia Vision is a GUI-based workflow that can easily be used to annotate data and train the machine for object localisation, detection and recognition in static and animated images and varied documents. This matters especially for sectors that keep large volumes of information stored in documents and images, such as the legal sector. Businesses find it difficult to analyse such information manually when the need arises; here Nia Vision can be of use, as the Nia AI platform leverages a deep learning framework that enables the visual identification of document imagery based on state-of-the-art object detectors. This is less costly, more accurate, and more reliable than traditional technical approaches such as OCR and template-based information extraction. This feature of Nia keeps errors in check and minimises any damage already caused.


The machine learning workbench and its included toolkit aid expert and citizen data scientists in streamlining the creation of AI models. The Nia Machine Learning workbench increases data scientists' productivity by orders of magnitude by applying automation to the data science workflow. It offers a broad range of machine learning algorithms with industry-leading speed and scale. Moreover, data analysts, developers, and even business users with limited knowledge of data science can build high-performing machine learning models thanks to its easy-to-use workbench.


Nia Model Ops

Taking a newly built AI or ML model into full-scale production can be a difficult task for data scientists. Getting models into deployment raises multiple challenges, such as infrastructure compatibility issues and the inability of the model to meet peak demand from a business perspective. Nia Model Ops, available as an integral component of the Nia platform, addresses these pain points. It integrates seamlessly with key Nia components to deploy, orchestrate and monitor models in either in-house or cloud-based deployments. It provides useful tools to track the AI assets and resources used to build models and to version them for audit and compliance purposes. It has a rich built-in dashboard to view the end-to-end model workflow and monitor models in both training and production stages; the visual dashboard helps in monitoring model performance, degradation and data drift.
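Data drift monitoring of the kind such dashboards surface can be reduced to a simple statistical rule. The sketch below is a generic illustration, not Nia's actual monitoring logic: it flags drift when the live data's mean wanders more than a chosen number of training-set standard deviations from the training mean.

```python
import statistics

def check_drift(train_values, live_values, threshold=0.5):
    """Return (drifted, shift): how many training standard deviations the
    live mean has moved from the training mean, and whether that exceeds
    the alert threshold."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    shift = abs(statistics.mean(live_values) - mu) / sigma
    return shift > threshold, shift

# A three-unit jump in the live mean trips the alert.
drifted, shift = check_drift([10, 11, 9, 10, 12, 8, 10], [13, 14, 12, 13])
```

Production monitors typically compare full distributions (e.g. with population stability indices) rather than means alone, but the feedback principle is the same: measure live inputs against the training baseline and alert when they diverge.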

Fraud Detection

Firms and enterprises falling prey to fraud is not a new phenomenon. To detect such misadventures, the AI gives valid insights into the reasons for fraud and prescribes steps to avoid such revenue leakage in the future. Such a feature is useful for any firm.

Conclusion

This bold and innovative initiative from Infosys opens up opportunities for a plethora of firms looking to expand their spectrum in the realm of technology and artificial intelligence. The inception of Nia dates to 2017, when Infosys and Blue Prism joined hands to help enterprises drive intelligent automation capabilities across multiple industries. It combines the intelligence of AI and natural language processing (NLP) with Blue Prism's intelligent automation platform and optical character recognition (OCR) capabilities. The partnership was a bold and lucrative move for both parties in bringing out something as productive and efficient as Nia. Various other tech giants across the globe, such as Wipro, are competing in the race to provide the best AI-based technology to customers. What Nia actually does is collect and aggregate organisational data from people, processes and legacy systems into a self-learning knowledge base. From this base, every bit of information processed into data can easily be accessed and applied for the varying purposes of the concerned firm; Nia then automates repetitive business and IT processes so that employees are free to solve higher-value customer problems that require creativity, passion, and imagination. Nia also comprises a data platform, automation platform, and knowledge platform as its key components, along with AI capabilities such as ML, data analytics, and robotic process automation.
Moreover, Nia offers organisations a basis for deploying AI capabilities to simplify complex operations. Another platform Infosys offers is the Knowledge-Based Engineering (KBE) platform, which uses AI to improve human decision-making: it collects information about engineering products and processes it to develop new products. Infosys has reported more than 50 clients and 150+ deployments across different sectors. After all, the basic purpose of such an AI is to relieve the pain points and anomalies that exist. Nia is still evolving, and will not stop until the expected perfection is achieved.


Nia Knowledge

As is often said, knowledge is power, and this holds for enterprises as well: one of the common business challenges of enterprises today relates to the retention and reuse of enterprise knowledge. There is a definite need for processes, methodologies and tools specifically built for representing and building enterprise knowledge so it can be reused by various business applications. Nia Knowledge provides capabilities to build, consume and reuse enterprise knowledge across business domains and functions within an organisation. The Nia Knowledge module of the Nia AI platform helps enterprises organise information into an ontology-based knowledge base so that additional knowledge can be inferred and queried. The taxonomy is standardised, and the content is organised for reusability across multiple business domains within the enterprise.


Medha Singh Yadav, Editorial Intern, Indian Society of Artificial Intelligence & Law

As people evolve, so does technology, from reusable rockets to self-driving cars. Another example of this evolution is the artificial intelligence assistant at home. In the beginning, text was the main way to interact with an assistant app: typing a phrase triggered a response. Today, voice dominates. Assistant apps and smart speakers continually listen for their wake words. By default, the words "Hey Siri," "OK Google," "Hey Google," and "Alexa" are the standard triggers on their respective devices, though users can customise their wake words to a degree: "Alexa" can become "Echo," "Amazon," or simply "Computer." The ability to make these changes can be particularly useful if someone named Alex or Alexis lives in the home. Wake words depend on a special algorithm that is always listening for a specific word or phrase so that a phone, smart speaker, or other device can start communicating with a server to do its job. A wake word should be long enough not to be triggered by mistake, simple enough that a human can say it, and distinct enough that a machine can recognise it. This is why you cannot change your wake word to anything you please.
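The always-listening loop described above can be sketched in a few lines. This is a deliberate simplification with invented names: real detectors score raw audio frames with a compact on-device neural model rather than matching already-transcribed words.

```python
# User-selectable wake words, as on an Echo device (illustrative set).
WAKE_WORDS = {"alexa", "echo", "amazon", "computer"}

def first_wake_word(frames, wake_words=WAKE_WORDS):
    """Scan a stream of (already transcribed) frames and return the index
    at which a wake word occurs, or -1 if the device should stay asleep."""
    for i, frame in enumerate(frames):
        if frame.lower().strip() in wake_words:
            return i  # only now would the device start talking to a server
    return -1

idx = first_wake_word(["play", "some", "music", "Alexa", "weather"])
```

The key design point survives the simplification: everything before the wake word is discarded locally, and only speech after the trigger is sent to the server.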


How Do Artificial Intelligence Assistants Interact with People?


Voice assistants do not really understand what you are saying; they simply listen for their wake word and then start communicating with a server to complete a job. Natural language processing (NLP) is a form of artificial intelligence that helps technology interpret human language. Many companies have their own voice-enabled virtual assistants, such as Amazon's Alexa, Apple's Siri, Google Assistant and Microsoft's Cortana. Like its competitors, Cortana can be described as the next stage in human-computer interaction, and the competition among these companies is neck and neck. These voice-enabled assistants make our lives easier, whether it is setting an alarm or looking up records or activities. According to Microsoft, Cortana is currently used by 148 million people.

Cortana's Key Characteristics

Cortana is the AI-powered digital assistant that first appeared on Windows phones in 2014 before expanding to PCs running Windows 10. Cortana is designed to learn a user's habits and anticipate their needs.

The more Cortana learns about a user, the more detailed its responses become. For instance, Cortana can give updated gate assignments for a scheduled flight, offer insights gathered about the buyer a salesperson is meeting, or provide marketing intelligence drawn from social media sites. Cortana can draw up lists, pull information from LinkedIn, such as professional background and company details, and plan meetings accordingly. It can also track upcoming travel reservations. Microsoft's new Play My Emails feature for Outlook gives Cortana the ability to read and relay messages.

Microsoft is using Artificial Intelligence and Cortana to upgrade Dynamics 365 CRM

Microsoft Dynamics 365 is a product line of enterprise resource planning (ERP) and customer relationship management (CRM) intelligent business applications. Different tools in Dynamics 365 use Cortana's AI without directly referring to the assistant. Microsoft has stated that its goal is to eliminate silos between marketing and sales while consolidating applications through unified navigation and user experience. Users are shown all the applications to which they have access, eliminating data silos while preserving their familiarity with dashboards and navigation tools. Among the features important to sales and marketing professionals are customer insights and relationship insights. Customer insights give marketing and sales a complete picture of the company's customers: by bringing together data from different sources, creating individual customer profiles and analysing KPIs, organisations can gain insight into their campaigns and activities, measure success and even get suggestions on ways to improve engagement. Relationship insights are particularly useful for the sales team.
Using AI, sentiment analysis, natural language processing and data retrieved from Dynamics, sales professionals can get detailed reports on the state of their relationships with their customers. Actionable insights powered by AI can help in a variety of ways; for example, they may reveal that it is a good time to pursue a customer for another sale, or that action should be taken promptly to retain an account. Microsoft is using machine learning (ML) to speed up the training of these AI models. In 2016 the company began looking at using field-programmable gate arrays (FPGAs) within servers as a way of increasing performance. While a general-purpose CPU like an Intel Xeon can be programmed to run any algorithm, a dedicated fixed-function ASIC (application-specific integrated circuit) is generally the fastest implementation; however, since ASICs allow little change to the underlying design, an FPGA is a compromise between performance and flexibility. Microsoft has stated that it is making hardware-accelerated Azure Machine Learning models that run on FPGAs available.


For instance, if a user regularly asks Cortana to check the morning traffic, Cortana will start to offer the information unprompted. Cortana's inclusion in Dynamics 365 CRM means that sales and marketing have more tools at their disposal than ever before. Cortana can maintain the user's schedule, set up reminders, display customer records, create new records or search for contacts.


According to a new patent, Cortana will use AI and ML to analyse message data from various sources (Microsoft Teams, Skype, WhatsApp, Twitter, emails, calls and text messages are all depicted in the drawings) to learn the significance of each message. Cortana would score the messages and generate a text summary that would be converted to speech and sent to a listening device, which could be a phone, car, headphones or smart speaker.

Conclusion


Microsoft has the potential and money to drive further innovation and development in the area of voice-enabled virtual assistants, and it is well positioned to expand the scenarios Cortana can accomplish. The scale of Cortana will also be determined by how effective Microsoft is at persuading major third-party vendors to develop skills for and support the technology. Speech recognition is becoming a standard feature in enterprise applications, and advances in natural language understanding and voice synthesis will give organisations much greater flexibility in choosing the best fit for human-computer interactions. There are more modifications and refinements to come in the evolution of Cortana, and we can expect to see more enterprise applications and conversational features in the near future.


Zainub Chauhan, Editorial Intern, Indian Society of Artificial Intelligence & Law

The digital advertising industry ingests and processes tens of millions of data signals per second, generating vast volumes of data. While the industry is hyper-focused on cookie deprecation, the third-party cookie is really only one advertising input; there are numerous other data signals, both online and offline, available to optimise media buying. Algorithms based on artificial intelligence (AI) can be tailored to brands' specific goals, allowing marketers to find pockets of performance within vast quantities of data and optimise media buying to drive real business outcomes. By combining custom AI approaches that incorporate a brand's key performance indicators (KPIs), and shaking off our third-party cookie dependence, we can welcome a new era of transparent and effective programmatic media.

User matching through first-party data signals

One way AI and custom algorithms will shape media buying is by matching converted customers with prospects who have similar digital patterns. Rather than focusing on who customers are, their age and gender, or where they live, AI looks past basic traits to focus on the most important behavioural signals of a likely customer. Two customers can have completely different profiles but ultimately want the same thing. Where conventional audience targeting would miss this opportunity, algorithmic matching allows brands to identify and take advantage of these shared needs.
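At its core, matching converted users to look-alike prospects is a similarity search over signal vectors. The sketch below is a generic, stdlib-only illustration of that idea; the profile data, ids and threshold are invented, and production systems compare millions of signals rather than three.

```python
import math

def cosine(u, v):
    """Cosine similarity between two behavioural-signal vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def lookalike_prospects(converted, prospects, threshold=0.8):
    """Return prospect ids whose signal pattern resembles a converted
    customer's, regardless of demographic profile."""
    return [pid for pid, vec in prospects.items()
            if cosine(converted, vec) >= threshold]

# Each dimension might be page views, dwell time, basket adds (hypothetical).
matches = lookalike_prospects(
    [1.0, 0.0, 1.0],
    {"a": [1.0, 0.0, 1.0], "b": [0.0, 1.0, 0.0], "c": [0.9, 0.1, 1.1]},
)
```

Note how prospect "c" matches despite not being identical: it is the shape of the behaviour, not the demographic label, that drives the match.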


How custom algorithms will shape the future of media buying


Algorithmic customer matching is currently based on first-party data signals from retailers, brands or publishers. Moving forward, an explosion of new types of data is anticipated from connected cars and homes, internet-of-things devices, virtual and augmented reality, and biometrics, all of which will feed into this process. AI will be essential to manage this data, and there must always be an emphasis on balancing the relationship between AI and ethics to make sure advertising works better for everyone while user identities are protected.

Aligning media buying with brand objectives

A second way tailored algorithms will make media buying more powerful is by aligning activity with brand objectives to deliver real business performance. Brands decide on the outcomes they want to achieve, allowing multi-metric KPIs and offline data inputs to be incorporated into customised algorithms to make sure media buying is focused on reaching those goals.

Correspondingly, once desired outcomes are defined, custom algorithms can run thousands of real-time tests to determine the precise bid required to win media placements on an ad exchange. The performance of media buys can be continuously measured, with results fed back into the algorithms to create a closed loop of optimisation. While AI is crucial to enhancing and streamlining digital media buying, it by no means removes humans from the process. Success relies on the initial configuration and continuous management of campaigns by highly skilled people, from data scientists to media planners. Algorithmic success is about finding harmony between human and machine: optimising towards goals set and overseen by real people, to ensure the ethical application of the technology.

Dynamically optimising creative for performance

The role of custom algorithms does not end with buying the right impression at the right price; it extends to ad execution, and in particular to optimising ad creative to maximise the chances of conversion. Sophisticated algorithms are used to select the most relevant and effective creative elements, according to a range of data points, and to assemble advertisements that appeal to people at different stages of the purchase journey. Volvo, for instance, recently used AI to generate cost-effective conversions from a digital advertising campaign in Norway. Custom algorithms were used to test creative elements, including logos, layouts, and messaging, at scale, to determine which creative version drove the most conversions at the lowest cost. As a result, Volvo saw a 440% increase in audiences configuring new cars and booking test drives, and made more efficient use of its advertising budget with a 66% reduction in cost per acquisition (CPA).
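Creative testing at scale, as in the Volvo example, is commonly framed as a multi-armed bandit problem. The sketch below uses a simple epsilon-greedy policy over hypothetical creative variants; the variant names and counts are invented for illustration and do not describe Volvo's actual system.

```python
import random

def choose_creative(stats: dict, epsilon: float = 0.1) -> str:
    """Epsilon-greedy pick: usually the best-converting variant, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    # Conversion rate = conversions / impressions (guard new, unseen variants).
    return max(stats, key=lambda v: stats[v]["conv"] / max(stats[v]["imps"], 1))

def record(stats: dict, variant: str, converted: bool) -> None:
    """Feed the outcome back in, closing the optimisation loop."""
    stats[variant]["imps"] += 1
    stats[variant]["conv"] += int(converted)

# Hypothetical logo/layout/messaging combinations and their running tallies.
stats = {"A": {"imps": 1000, "conv": 30},
         "B": {"imps": 1000, "conv": 55},
         "C": {"imps": 1000, "conv": 12}}
print(choose_creative(stats, epsilon=0.0))  # "B" has the best conversion rate
```

Each served impression is recorded back into `stats`, so the policy keeps shifting delivery towards whichever creative version is currently winning, exactly the closed loop the article describes.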
In my opinion, as technology evolves and the volume of data in digital advertising grows, the creative applications of custom algorithms will continue to expand in ways we may not yet be able to imagine. What we can be certain of is that AI will be an essential part of the toolbox of any marketer looking to optimise media buying and deliver better business outcomes.


AI can increase efficiency by automatically directing spend towards areas of strong performance. The technology continuously tests itself to shift delivery and improve execution. Algorithms can predict which impressions will perform well, based on a large range of factors, including the length of time since a user last visited the advertiser's website, and generate far higher conversion rates than can be achieved even through manual optimisation.
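A recency factor like the one mentioned above might enter a conversion model as a feature in a logistic scorer. The coefficients below are illustrative only, not fitted to any real campaign data.

```python
import math

def p_convert(days_since_visit: float,
              bias: float = -2.0, w_recency: float = -0.3) -> float:
    """Logistic estimate of conversion probability from visit recency.

    The negative recency weight encodes the assumption that the longer ago
    a user visited the advertiser's site, the less likely they convert.
    Coefficients are hypothetical placeholders.
    """
    return 1.0 / (1.0 + math.exp(-(bias + w_recency * days_since_visit)))

# A user who visited yesterday scores far above one who visited a month ago,
# so spend is automatically directed towards the stronger impression.
print(p_convert(1), p_convert(30))
```

In practice such a model would combine many features, with spend steered towards impressions whose predicted probability clears the campaign's value threshold.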

