Artificial Emotional Intelligence: The Future of Autonomous Systems




Ean Mikale, J.D., is the Founder of Infinite 8 Aeronautics, a commercial drone research and technology firm and an Infinite 8 Institute company.

Acknowledgements: We would like to thank the IBM Global Entrepreneur Program, especially our program guide and mentor, Meagan Harrington, who went above and beyond for every ask; the Omaha-based Startup Collaborative; the Nebraska Department of Labor; and my family, for persevering.



CONTENTS

WELCOME
INTRODUCTION
A BRIEF HISTORY OF ARTIFICIAL INTELLIGENCE
IBM's WATSON: A MODERN SUPERCOMPUTER
ARTIFICIAL EMOTIONAL INTELLIGENCE
PROPOSED LEVELS OF AUTONOMY
COMMERCIALIZATION POTENTIAL
Case Study: Safety Scout
SOCIAL IMPACT
CONCLUSION


WELCOME

By Ean Mikale, J.D., Founder, Infinite 8 Aeronautics

At the cusp of a new age, many uncertainties exist, especially in the space of autonomous machines and their place among mankind. As automation makes its way across every continent, and into every community across the globe, there will be a new way of assessing and creating value. Value will no longer be based on the amount of labor one exerts, as labor will soon become something that is infinitely available through machines. Innovation, therefore, will come not from the labor that is done, but rather from the overall creativity and intangible value that man extracts from his creative thoughts, manifested through each machine.

Nevertheless, there must be a buffer during the transitional shift from our current human-labor and fossil-fuel driven society toward a machine-driven society. There must be a way to assist in the human integration of autonomous machines. In this case, the machines we are referring to are not only autonomous aerial vehicles, otherwise known as commercial drones, but also land-based and water-based vehicles, wearable devices, mobile devices, low-earth-orbit vehicles, and other internet-connected things. It is the concept of a "thing" that is at the heart of the conversation. How do you turn a "thing" into "something" that is not a hollow shell, but rather something that can feel? Turning a thing into something is necessary to help human beings more readily accept and integrate autonomous machines into human life.

Through our ongoing research, we have learned a great many things. However, the one that still stands out the most is the lack of in-depth research available in the area of artificial emotional intelligence. As a result, it is our hope to bridge the gap in this new field that we are actively blazing, in order to help machines understand "us", and in turn to better understand our place in the universe throughout the process. Each of us is unique and can never be replaced, and likewise no two machines are the same, although they may look so. We must approach the dawn of a new civilization, a machine-driven civilization, with humility at our power to create or destroy.

It is our organization's intent to elicit the safe and thoughtful integration of commercial drone technology into human society. It is also our desire to see the safe integration of artificially intelligent machines into human society. We seek to accomplish our purpose not because we desire to rid mankind of its subsistence, but because we believe this has already occurred, and we only seek to buffer the blow as inevitable change takes its toll.

Ean Mikale, J.D.
Founder, Infinite 8 Aeronautics

Infinite 8 Institute, L3C: The design and finance of social impact systems



INTRODUCTION

Artificial Intelligence, also known as "AI", is defined as emerging technologies that can understand, learn, and then act based on received information. Various forms of artificial intelligence include commercial drones, digital assistants, chat services, and machine learning. According to key findings by PricewaterhouseCoopers, 63% of surveyed respondents believed that AI will help solve complex problems that face society.[1] We also believe that AI has the ability, like any human creation, to become something that is used for good or for ill. It is our hope and aspiration to utilize the technology to maximize the utility of autonomous machines in the lives of everyday people. And it is with this same optimism that we pursue the betterment of society through, and in anticipation of, the rise of AI.

A BRIEF HISTORY OF ARTIFICIAL INTELLIGENCE

Modern AI, as we know it, began with the famous Nikola Tesla at New York's Madison Square Garden in 1898. Using a small, radio-transmitting box, Tesla was able to maneuver a tiny ship about a pool of water and even flash its running lights on and off, all without any visible connection between the boat and controller. When asked about the boat's potential as an explosive-delivery system, Tesla responded, "You do not see there a wireless torpedo; you see there the first race of robots, mechanical men which will do the laborious work of the human race."[2]

Almost half a century later, Alan Turing is said to have founded modern computing with his paper on the universal computing machine. Turing subsequently proposed the Turing Test, which assesses whether a machine shows, in fact, any intelligence.[3] During the same era, AI gained its current meaning in 1956 at Dartmouth College, where the world's foremost experts convened for a think tank on intelligence simulation. Following the publicized research of the group, a surge of government funding became available for the study of non-biological intelligence. This wave fell off for a few decades, but revved back up in the eighties with a new wave of AI funding in the UK and Japan. In 1993, MIT pursued the Cog Project to build a humanoid robot, which became a lucrative project for the US government. In 1997, when Deep Blue defeated Kasparov at chess, AI had risen to a whole new plateau.

The AI industry would continue to explode with the release of a paper by scientists showing an improvement in image recognition using Graphics Processing Units, or "GPUs", which were normally used for high-graphics gaming; GPUs tend to contain much higher processing power than CPUs and can accelerate some software by 100x.[4] In less than two years after this discovery, image classification by scientists improved from 72% to 96%, slightly higher than the human accuracy of 95%.[5] Subsequently, DeepMind scientists published a paper on deep neural networks and tree search, which was followed by AlphaGo's sound victory in the complex game of Go over the reigning champion, Lee Sedol, in March 2016.[6]

The technological capabilities and utility of artificial intelligence have accelerated rapidly with the advancements in computing and processing technology. In the near future, such computational breakthroughs will allow for deep learning capabilities beyond what we can currently fathom. Tesla's vision was a race of machines that would relieve man from the burdens of laborious work, freeing mankind to explore one's innate creativity and drive. While our advancements in the field of AI have been modest in our civilization's fairly primitive stage, the fact that we are only at the beginning is what makes the prospective future seem like something worth working towards.

Notes:
1. Bothun, D. (2017). Bot.Me: A revolutionary partnership – How AI is pushing man and machine closer together. PwC. http://www.pwc.com/us/en/industry/entertainment-media/publications/consumer-intelligence-series/assets/pwc-botme-booklet.pdf
2. Cheney, Margaret (1981). Tesla: A Man Out of Time. Simon & Schuster. https://r3zn8d.files.wordpress.com/2012/03/tesla-man-out-of-time-by-margaret-cheney.pdf
3. Turing, A. M. (1936). On Computable Numbers, with an Application to the Entscheidungsproblem. http://www.cs.ox.ac.uk/activities/ieg/e-library/sources/tp2-ie.pdf
4. Krewell, Kevin (2009). What's the Difference Between a CPU and a GPU? NVIDIA Blog. https://blogs.nvidia.com/blog/2009/12/16/whats-the-difference-between-a-cpu-and-a-gpu/
5. Corea, Francesco (2017). A Brief History of AI: An Outline of What Happened in the Last 60 Years in AI. Medium. https://medium.com/cyber-tales/a-brief-history-of-ai-baf0f362f5d6
6. Silver, D., et al. (2016). "Mastering the game of Go with deep neural networks and tree search". Nature, 529: 484-489.


IBM's WATSON: A MODERN SUPERCOMPUTER

Watson, named after IBM's first industrialist, Thomas J. Watson, is a question-answering computing system that IBM built to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies.[7] In 2011, the Watson computer system competed on Jeopardy! against former winners Brad Rutter and Ken Jennings, winning the first-place prize of $1 million. In 2016, IBM's Watson made image recognition available, using deep learning algorithms to analyze images and provide insights into visual content. Thus far, the Watson platform has been used for recognizing images of individuals, food, dog breeds, satellite imagery, and even insurance claims.

In order to process such massive amounts of data, IBM's Watson employs a cluster of ninety IBM Power 750 servers, each of which uses a 3.5 GHz POWER7 eight-core processor with four threads per core. In total, the system has 2,880 POWER7 processor threads and 16 terabytes of RAM.[8] Watson continues to be used in a wide variety of areas. Our research during the IBM Global Entrepreneur Program for cloud-based startups gave us the opportunity to work closely with Watson. While working with Watson on a particular use case, which we shall discuss in the following paragraphs, we discovered Watson's ability to decipher the emotional state of its subjects, both human and non-human. What we found was mind-boggling to us: we had stumbled upon true artificial emotional intelligence, or what we refer to as "EI".
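As an illustration of the workflow involved, the sketch below shows how a hosted visual recognition service of this kind is typically called over HTTP from Python. The endpoint URL, query parameters, credential, and response fields are placeholders for this sketch only, not IBM's documented Visual Recognition interface.

import requests

# Hedged sketch of calling a hosted image-classification service.
# The URL, parameters, and response shape below are placeholders,
# not IBM Watson's documented API.
ENDPOINT = "https://example-visual-recognition.cloud/api/v3/classify"
API_KEY = "YOUR_API_KEY"  # placeholder credential

with open("storefront.jpg", "rb") as image:
    response = requests.post(
        ENDPOINT,
        params={"api_key": API_KEY, "version": "2016-05-20"},
        files={"images_file": image},
        timeout=30,
    )
response.raise_for_status()

# Assumed response shape: a list of detected classes with confidence scores.
for item in response.json().get("classes", []):
    print(item.get("class"), item.get("score"))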

ARTIFICIAL EMOTIONAL INTELLIGENCE

Thus far, in the field of AI, the closest area of work to EI has been the realm of "Sentiment Analysis". Sentiment Analysis refers to the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information. This field is widely applied to the written voice of the consumer, in materials such as product reviews or survey responses.
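To make the scope of that field concrete, the following is a minimal sentiment-analysis sketch using the open-source NLTK VADER analyzer; the review sentences are invented for illustration, and the snippet is not part of our EI system.

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()

reviews = [
    "The drone arrived quickly and the camera is fantastic.",
    "Battery died after ten minutes. Very disappointed.",
]

for review in reviews:
    scores = analyzer.polarity_scores(review)  # keys: neg, neu, pos, compound
    if scores["compound"] >= 0.05:       # conventional VADER thresholds
        label = "positive"
    elif scores["compound"] <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:8s} {scores['compound']:+.2f}  {review}")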


We find this field to be inadequate for our needs, as Sentiment Analysis involves the review of secondary sources, such as a document or web application, to determine the overall contextual polarity or emotional reaction of the speaker. Sentiment Analysis does not deal with the direct interpretation and analysis of a live subject, which left us as researchers at a cliff as we pursued the concept of emotional intelligence.

We also sought out the work of our colleagues in different areas of the AI field whose work in some way touches upon emotion. We discovered the Kuri Robot, a robot companion that sits between an Amazon Echo and a pet dog. The robot can play music, send messages, and patrol the property for anyone who is or is not supposed to be there. The developers sought to create the following personality traits within the robot: humility, earnestness, and curiosity. The robot further conveys such traits through movement and sound.[9] The shortcoming of this use case is its focus on the artificial character traits mimicked by the robot, rather than on the emotional state of the subject. We found this method inadequate for the interpretation of emotional responses in subjects, as it does not utilize visual recognition cues, body gestures, or tones to analyze the emotional state of the subject and provide a pre-determined output based on the emotional input received from the subject.

Next, we discovered the work of Mark Sagar, CEO of Soul Machines. Their first project was BabyX, a computer simulation of a baby that can learn and respond to the world much as a regular baby might. BabyX's eyes light up when you show it something interesting, and it cries when you later take it away. The project has revealed that the human mind can be tricked into empathizing with non-living entities. Once again, as impressive as this work has been, we similarly found such methods to be inadequate, as they focus on accurately simulating the emotional response of BabyX, rather than on accurately interpreting the physiological and emotional response of the subject.

Additionally, in 2015, the New Zealand technology firm Touchpoint Group used machine learning to help its AI system, called "Radiant", recognize and even simulate anger. The purpose is to develop an automated system that can defuse angry customer service calls. Radiant is incapable of experiencing genuine anger; it is only programmed to mimic and repeat these nasty conversations. An overt focus on anger, rather than on positive or good human traits, is a variance between our current work and those of predecessors in the field of artificial intelligence.[10]

Finally, we explored the proposed concept of robotic Empowerment, which is based on Isaac Asimov's three laws of robotics. The laws state the following: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey orders given it by human beings except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[11] We believe that these laws may provide some level playing field for developers, and may provide human beings with the security many seek concerning machine-human relationships. However, none of laws one through three is even innately possible without the integration of artificial emotional intelligence into machine learning; machines cannot otherwise accurately interpret inputs to deliver the accurate outputs necessary to maximize the machine-human relationship.

More recently, due to the real or perceived increased potential for dangerous AI, research scientists at the UK-based University of Hertfordshire created a computer program implementing the Empowerment concept. The program empowers the robot by providing it with access to alternative choices, while also empowering humans by interpreting their choices. In a dangerous situation, the robot would be programmed to try to keep the human alive and free from injury. Yet to empower a machine with choices, and to analyze human choices, without the ability to understand the emotional state of the human, would leave too much room for error.

Notes:
7. "DeepQA Project: FAQ". IBM. Retrieved July 18, 2017. https://www.research.ibm.com/deepqa/deepqa.shtml
8. "Is Watson the smartest machine on earth?". Computer Science and Electrical Engineering Department, University of Maryland Baltimore County. February 10, 2011. Retrieved February 11, 2011. https://www.csee.umbc.edu/2011/02/is-watson-the-smartest-machine-on-earth/
9. Murphy, Mike (2017). "Finding Baymax: Robotic Companies are Hiring Pixar Engineers to Make Their Robots Friendlier." Quartz Magazine. Retrieved July 17, 2017.
10. "Touchpoint Using Artificial Intelligence to Defuse Anger." The Australian. Retrieved July 20, 2017. http://www.theaustralian.com.au/business/technology/touchpoint-using-artificial-intelligence-to-defuse-anger/news-story/658525219d55e1d509ce99a79003d1f0
11. "Isaac Asimov's Three Laws of Robotics". Auburn University. Retrieved July 20, 2017. https://www.auburn.edu/~vestmon/robotics.html


The results of our research reveal that there has been little focus on utilizing artificial intelligence and deep learning to help machines build empathy for human beings and relate to their immediate needs, concerns, or even the relative safety of their subjects. For the successful, safe, and harmonious integration of artificial intelligence, such as fully autonomous commercial drones, into human society, there must be an ability to share empathy with the human race through the analysis of the various emotional outputs of a subject. In line with this mode of pursuing artificial intelligence, we have trained IBM's Watson to feel, or at least to interpret human emotion, for the first time. We were able to observe Watson's visual recognition capabilities interpreting human emotion from images at an average accuracy of 73%. Accuracy ranged as high as 90% when interpreting states such as happiness, but as low as 53% when interpreting other, more complex states, such as sadness. The error margin averaged 21%, which is unacceptably high for current use in commercial applications; however, we expect this to decrease rapidly with further research and development. We believe emotional intelligence has the most potential when utilized in embedded systems, such as commercial drones and other autonomous, mobile-capable machines. Further research, and access to current advances in embedded computing and graphics processors, such as an integration of the forthcoming Intel i9 Extreme processor with the NVIDIA Jetson TX2 Graphics Processing Unit (GPU) module, will have the capability to greatly enhance the accuracy and available spectrum of emotional empathy experienced by future autonomous systems.
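For readers who want to see how per-emotion figures like those above can be tallied, the short sketch below computes per-class and average accuracy from labeled predictions. The emotion labels and prediction pairs are hypothetical placeholders, not our actual evaluation data.

from collections import defaultdict

def per_class_accuracy(pairs):
    # pairs: iterable of (true_label, predicted_label)
    correct = defaultdict(int)
    total = defaultdict(int)
    for true_label, predicted in pairs:
        total[true_label] += 1
        if predicted == true_label:
            correct[true_label] += 1
    return {label: correct[label] / total[label] for label in total}

# Invented placeholder predictions for illustration only.
predictions = [
    ("happiness", "happiness"),
    ("happiness", "happiness"),
    ("sadness", "neutral"),
    ("sadness", "sadness"),
]

by_class = per_class_accuracy(predictions)
for emotion, acc in by_class.items():
    print(f"{emotion}: {acc:.0%}")
print(f"average: {sum(by_class.values()) / len(by_class):.0%}")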

PROPOSED LEVELS OF AUTONOMY

While the self-driving car sector of autonomous systems has a scale for measuring autonomy, developed by the U.S. Department of Transportation and the National Highway Traffic Safety Administration, no similar system exists that covers all autonomous vehicles for commercial use, such as commercial aerial drones, low-earth-orbit drones, walking or roving land-based robots, and/or aquatic machine submersibles.[12] In light of this gap in the evolution of autonomous systems, we propose a new standard for the development of autonomous systems, the Seven Levels of Machine Automation, adapted to include the epitome of machine learning, i.e., Artificial Emotional Intelligence. Each level is included below, with an illustrative software encoding following the list.

0) Level 0 (No Automation): This category is reserved for non-commercial-grade autonomous systems and includes the vast majority of hobbyist and off-the-shelf, entertainment-purposed autonomous systems, such as toy drones. The operator handles the navigation using either a mobile or remote-controlled device.

1) Level 1 (Pilot Assistance): Commercialized autonomous systems in this category can conduct missions and/or tasks that require the automation of navigation, but they lack the ability to avoid obstacles, and the operator must be ready to take over those functions if called upon by the situation. That means the operator must remain aware of what the commercial drone is doing and be ready to step in if needed.


2) Level 2 (Partial Assistance): The commercialized autonomous system handles navigation but immediately lets the operator take over if he or she detects objects or events the autonomous system will not or cannot respond to. In these first three levels, the operator is responsible for monitoring the surroundings, traffic, weather, and environmental conditions.

3) Level 3 (Conditional Assistance): The commercialized autonomous system possesses partial sensory systems and monitors its surroundings while taking care of all navigation in certain environments, but the operator must be ready to intervene if the autonomous system requires it.

4) Level 4 (High Automation): The commercialized autonomous system handles navigation and monitors the surroundings in a wide range of situations, but not all, such as severe weather and/or extreme environments. The operator switches on the autonomous control only when it is safe to do so; after that, the operator's direct control is not required.

5) Level 5 (Full Automation): The operator only has to set the waypoints or coordinates and launch the commercialized autonomous system, while the system handles all other tasks. The autonomous system can navigate to any urban or rural land, air, or water-based destination and make its own decisions along the way.

6) Level 6 (Emotional Intelligence): The commercialized autonomous system has the ability to empathize with the human subjects it comes in contact with, as well as with the operator. The operator has the ability to manually switch off, or override, EI in the event the autonomous system misinterprets the emotional cues of its subject.

Notes:
12. "U.S. Department of Transportation Releases Policy on Automated Vehicle Development." U.S. Department of Transportation. May 30, 2013. Retrieved July 19, 2017. https://www.transportation.gov/briefing-room/us-department-transportation-releases-policy-automated-vehicle-development
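The illustrative encoding referenced above follows. The class and helper names are assumptions for this sketch only; they are not part of any published standard.

from enum import IntEnum

# Illustrative encoding of the proposed Seven Levels of Machine Automation.
class MachineAutonomyLevel(IntEnum):
    NO_AUTOMATION = 0           # operator navigates manually
    PILOT_ASSISTANCE = 1        # automated navigation, no obstacle avoidance
    PARTIAL_ASSISTANCE = 2      # operator takes over for unhandled events
    CONDITIONAL_ASSISTANCE = 3  # partial sensing, operator ready to intervene
    HIGH_AUTOMATION = 4         # self-monitoring in most, but not all, conditions
    FULL_AUTOMATION = 5         # waypoints only; system decides en route
    EMOTIONAL_INTELLIGENCE = 6  # empathizes with subjects and operator

def operator_must_monitor(level: MachineAutonomyLevel) -> bool:
    """Per the proposal, the operator monitors surroundings in Levels 0-2."""
    return level <= MachineAutonomyLevel.PARTIAL_ASSISTANCE

print(operator_must_monitor(MachineAutonomyLevel.CONDITIONAL_ASSISTANCE))  # False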

COMMERCIALIZATION POTENTIAL

The commercialization opportunities for machines that are capable of recognizing human emotion through visual recognition technology are endless. Such technology will allow a myriad of new use cases and enhanced customer service experiences across industries, ranging from telecommunications, social media, security, brick-and-mortar retail, e-commerce, professional sports, entertainment and gaming, healthcare, and social services to education.

For example, EI can be integrated into customer service experiences to provide customer service agents with actionable intelligence, allowing them to better understand the customer perspective, serve customers with greater empathetic accuracy, and create more repeat and satisfied customers. Additionally, recognizing the facial expressions of customers while they shop in brick-and-mortar stores or online will allow stores to save on inventory and better individualize customer experiences, by recommending similar items found in the store or online and by creating customer profiles based on their emotional reactions to various items.

EI can also be utilized in the professional sports world to provide actionable intelligence concerning the emotional state of players, allowing further insight toward enhancing the overall performance of players based on their reactions to various environmental stimuli. Players and coaches will also gain better intelligence into the mind of their opponent, in order to adapt one's game strategy. EI is a natural fit for personal gaming as well: it will allow for more immersive gameplay by providing the gamer with customized in-game computer responses, recommendations, and inferences regarding how to best balance difficulty levels.

EI will bring many new benefits to the healthcare industry. Doctors will gain the ability to use EI-enabled machines to determine patient attitudes, symptoms, and other physical conditions that are not always readily apparent, even to the patient themselves. This will also allow for a more patient-centered experience, by making patient analysis more accurate in real time and by providing medical professionals with recommendations based on the immediate state of health of the patient.

Telecommunications and internet providers, as well as content creators, i.e., news media, bloggers, and motion picture studios, will gain the ability to cater content toward the current emotional state of the content consumer. This will provide content providers with the ability to offer more relevant information to consumers, providing them with data they might otherwise be unaware they are even interested in.

Social service providers, empowered with the use of EI, will gain the ability to provide enhanced, more robust, and individualized services to targeted populations. This will allow social service providers to better serve vulnerable populations with enhanced intelligence concerning the emotional state of the beneficiary, and with preventative intelligence concerning the behavioral health or emotional state of such beneficiaries, saving the government and private sector from unforeseen costs.

Finally, EI will entirely change the way we learn. EI will provide teachers and software with the ability to better understand the learning patterns of students, and to adapt to those patterns to keep interest high, spark curiosity, and heighten student engagement, resulting in better overall student performance. Student surveys can even be taken in an academic setting without requiring students to raise their hands; their facial expressions will be enough to validate or invalidate a consensus.

Case Study: Safety Scout

Safety Scout, a concept in the development phase, is the world's first autonomous drone for personal safety. The purpose of Safety Scout is to give parents and caretakers an extra pair of eyes on their child or loved one, regardless of where they are. The commercial drone will possess the ability to escort the subject to a pre-designated location, or to track the subject at a pre-determined distance for a pre-determined amount of time. The autonomous system will also have the ability to use artificial intelligence and machine learning to analyze the probability of danger, by using EI and facial recognition to analyze the emotional state of the subject, as well as of non-targeted subjects within the immediate environment of the autonomous system. As a result, the autonomous system will have the ability to warn of potential harm, and to contact parents, guardians, caregivers, schools, and emergency responders upon a determination of an emergency, such as a child having an asthma attack, bullying, or a kidnapping in progress. A simplified sketch of such an alerting rule is shown below.
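The following is a hypothetical sketch of the kind of decision rule described above. The emotion labels, confidence thresholds, weighting, and notification stub are illustrative assumptions, not the actual Safety Scout implementation.

# Hypothetical danger-assessment rule for a personal-safety drone.
DISTRESS_EMOTIONS = {"fear", "sadness", "anger"}
ALERT_THRESHOLD = 0.8  # assumed confidence required before alerting

def notify(contacts, message):
    # Placeholder for SMS/voice/emergency-dispatch integration.
    for contact in contacts:
        print(f"ALERT to {contact}: {message}")

def assess_frame(subject_emotions, bystander_emotions, contacts):
    """subject_emotions / bystander_emotions: dicts of emotion -> confidence."""
    danger = max(
        (score for emotion, score in subject_emotions.items()
         if emotion in DISTRESS_EMOTIONS),
        default=0.0,
    )
    # Bystander distress also raises the estimate, but is weighted lower.
    bystander = max(
        (score for emotion, score in bystander_emotions.items()
         if emotion in DISTRESS_EMOTIONS),
        default=0.0,
    )
    danger = max(danger, 0.5 * bystander)
    if danger >= ALERT_THRESHOLD:
        notify(contacts, f"Possible emergency detected (confidence {danger:.2f}).")
    return danger

assess_frame({"fear": 0.91, "happiness": 0.03}, {"anger": 0.4},
             ["guardian", "school office"])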

SOCIAL IMPACT

In light of recent critiques of the artificial intelligence space and autonomous systems, there is a large divide between the understanding of a lay person and the current capabilities of artificial intelligence and autonomous systems. There is a coldness, and a lack of understanding, between human beings and machines inspired by human beings and/or nature. Because of this lack of understanding of, or inability to relate to, artificially inspired technology, it is extremely important to address this divide if, indeed, artificial intelligence is ever to successfully integrate into human society. It is our belief, based upon our research into EI and into the wider field of AI, that human empathy, understood through visual recognition technology, is what will allow machines to better understand their human operators and counterparts, create a shared and unified system of understanding between humans and machines, and increase the overall utility and acceptance of such devices in our everyday lives.

CONCLUSION

In conclusion, the rise of AI is undeniable. However, the question has yet to be answered concerning the smartest and most ethical way to safely integrate artificial intelligence and autonomous systems into human society. Our work shows that the best way to do so may be not to focus solely on machines gaining the capability to mimic human behavior and emotion, but rather on machines understanding it and adapting their own behaviors or reactions based on such intelligence. For example, if a care-taking robot sees that its operator's grandson is afraid of it, the robot would gain the ability to empathize with the grandson and bring the child a toy to create a psychological and relational connection. If a robot only mimics our behavior and movement, but does not have the ability to understand and truly empathize with human existence, then we will create hollow shells, incapable of helping us to further humanity and discover our full potential as a race of beings in the vast cosmos.

For more information on EI and autonomous systems, please contact us at info@Infinite8institute.com, or visit us online at www.infinite8institute.com.



