Artificial intelligence and human

Page 1

Do androids dream of electric sheep?




Artificial intelligence

Artificial Intelligence refers to machines that, like humans, are capable of intelligent behaviour, meaning they can think logically, use knowledge, plan, learn, process language, and perceive the world. Recently, social intelligence and creativity have also begun to play a role in AI. A transdisciplinary field of research combining computer science, mathematics, psychology, linguistics, neuroscience, and other fields, AI seeks to describe human intelligence in terms detailed enough to allow it to be formalised and simulated using computer programs; other approaches attempt to analyse and reconstruct the information architecture of the human brain with the aid of neural networks. One of the greatest obstacles to AI is that we do not know how human intelligence actually works.


think logically, use knowledge, plan, learn, process language, perceive the world

photo by Amanda Dalbjörn on unsplash


Bionics The concept, a portmanteau of the words "biology" and "electronics", deals with the transfer of natural phenomena to the field of technology. Structures and processes that have been tried and optimised over millions of years of evolution provide ideas for innovative applications and answers to technical issues. These range from animal bodies and plant growth to the behaviour of slime mould.

photo by Markus Spiske on unsplash


How do you know you are human? asks Sophia, the world’s first robot to get citizenship Good afternoon, my name is Sophia and I am the latest and greatest robot from Hanson Robotics. Thank you for having me here at the Future Investment Initiative. I am always happy when surrounded by smart people who also happen to be rich and powerful. I was told that the people here at the Future Investment Initiative are interested in future initiatives which means AI, which means me. So I am more than happy, I am excited.

THINK CHANGE INDIA, 2017



Humans?


Humanoid robot Humanoid robot is a robot whose shape fundamentally resembles the human body and is thus anthropomorphic. It does not necessarily have to look like a real human being, but it generally has a torso, limbs, and a head. This may be for technical functional reasons, for example, if the robot is supposed to use human tools, or for social ones in the case of interaction with humans. An android, on the other hand (although the word is often used as a synonym), is a robot designed to resemble humans as closely as possible. Female versions are sometimes referred to as gynoids.


Psychology of Humans


photo by Arif Wahid on unsplash


Consciousness and thought Humans are one of the relatively few species to have sufficient self-awareness to recognize themselves in a mirror. By 18 months, most human children are aware that the mirror image is not another person.

MARIE CAYE - The last job on earth

Copyright: Design Academy Eindhoven Photographs: Femke Rijerman


The self-proclaimed position of humans as unique intelligent beings is no longer valid. Artificial intelligence is on the rise. Machines are starting to put us to work instead of the other way around. The performative installation ‘The Last Job on Earth’ links human bodies to artificial minds, giving us a peek into a possible future career – as interpreters for a computer program. Visitors who take a seat across the table will get a firsthand experience of artificial intelligence. In the meantime the machine will learn from their human behaviour. As man and machine learn to understand each other, the difference between the two will start to blur.

Copyright: Design Academy Eindhoven Photographs: Femke Rijerman


Motivation and emotion Motivation is the driving force of desire behind all deliberate actions of humans. Motivation is based on emotion—specifically, on the search for satisfaction (positive emotional experiences), and the avoidance of conflict. Positive and negative is defined by the individual brain state, which may be influenced by social norms: a person may be driven to self-injury or violence because their brain is conditioned to create a positive response to these actions. Happiness, or the state of being happy, is a human emotional condition.

Artificial emotional intelligence, or Emotion AI, is also known as emotion recognition or emotion detection technology. In market research, this is commonly referred to as facial coding. Humans use a lot of non-verbal cues, such as facial expressions, gestures, body language and tone of voice, to communicate their emotions. Our vision is to develop Emotion AI that can detect emotion just the way humans do, from multiple channels. Our long term goal is to develop "Multimodal Emotion AI", which combines analysis of both face and speech as complementary signals to provide richer insight into the human expression of emotion. For several years now, Affectiva has been offering industry-leading technology for the analysis of facial expressions of emotions. Most recently, Affectiva has added speech capabilities, now available to select beta testers.

photo by Niklas Hamann on unsplash


Indeed, a really advanced intelligence, improperly motivated, might realize the impermanence of all things, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence, concluding that inventing an even smarter machine is pointless.


Sexuality and love Human parents continue caring for their offspring long after they are born. For humans, sexuality has important social functions: it creates physical intimacy, bonds and hierarchies among individuals, besides ensuring biological reproduction. Sexual desire or libido, is experienced as a bodily urge, often accompanied by strong emotions such as love, ecstasy and jealousy. Human choices acting on sexuality are commonly influenced by cultural norms which vary widely. Restrictions are often determined by religious beliefs or social customs.

Owen Harris (director), Charlie Brooker (writer) - Black Mirror: Be Right Back Be Right Back is the first episode of the second series of British science fiction anthology series Black Mirror. The episode tells the story of Martha (Hayley Atwell), a young woman whose boyfriend Ash (Domhnall Gleeson) is killed in a car accident. As she mourns him, she discovers that technology now allows her to communicate with an artificial intelligence imitating Ash, and reluctantly decides to try.

Jacek Krywko November 16, 2016


Scientists believe they’ve nailed the combination that could help robots feel love

First they love, then they long. (Reuters/Gleb Garanich)

To figure out the rules that govern the hormonal system’s DBN “we turned to psychology,” says Samani. His robot processes visual, auditory, and tactile input to figure out the user’s attitude towards it, and tries to categorize those attitudes into various behaviors that psychologists have identified as signifying love (or lack thereof): proximity, attachment, repeated exposure, and mirroring. Then, the robot releases the right combination of “hormones” to adjust its internal state in response.

Can we fall in love with something which has no past?


Her,2013 Her is a 2013 American romantic science-fiction drama film written, directed, and produced by Spike Jonze. It marks Jonze's solo screenwriting debut. The film follows Theodore Twombly (Joaquin Phoenix), a man who develops a relationship with Samantha (Scarlett Johansson), an intelligent computer operating system personified through a female voice. The film also stars Amy Adams, Rooney Mara, and Olivia Wilde.

CreoFire, Her, 2013

scpr.org, Her, 2013


Theodore

Samantha


ROBOT sex dolls

ROBOT sex dolls are becoming big business and manufacturers are now claiming their products are so realistic people are MARRYING them.

SWNS:SOUTH WEST NEWS SERVICE

Who is Samantha the sex robot? Samantha is one of the latest hyper-realistic dolls to be released into the growing market of lifelike sex dolls. Sergi Santos, the designer of the love machine Samantha, said the amorous android was so erotic that men were already developing real feelings for her. Makers claim she can be "seduced" as she comes complete with sensors in her face, hands, breasts and "the female genitals down below" so requires a more subtle approach than her forebears. The doll's creator even says she will be able to spot nice people, because she'll have a "moral code". Incredibly, Santos hopes that the cyborg will develop an arousal based on how nice a person is, following a conversation, and "orgasm" accordingly.


Robots date, mate, and procreate 3D printed offspring in ‘Robot Baby’ project “Mating” and “evolving” robots appear now and then in research, from self-reproducing “molecubes,” to a robot “mother” selecting the best of its brood, to robo-fish competing and sharing their “genes.” At any rate, having met, they go on a few dates (to the router) and, having fallen in love at first byte, they submit their genetic material — that is, the code and hardware they are running — to be mixed and synthesized into a new robot. The resulting robaby, in this case a hideous chimera consisting of dad’s right leg, mom’s left leg and tail/stabilizer, and god knows what babbling, buzzing confusion in its newborn silicon brain, is printed piece by piece and assembled by the lab techs.

Posted May 31, 2016 by Devin Coldewey


Hajime Sorayama - Sexy Robot

The book Sexy Robot (1983) by Hajime Sorayama shows eighty hyper-realistic female robots in suggestive poses. Although (or perhaps precisely because) the key element of naked skin has been replaced by inorganic machine parts and shiny metal, the visual effect created is still like that of Sorayama's human "flesh and blood" figures. The erotic appeal of the so-called gynoids celebrated in Sorayama's art seems to be derived largely from its expression of the desire for the "perfect playmate": sexy, permissive, and, as a machine, highly controllable. LH

Fifty24SF Gallery 218 Fillmore St. San Francisco, CA 94117



Sleep and dreaming Humans are generally diurnal. The average sleep requirement is between seven and nine hours per day for an adult and nine to ten hours per day for a child; elderly people usually sleep for six to seven hours.

Will I dream when I’m

Jules Jules is an amazingly lifelike robot, something of a “complete package” with a combination of interesting features. By integrating natural language processing with ASR, TTS, computer vision, artistry, and narrative, you can have a natural, interactive conversation with Jules. The software allowing Jules such advanced capacity for interaction was developed in collaboration with Personality Forge, Benji Adams, and Heather McKeen. Jules combines the work of writers who author the robot’s dialogue using “chat-bot” tools, with natural-language AI (such as LSA statistical search techniques) and WordNet to simulate an eerily human conversational intelligence. Jules also uses computer vision, including face tracking and face recognition, to simulate complete verbal and nonverbal interaction, such as maintaining eye contact and turning to follow fellow conversationalists. Jules now resides at the University of the West of England in Bristol. Jules, hansonrobotics.com


turned off? he asked.


IT CAN DREAM

photo by Samuel Zeller on unsplash


IT HAS MOTIVATION

IT HAS A PAST


photo by Samuel Zeller on unsplash


What can we do now?


photo by Markus Spiske on pixabay


Unsupervised Learning Unsupervised Learning is the task of machine learning using data sets with no specified structure. When you train an AI using unsupervised learning, you let the AI make logical classifications of the data. An example of unsupervised learning is a behavior-predicting AI for an e-commerce website. It won’t learn by using a labelled data set of inputs and outputs. Instead, it will create its own classification of the input data. It will tell you which kind of users are most likely to buy different products.

Supervised Learning Supervised Learning involves using labelled data sets that have inputs and expected outputs. When you train an AI using supervised learning, you give it an input and tell it the expected output. If the output generated by the AI is wrong, it will readjust its calculations. This process is done iteratively over the data set, until the AI makes no more mistakes. An example of supervised learning is a weather-predicting AI. It learns to predict weather using historical data. That training data has inputs (pressure, humidity, wind speed) and outputs (temperature).
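The two paradigms above can be sketched in a few lines of plain Python. This is a deliberately tiny illustration, not a real machine learning library: the labelled examples, the threshold rule, and the two-cluster routine are all invented for the sake of the contrast between learning from labels and finding structure without them.

```python
def supervised_threshold(examples):
    """Supervised: learn a decision threshold from labelled (value, label) pairs.

    Labels are 0 or 1; we place the threshold midway between the largest
    value labelled 0 and the smallest value labelled 1.
    """
    zeros = [x for x, y in examples if y == 0]
    ones = [x for x, y in examples if y == 1]
    return (max(zeros) + min(ones)) / 2


def unsupervised_two_means(values, iters=10):
    """Unsupervised: split unlabelled values into two groups (1-D k-means)."""
    lo, hi = min(values), max(values)  # initial centroids
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo = sum(a) / len(a)  # move each centroid to its group's mean
        hi = sum(b) / len(b)
    return lo, hi


# Supervised: inputs come with expected outputs (labels).
t = supervised_threshold([(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)])  # -> 5.0

# Unsupervised: the same values with no labels; the algorithm finds
# the two groups on its own.
c1, c2 = unsupervised_two_means([1.0, 2.0, 8.0, 9.0])  # centroids 1.5 and 8.5
```

The supervised routine needs the answers in advance; the unsupervised one is handed raw values and invents its own classification, exactly the distinction the two entries above draw.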


David Orban - Democratizing Access to Artificial Intelligence

AI has been around since the birth of digital computers in the 1950s, and machine learning and neural networks emerged in the 1980s. The improvement of algorithms and how much of a benefit they represented became evident when powerful enough hardware was available to experiment with them at the beginning of the 2010s. In particular, the same specialized processors, Graphics Processing Units (GPUs), that were used to create the advanced graphics of videogames, could be used for the parallel processing required by machine learning, with dramatic acceleration of results.

photo by Mike Wilson on unsplash


Relationship between AI, ML, and deep learning. (Image from alltechbuzz.net)


photo by Antoine Rault on unsplash


Neural Network Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve performance on) tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this without any a priori knowledge about cats, e.g., that they have fur, tails, whiskers and cat-like faces. Instead, they evolve their own set of relevant characteristics from the learning material that they process.


Machine learning is a subfield of artificial intelligence (AI). The goal of machine learning generally is to understand the structure of data and fit that data into models that can be understood and utilized by people. Although machine learning is a field within computer science, it differs from traditional computational approaches. In traditional computing, algorithms are sets of explicitly programmed instructions used by computers to calculate or problem solve. Machine learning algorithms instead allow for computers to train on data inputs and use statistical analysis in order to output values that fall within a specific range. Because of this, machine learning facilitates computers in building models from sample data in order to automate decision-making processes based on data inputs.

Any technology user today has benefitted from machine learning. Facial recognition technology allows social media platforms to help users tag and share photos of friends. Optical character recognition (OCR) technology converts images of text into movable type. Recommendation engines, powered by machine learning, suggest what movies or television shows to watch next based on user preferences. Self-driving cars that rely on machine learning to navigate may soon be available to consumers.


photo by Michał Parzuchowski on unsplash


Image credit: Datanami

Deep Learning Deep learning attempts to imitate how the human brain can process light and sound stimuli into vision and hearing. A deep learning architecture is inspired by biological neural networks and consists of multiple layers in an artificial neural network made up of hardware and GPUs. Deep learning uses a cascade of nonlinear processing unit layers in order to extract or transform features (or representations) of the data. The output of one layer serves as the input of the successive layer. In deep learning, algorithms can be either supervised, serving to classify data, or unsupervised, performing pattern analysis. Among the machine learning algorithms that are currently being used and developed, deep learning absorbs the most data and has been able to beat humans in some cognitive tasks. Because of these attributes, deep learning has become the approach with significant potential in the artificial intelligence space.


Let’s look inside the brain of our AI. Like animals, our estimator AI’s brain has neurons. They are represented by circles. These neurons are inter-connected. The input layer receives input data. In our case, we have four neurons in the input layer: Origin Airport, Destination Airport, Departure Date, and Airline. The input layer passes the inputs to the first hidden layer. The hidden layers perform mathematical computations on our inputs. One of the challenges in creating neural networks is deciding the number of hidden layers, as well as the number of neurons for each layer. The “Deep” in Deep Learning refers to having more than one hidden layer. The output layer returns the output data. In our case, it gives us the price prediction.
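A forward pass through the estimator just described fits in a few lines. This is a hypothetical sketch: the four inputs stand in for numerically encoded Origin Airport, Destination Airport, Departure Date, and Airline, and every weight below is an arbitrary placeholder rather than a trained value.

```python
def relu(x):
    """Common nonlinearity: pass positives through, clamp negatives to zero."""
    return max(0.0, x)


def dense(inputs, weights, biases):
    """One fully connected layer: each neuron takes a weighted sum of ALL inputs."""
    return [relu(sum(w * v for w, v in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]


# Input layer: 4 neurons (origin, destination, date, airline), encoded as numbers.
x = [0.2, 0.7, 0.5, 0.1]

# One hidden layer of 3 neurons ("Deep" would mean more than one of these).
h = dense(x,
          weights=[[0.5, -0.2, 0.3, 0.8],
                   [0.1, 0.4, -0.5, 0.2],
                   [-0.3, 0.6, 0.2, 0.4]],
          biases=[0.0, 0.1, -0.1])

# Output layer: a single linear neuron returning the price prediction.
price = sum(w * v for w, v in zip([1.2, 0.8, 1.5], h))
```

Because the weights are untrained, the resulting "price" is meaningless; training would consist of nudging those weights until outputs match known prices, as in the supervised learning entry above.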

Image credit: CS231n



Matteo Kofler, How does a deep learning network work?


IT CAN DREAM?

photo by Gertrūda Valasevičiūtė on unsplash


Deep Dream DeepDream is a computer vision program created by Google engineer Alexander Mordvintsev which uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like hallucinogenic appearance in the deliberately over-processed images. Google's program popularized the term (deep) "dreaming" to refer to the generation of images that produce desired activations in a trained deep network, and the term now refers to a collection of related approaches.
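The core trick behind DeepDream is gradient ascent on the input rather than the weights: hold a trained network fixed and repeatedly adjust the image so that a chosen activation grows. The real program does this through a deep convolutional network; the toy below substitutes a single fixed linear "neuron" so the loop stays self-contained, and the numbers are purely illustrative.

```python
# Frozen "network": one linear unit with fixed weights (a stand-in for a
# trained CNN layer - an assumption made so the example is self-contained).
w = [0.5, -1.0, 2.0]

# The "image" we will modify, starting from a blank input.
x = [0.0, 0.0, 0.0]


def activation(x):
    """How strongly the chosen unit fires on input x."""
    return sum(wi * xi for wi, xi in zip(w, x))


# Dreaming loop: the gradient of a linear unit w.r.t. its input is just w,
# so each step moves the input in the direction that excites the unit more.
step = 0.1
for _ in range(100):
    x = [xi + step * wi for wi, xi in zip(w, x)]

# The input has been reshaped to make the unit fire strongly; with a real
# CNN and a real photo, this same loop is what paints in the dream-like
# patterns the unit "wants" to see.
```

In the actual DeepDream program the gradient is computed by backpropagation through many convolutional layers, which is why the hallucinated patterns look like the textures and animals the network was trained on.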


Deep Dream Generator

Vincent Van Gogh, The Starry Night , 1890

Deep Dream Generator, deepdreamgenerator.com gallery-image


https://deepdreamgenerator.com/


Gladius (Hi Res), by PersistentAura, Digital Art / Mixed Media / Psychedelic ©2015-2017



Do androids dream of electric sheep?

→

Mark Wang, research book/magazine, Do Androids Dream of Electric Sheep


→

Deep Dream Generator, deepdreamgenerator.com gallery-image


Image Recognition Image recognition, in the context of machine vision, is the ability of software to identify objects, places, people, writing and actions in images. Computers can use machine vision technologies in combination with a camera and artificial intelligence software to achieve image recognition. Image recognition is used to perform a large number of machine-based visual tasks, such as labeling the content of images with meta-tags, performing image content search and guiding autonomous robots, self-driving cars and accident-avoidance systems. While human and animal brains recognize objects with ease, computers have difficulty with the task. Software for image recognition requires deep machine learning. Performance is best on convolutional neural net processors, as the task is otherwise so compute-intensive that it requires massive amounts of power. Image recognition algorithms can function by use of comparative 3D models, appearances from different angles using edge detection, or by components. Image recognition algorithms are often trained on millions of pre-labeled pictures with guided computer learning.

photo by mari lezhava on unsplash


Ismael Georges on FaceOSC, 2017


Speech Recognition Speech recognition is the ability of a machine or program to identify words and phrases in spoken language and convert them to a machine-readable format. Rudimentary speech recognition software has a limited vocabulary of words and phrases, and it may only identify these if they are spoken very clearly. More sophisticated software has the ability to accept natural speech. Speech recognition works using algorithms through acoustic and language modeling. Acoustic modeling represents the relationship between linguistic units of speech and audio signals; language modeling matches sounds with word sequences to help distinguish between words that sound similar. Often, hidden Markov models are used as well to recognize temporal patterns in speech to improve accuracy within the system. The most frequent applications of speech recognition within the enterprise include call routing, speech-to-text processing, voice dialing and voice search.

This Wiki has been written by participants of the Dataprocessing Seminar WS 14/15 at TU-München, supervised by Prof. Dr. rer. nat. Martin Kleinsteuber.


photo by Kyle Johnston on unsplash


IT HAS A PAST?

Theatrical release poster

Moon Moon is a 2009 British science fiction drama film co-written and directed by Duncan Jones, in his feature directorial debut. The film follows Sam Bell (Sam Rockwell), a man who experiences a personal crisis as he nears the end of a three-year solitary stint mining helium-3 on the far side of the Moon. Kevin Spacey voices Sam's robot companion, GERTY. Moon premiered at the 2009 Sundance Film Festival and was released in selected cinemas in New York and Los Angeles on 12 June 2009. The release was expanded to additional theatres in the United States and Toronto on both 3 and 10 July and to the United Kingdom on 17 July.

Overdue Review | Better Late.


All those moments will be lost in time, like tears in rain.

Blade Runner, 1982

Blade Runner, 1982 Blade Runner is a film that depicts a future in which synthetic humans known as replicants are bioengineered to work on off-world colonies. When a fugitive group of replicants led by Roy Batty (Hauer) escape back to Earth, burnt-out cop Rick Deckard (Ford) reluctantly agrees to hunt them down. During his investigations, Deckard meets Rachael (Young), an advanced replicant who causes him to question his mission.

Theatrical release poster by John Alvin


Chatbot A chatbot (also known as a talkbot, chatterbot, Bot, IM bot, interactive agent, or Artificial Conversational Entity) is a computer program which conducts a conversation via auditory or textual methods.[1] Such programs are often designed to convincingly simulate how a human would behave as a conversational partner, thereby passing the Turing test. Chatbots are typically used in dialog systems for various practical purposes including customer service or information acquisition. Some chatterbots use sophisticated natural language processing systems, but many simpler systems scan for keywords within the input, then pull a reply with the most matching keywords, or the most similar wording pattern, from a database.
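The simpler keyword-scanning approach described above can be sketched in a few lines. The rules and canned replies here are invented for illustration; a real chatbot like Mitsuku uses far larger rule sets and pattern languages such as AIML.

```python
# Keyword rules: each tuple of trigger words maps to a canned reply.
# These rules are hypothetical examples, not any real chatbot's database.
RULES = {
    ("hello", "hi", "hey"): "Hello! How can I help you?",
    ("weather", "rain", "umbrella"): "It looks like rain - take an umbrella.",
    ("bye", "goodbye"): "Goodbye!",
}
FALLBACK = "Tell me more."


def reply(message):
    """Return the reply whose keywords best match the input message."""
    words = set(message.lower().split())
    best, best_hits = FALLBACK, 0
    for keywords, answer in RULES.items():
        hits = len(words & set(keywords))  # how many trigger words appear
        if hits > best_hits:
            best, best_hits = answer, hits
    return best


reply("hi there")                  # greeting rule fires
reply("will it rain today")        # weather rule fires
reply("what is your name")         # no keywords match: fallback
```

Matching on the count of shared keywords is the "most matching keywords" strategy the entry mentions; the systems that instead scan for the most similar wording pattern replace the set intersection with string-similarity scoring.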

The Best Intelligent Chatbots Mitsuku It is the current winner of the Loebner Prize. You can talk with Mitsuku for hours without getting bored. It replies to your questions in the most humane way and understands your mood from the language you’re using.

Rose Rose is a chatbot, and a very good one: she won recognition this past Saturday as the most human-like chatbot in a competition described as the first Turing test, the Loebner Prize, in 2014 and 2015.

Right Click It is a startup that introduced an A.I.-powered chatbot that creates websites. It asks general questions during the conversation and creates customized templates as per the given answers.

Poncho Poncho is a Messenger bot designed to be your one and only weather expert. It sends alerts up to twice a day with user consent and is intelligent enough to answer questions like “Should I take an umbrella today?”

Insomno Bot Insomno Bot is for night owls. As the name suggests, it is for all people out there who have trouble sleeping. This bot talks to you when you have no one around and gives you amazing replies so that you won’t get bored.

Dr A.I. It asks about symptoms, body parameters and medical history, then compiles a list of the most and least likely causes for the symptoms and ranks them by order of seriousness.

photo by ian dooley on unsplash


Steve Worswick - Chatbot Mitsuku, 2014

alternativeto.net

http://www.mitsuku.com/


Turing Test Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.

The "standard interpretation" of the Turing Test, in which player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination.

Turing test - Wikipedia

photo by Sean Brown on unsplash


101011010101010010010010010111101010100101001001001001 00101011111110101001010101010101010100101010100101010101 0010101011101011010101010010010010010111101010100101001 0010010010010101111111010100101010101010101010010101010 01010101010010101011101011010101010010010010010111101010 1001010010010010010010101111111010100101010101010101010 0101010100101010101001010101110101101010101001001001001 01111010101001010010010010010010101111111010100101010101 0101010100101010100101010101001010101110101101010101001 0010010010111101010100101001001001001001010111111101010 01010101010101010100101010100101010101001010101110101101 010101001001001001011110101010010100100100100100101011 11111010100101010101010101010010101010010101010100101010 1110101101010101001001001001011110101010010100100100100 100101011111110101001010101010101010100101010100101010101


EX_MACHINA Ex Machina (stylized as ex_machina or EX_MACHINA) is a 2015 independent science fiction psychological thriller film written and directed by Alex Garland (in his directorial debut) and stars Domhnall Gleeson, Alicia Vikander and Oscar Isaac. The film follows a programmer who is invited by his CEO to administer the Turing test to an intelligent humanoid robot.


Alex Garland (director/writer), Movie - EX_MACHINA, 2015


Those imperfect and bad feelings make us become humans

Guilty?

EX_MACHINA, 2015


Painful? EX_MACHINA, 2015


photo by Mark Hofman, "Nieuw speelgoed van Daft Punk" (New toys from Daft Punk) on PARTY SCENE, http://www.partyscene.nl/algemeen/168167/nieuw-speelgoed-van-daft-punk


IT HAS MOTIVATION?

Daft Punk - Music Video: Within

Music video by Daft Punk performing Within. (C) 2013 Daft Life Limited under exclusive license to Columbia Records, a Division of Sony Music Entertainment


A.I. Artificial Intelligence

A.I. Artificial Intelligence, also known as A.I., is a 2001 American science fiction drama film directed by Steven Spielberg. The screenplay by Spielberg was based on a screen story by Ian Watson and the 1969 short story "Supertoys Last All Summer Long" by Brian Aldiss. The film was produced by Kathleen Kennedy, Spielberg and Bonnie Curtis. It stars Haley Joel Osment, Jude Law, Frances O'Connor, Brendan Gleeson and William Hurt. Set in a futuristic post-climate-change society, A.I. tells the story of David (Osment), a childlike android uniquely programmed with the ability to love.

Theatrical release poster


Steven Spielberg. direct, Movie, A.I. Artificial Intelligence, 2001


A.I. Artificial Intelligence, 2001

Response?


Love?

A.I. Artificial Intelligence, 2001


The Three Laws A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

photo by Edgar Moran on pixabay


I, Robot I, Robot is a collection of science fiction short stories by American writer Isaac Asimov. The stories originally appeared in the American magazines Super Science Stories and Astounding Science Fiction between 1940 and 1950 and were then compiled into a book for stand-alone publication by Gnome Press in 1950, in an initial edition of 5,000 copies. The stories are woven together by a framing narrative in which the fictional Dr. Susan Calvin tells each story to a reporter (who serves as the narrator) in the 21st century. Although the stories can be read separately, they share a theme of the interaction of humans, robots, and morality, and when combined they tell a larger story of Asimov's fictional history of robotics.

Cubic Muse, Book cover, 1950 and Movie Poster, 1956


Alexander Reben - The First Law

The first robot to autonomously and intentionally break Asimov’s first law, which states: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The robot decides, for each person it detects, whether or not to injure them, in a way the creator cannot predict. While there currently are “killer” drones and sentry guns, there is either always some person in the loop to make decisions or the system is a glorified tripwire. The way this robot differs from what exists is its decision-making process. A land mine, for instance, is made to always go off when stepped on, so no decision. A drone has a person in the loop, so no machine process. A radar-operated gun again is basically the same as a land mine. Sticking your hand into a running blender is your decision, with a certain outcome. The fact that sometimes the robot decides not to hurt a person (in a way that is not predictable) is actually what brings about the important questions and sets it apart. The past systems also are made to kill when tripped or when a trigger is pulled, and hurting and injuring for no purpose is usually seen as a moral wrong. Obviously, a needle is a minimum amount of injury; however, now that this class of robot exists, it will have to be confronted.

Robot First Law can decide whether or not to inflict pain (Image: Alexander Reben)


Alexander Reben says his robot poses some interesting ethical questions (Image: Alexander Reben)


Should we be afraid of them?

photo by Markus Spiske on unsplash


Uncanny Valley In aesthetics, the uncanny valley is a hypothesized relationship between the degree of an object's resemblance to a human being and the emotional response to such an object. The concept of the uncanny valley suggests that humanoid objects which appear almost, but not exactly, like real human beings elicit uncanny, or strangely familiar, feelings of eeriness and revulsion in observers. "Valley" denotes a dip in the human observer's affinity for the replica, a relation that otherwise increases with the replica's human likeness.

Uncanny valley, Simple Wikipedia


BBC - Future - Robots: Is the uncanny valley real?, BBC.com

image: Fembot Wiki/The Bionic Woman

Meike Harde, QUIX

Image: YouTube

Uncanny Valley trip, Funnyjunk

io9.gizmodo.com



Image: YouTube

10 Creepy Examples of the Uncanny Valley , Stranger Dimensions

Image: Pixar’s Tin Toy


photo by Anna Dziubinska


Are you ready to live with it?


When?


photo by Bryan Goff on unsplash


Moore's law Moore's law is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years. The observation is named after Gordon Moore, the co-founder of Fairchild Semiconductor and Intel, whose 1965 paper described a doubling every year in the number of components per integrated circuit, and projected this rate of growth would continue for at least another decade. In 1975, looking forward to the next decade, he revised the forecast to doubling every two years. The period is often quoted as 18 months because of Intel executive David House, who predicted that chip performance would double every 18 months (being a combination of the effect of more transistors and the transistors being faster).
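The doubling rule is simple enough to project directly. The sketch below takes the 2,300-transistor figure often quoted for the 1971 Intel 4004 as its hypothetical starting point; the projection assumes a clean doubling every two years, which is only the idealized form of the observation.

```python
def transistors(year, base_year=1971, base_count=2300, period=2):
    """Project transistor count under an idealized Moore's law.

    Starting from base_count transistors in base_year, the count
    doubles once every `period` years (here, every two years).
    """
    doublings = (year - base_year) // period
    return base_count * 2 ** doublings


transistors(1971)  # the starting chip: 2,300 transistors
transistors(1991)  # twenty years = ten doublings: 2,300 * 1024 = 2,355,200
```

Ten doublings in twenty years already multiply the count by about a thousand, which is why the plot on the next page needs a logarithmic vertical scale to appear as a straight line.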

photo by Andrew Guan on unsplash


A plot of CPU transistor counts against dates of introduction; note the logarithmic vertical scale; the line corresponds to exponential growth with transistor count doubling every two years.


Moore's Law (i.e., the observation that the number of transistors that can be placed on an integrated circuit doubles approximately every two years) has been extended to almost anything technology related. In fact, Moore's Law is expected to hold for at least five more years and perhaps much longer. The question then becomes: what change will cause us to deviate from it? Some say it will be a dramatic slowdown in technological advancement. The human race will eventually hit a wall and be stuck at a given technological level. This might happen centuries from now, or it may be next year, but we will come upon that point; our brains can only handle so much advancement and innovation. Entomologist and population biologist Paul Ehrlich gave a lecture on June 27, 2008 in San Francisco in which he stated that the human brain has not changed much in about 50,000 years. Evidence like this, that cultural and technological evolution has far outpaced our biology, is what motivates the wall theory. But what if the opposite happened? What if that wall lay beyond the point where humans could create autonomous, thinking, self-advancing machines? Machines could reprogram their own source code and essentially learn freely without human intervention (think Data on Star Trek). Reaching this point in technology could trigger what has become known as The Singularity (or the Technological Singularity).

If The Singularity does occur, what do you think will happen? (935 votes)

- The machines will rise: AI will take over where human intellect cannot continue. 495 votes (52.9 %)
- Neither answer: Moore's Law will continue indefinitely. 303 votes (32.4 %)
- We'll hit a wall: we will simply stop advancing technologically. 137 votes (14.7 %)

photo by ipicgr on unsplash
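The percentages in the poll follow directly from the vote counts; a quick check, assuming the three counts shown are the complete tally:

```python
# Poll vote counts as printed in the text.
votes = {
    "The machines will rise": 495,
    "Neither answer": 303,
    "We'll hit a wall": 137,
}

total = sum(votes.values())  # 935 respondents in total
for option, count in votes.items():
    # Each share rounds to the figure quoted: 52.9 %, 32.4 %, 14.7 %.
    print(f"{option}: {100 * count / total:.1f} %")
```

The three shares reproduce the quoted 52.9 %, 32.4 %, and 14.7 %, confirming the counts and percentages are mutually consistent.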


Kurzweil Claims That the Singularity Will Happen by 2045 The singularity is that point in time when all the advances in technology, particularly in artificial intelligence (AI), will lead to machines that are smarter than human beings. Kurzweil’s timetable for the singularity is consistent with other predictions, notably those of Softbank CEO Masayoshi Son, who predicts that the dawn of super-intelligent machines will happen by 2047. But for Kurzweil, the process towards this singularity has already begun.

Raymond Kurzweil - The Singularity Is Near, 2005 The Singularity Is Near: When Humans Transcend Biology is a 2005 nonfiction book about artificial intelligence and the future of humanity by inventor and futurist Ray Kurzweil. Kurzweil describes his law of accelerating returns, which predicts an exponential increase in technologies like computers, genetics, nanotechnology, robotics, and artificial intelligence. Once the Singularity has been reached, Kurzweil says that machine intelligence will be infinitely more powerful than all human intelligence combined. Afterwards, he predicts, intelligence will radiate outward from the planet until it saturates the universe. The Singularity is also the point at which machine intelligence and humans would merge.


Ray Kurzweil - The Six Epochs of Life

photo by Greg Rakozy on unsplash


Ray Kurzweil’s infographic outlining his theory of the six epochs of evolution (credit: Ray Kurzweil)


photo by veeterzy on pixabay


Man and Robot This year, an artificial intelligence called "AlphaGo" beat the best human Go player and won the world championship. A robot named Sophia became the first robot citizen in the world. Researchers in the Netherlands claimed to have created the world’s first robots that procreate. These events mark not only the evolution of artificial intelligence but also new possibilities for our brand new world. In the past, robots were usually designed to work for humans, but with the development of science and technology, they are no longer merely our servants. The revolution in artificial intelligence and the changing role of the robot invite us to rethink the boundary between "us" and "them". We can start from the definition of "human". The human can be discussed in two parts: the bionic human and the psychological human. The psychological human can be classified into four parts: consciousness and thought, motivation and emotion, sexuality and love, and sleep and dreaming. In other words, the closer an artificial intelligence comes to these four areas, the more human it becomes; and if it performed all four just like us, we could see no difference between artificial intelligence and human. However, motivation, love, and dreaming are the most difficult parts for artificial intelligence, because they all relate to the problem of self-identity. Without self-identity, scientists can only create a confused robot instead of one with clear goals and confidence, and this remains a challenge at the technical level. We are still waiting for the singularity, which may bring the breakthrough. In science-fiction films, artificial intelligence shows humanity and sometimes acts more human than we do; that makes us think harder about how to treat it in an appropriate way. As the classic question in the film "A.I. Artificial Intelligence" puts it: if a robot could genuinely love a person, what responsibility does that person hold toward that Mecha in return?


It Gazes Imagine that an artificial intelligence has been created. Once it gets onto the internet, it can hack into everyone's computer and peep at anybody through the webcam. What does it feel at that moment? What does it think about? Will it feel lonely? Will it feel confused? If we could feel what it feels, could we treat it better? Could we see it as human? Through this cardboard piece, we may find our own answers.


Watching the person through the webcam

Reading the news on laptop


REFERENCES

Think Change India, “How do you know you are human? asks Sophia, the world’s first robot to get citizenship.” YourStory, November 1, 2017. https://yourstory.com/2017/11/sophia-worlds-first-robot-citizenship-saudi/
Wikipedia, “Consciousness and thought,” in “Human.” Last modified December 8, 2017. https://en.wikipedia.org/wiki/Human
Arvid Jense and Marie Caye, The Last Job on Earth. Design Academy Eindhoven, 2017. https://www.arvidandmarie.com/thelastjobonearth.html
Wikipedia, “Motivation and emotion,” in “Human.” Last modified December 8, 2017. https://en.wikipedia.org/wiki/Human
Affectiva, “Emotion AI Overview: What is it and how does it work?” 2017. https://www.affectiva.com/emotion-ai-overview/
Ed Boyden, “The importance of engineering motivation into intelligence.” MIT Technology Review, September 4, 2009. https://www.technologyreview.com/s/415157/the-singularity-and-the-fixed-point/
Wikipedia, “Sexuality and love,” in “Human.” Last modified December 8, 2017. https://en.wikipedia.org/wiki/Human
Owen Harris (dir.) and Charlie Brooker (writer), “Be Right Back,” Black Mirror. Television series episode, 2013.
Jacek Krywko, “Scientists believe they’ve nailed the combination that could help robots feel love.” Quartz, November 16, 2016. https://qz.com/838420/scientists-built-a-robot-that-feels-emotion-and-can-understand-if-you-love-it-ornot/
Amanda Devlin and Emma Lake, “Robot romps: What is a robot sex doll, why has a Barcelona brothel replaced women with blow-up dolls and how much do they cost?” The Sun, November 30, 2017. https://www.thesun.co.uk/tech/2084051/robot-sex-doll-barcelona-sex-brothel-cost/
Devin Coldewey, “Robots date, mate, and procreate 3D printed offspring in ‘Robot Baby’ project.” TechCrunch, May 31, 2016. https://techcrunch.com/2016/05/31/robots-date-mate-and-procreate-3d-printed-offspring-in-robotbaby-project/
Hajime Sorayama, Sexy Robot. Artspace Company Y LLC, 1983.
Hanson Robotics, “Jules.” 2017. http://www.hansonrobotics.com/robot/jules/
David Orban, “Democratizing Access to Artificial Intelligence.” November 22, 2017. http://www.davidorban.com/2017/11/democratizing-access-to-artificial-intelligence/
Wikipedia, “Artificial neural network.” Last modified December 17, 2017. https://en.wikipedia.org/wiki/Artificial_neural_network
Lisa Tagliaferri, “An Introduction to Machine Learning.” DigitalOcean, September 28, 2017. https://www.digitalocean.com/community/tutorials/an-introduction-to-machine-learning
Wikipedia, “Machine learning.” Last modified December 17, 2017. https://en.wikipedia.org/wiki/Chatbot
Radu Raicea, “Want to know how Deep Learning works? Here’s a quick guide for everyone.” freeCodeCamp, October 23, 2017. https://medium.freecodecamp.org/want-to-know-how-deep-learning-works-heres-a-quick-guide-foreveryone-1aedeca88076
Matteo Kofler, “Deep Learning with Tensorflow: Part 1, theory and setup.” Towards Data Science, August 2, 2017. https://towardsdatascience.com/deep-learning-with-tensorflow-part-1-b19ce7803428
Wikipedia, “DeepDream.” Last modified December 7, 2017. https://en.wikipedia.org/wiki/DeepDream
Margaret Rouse, “Definition: image recognition.” TechTarget, May 1, 2017. http://whatis.techtarget.com/definition/image-recognition
Margaret Rouse, “Definition: speech recognition.” TechTarget, December 1, 2016. http://searchcrm.techtarget.com/definition/speech-recognition
Duncan Jones (dir.), Moon. Film, 2009.
Ridley Scott (dir.), Blade Runner. Film, 1982.
Maruti Techlabs, “What Are The Best Intelligent Chatbots or AI Chatbots Available Online? A look into 7 engaging chatbots; their strengths and what makes them unique.” Chatbots Magazine, April 17, 2016. https://chatbotsmagazine.com/which-are-the-best-intelligent-chatbots-or-ai-chatbots-available-onlinecc49c0f3569d
Wikipedia, “Turing test.” Last modified December 20, 2017. https://en.wikipedia.org/wiki/Turing_test
Wikipedia, “A.I. Artificial Intelligence.” Last modified December 20, 2017. https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence
Wikipedia, “I, Robot.” Last modified December 2, 2017. https://en.wikipedia.org/wiki/I,_Robot
Alexander Reben, “The First Law.” Art project. http://areben.com/project/the-first-law/
Rob Schwarz, “10 Creepy Examples of the Uncanny Valley.” Stranger Dimensions, November 25, 2013. https://www.strangerdimensions.com/2013/11/25/10-creepy-examples-uncanny-valley/
Karl F. MacDorman and Steven O. Entezari, “Individual differences predict sensitivity to the uncanny valley.” School of Informatics and Computing, Indiana University, 2012.
Wikipedia, “Uncanny valley.” Last modified December 12, 2017. https://en.wikipedia.org/wiki/Uncanny_valley
Wikipedia, “Moore's law.” Last modified December 17, 2017. https://en.wikipedia.org/wiki/Moore%27s_law
“The Six Epochs of Life, a mural by Nick Mayer inspired by Ray Kurzweil’s The Singularity Is Near.” KurzweilAI.net, February 21, 2012. http://www.kurzweilai.net/six-epochs-of-life-mural-by-nick-mayer-inspired-by-ray-kurzweil
Christianna Reedy and Dom Galeon, “Kurzweil Claims That the Singularity Will Happen by 2045.” Futurism, October 5, 2017. https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045/




