How To Be More Or Less Human

Page 1


2

Table of contents


3

Introduction 4-5
Interview 6-13
Emails with Imagga 14-27
Performance 28-60
Python Script 61-63


4

Introduction


5

How To Be More Or Less Human is a performance investigating how humans are identified by computer vision software. By looking at how the human subject is identified and classified by image recognition software, a representation of the human body takes shape. The living presence of a human being cannot be sensed by computer vision, so the human subject becomes a quantifiable data object with a set of attributes and characteristics. Seeing ourselves in this digital mirror allows us to reflect on other models of perception and to develop an understanding of how the human subject is ‘seen’ by the machinic ‘other’. Looking at ourselves through the automated perception of image recognition can highlight how gender, race and ethnicity have been processed into a mathematical model. The algorithm is trained to ‘see’ certain things, forcing the human subject to identify themselves within the frame of computer vision.


6

Interview


7

Max Dovey is 28.3% man, 14.1% artist and 8.4% successful. His performances confront how computers, software and data affect the human condition. Specifically, he is interested in how the meritocracy of neoliberal ideology is embedded in technology and digital culture. His research is in liveness and real-time computation in performance and theatre.

Looking at your work, you seem to enjoy critiquing corporate lifestyle. Do you agree with that statement?

Yes. A lot of digital services and technology companies like to commodify and exploit people, and I want to present and perform this tendency of capitalism in techno-culture. But I have never seen it as the main subject; it is always there, always present.

One of your past projects was ‘The Emotional Stock Market’. How did that come about?

It came from me using Twitter as a script to perform with, and using the real-time nature of social media as material to generate stories and narratives. It was made in the form of the classic stock market floor, but instead of selling goods we were selling emotions, with search words like “happy”, “love” and “sad”. There is an irony there: now corporations are doing that, they are starting to commodify emotions.

You had another project while at the Piet Zwart Institute, ‘We Believe In Cloud Computing’, where you sent printed messages into the sky with balloons. What is the story of the piece, and what are your opinions on the cloud?

It is problematic to deal with the concept of the cloud and digital materiality. It is computers and services, but on someone else’s computer, therefore I thought it would be funny to


8

take that abstraction literally and send data to the sky with balloons. Cloud computing is a belief more than a brand. It is backing everything up so that you no longer have responsibility over it. That is the ideology of the cloud.

You are questioning some systems, but are you perhaps questioning even more the reason the system exists in the first place? It was not really constructed for how it is used now.

That is an interesting idea... yes, of course, like social media. It was not constructed in the first place to be what it is now. Mark Zuckerberg made it to rate women, then expanded on it, and it turned into what it is now. Corporations have learned to capitalise on the technology.

What do you think happens after Facebook?

Well, now we are so used to working with the internet on a really personal level, putting yourself in the centre. It has capitalised on basic human needs: being loved, being popular. The basic human desire to feel close to others has become an economic commodity, and I don’t know what happens after that. There is a movement to take back control, to make new systems, but even I feel it is hard to go back. You kind of make yourself an outsider now if you log off. Even if I want to leave, the rest stay. Some people only use Facebook to keep in contact, so it is hard to break off; you become a Luddite. We are all free to leave, but the market demands that I have a Facebook account.

You started with an idea to do your performance in an office environment, but was that the original idea? How did the idea come about?

Last year, I was using a program to see if


9

objects could articulate language; for example, if the coffee grinder could speak, what would it say? The program gave a voice to the objects and I was really interested in how the computer can be a co-performer. I am taking a product and misusing it in a really different way compared to what it was made to do. You really need to go to the extremes to show the absurdity of it. I found it really fascinating that a computer can be so confident in what it sees.

What is the program you are using?

It is called Imagga and it is used to organise images based on keywords and values it finds in them. It uses a programmed data set; that is how it can see a “man” in a photo of a man. But the program is really not accurate at all; it has a programmed bias in what it “sees”. It shows an image of how society values feelings and imitation. What is successful or what is happy is totally a construct of whoever has programmed it. Think of the fairy tale of Snow White, where the queen asks “mirror, mirror on the wall”, only it is a computer program and not a mirror that tells us who is the fairest of them all. You need to be on board with me for the sake of the piece to


10

work, to mean anything. The values themselves can mean nothing, but they are skewed so that nothing is 100% man or 100% sexy.

You want to take it to the absurd and make it funny?

There is a danger there, that people will not take the topic seriously. I am slightly amazed at how a computer can visually interpret the world through shapes, colours and patterns, and I am concerned that everything in our world can be categorised by a computer. So I want the computer to make mistakes, I want to show the error in the system, but I don’t want the computer to look stupid; it is far smarter than I am.

Do you see a danger in making computers funny, or in humanising computers?

That is how the robotics and AI industry are marketing new technology: putting cute faces on computers to make our relationship towards them more empathetic. I want to imply that the computer’s model of perception is the dominant way of seeing, rather than human perception, which is far too subjective. The way the computer sees things is a much more efficient way of seeing than the human one. We fall in love with things and


11

become empathetic towards them.

You started with the idea to have it in an office, but has that changed?

I started by setting the story of the performance in the office of a Silicon Valley software company, the company that makes the program. But all the correspondence ended with them trying to improve their software and wanting to charge me over 1000 euros for a customised algorithm that would identify me as a man with 100% confidence. The piece is now set in a health centre: the patient gets sent in and goes through a series of exams on his vision. Then I take off my clothes and see how my gender is interpreted by the image recognition software. The suit makes the man, but when I am naked I become 5% man and 5% lady. That is how I figured out how the program works: dressing things like a man to see how human the objects are. A dustbin is really a human with just a suit on. I was looking at “visual agnosia”, a human condition that makes naming things difficult, like calling a foot a shoe. There is a book called ‘The Man Who Mistook His Wife for a Hat’ by Oliver Sacks (1985), and the computer program I am using perceives the world in a really similar way. It guesses what an object is based on associations: apple-fruit-banana-food.

So the human shape is irrelevant; it is all about the suit and clothing?

Clothing is one element used by the computer to identify a human subject. I got this theory when it saw me from behind and the software perceived me as a lady. The white shirt and suit are the identifiers of the human male, and in the end I get replaced by a dustbin, because a


12

dustbin is more human than me. I am just another data object in front of the computer. I like the existential crisis it creates: that I can be replaced by a dustbin, because a dustbin with a white shirt on can be equal to a living being.

You are touching on the topic of humans getting replaced by computers.

Yes, like what is our role now? If a dustbin can be as human as me, then what is my purpose? It is kind of a sci-fi fantasy we all have, and there are two ends of the argument. One is that we will always be able to program the computer and therefore will always be in control; the other is the humanist and technophobic argument that the computer is going to take over, the singularity, that it goes beyond your control. I like to play with those feelings, for I have some of those fears too, of us losing control of computers. When I am at the self-service checkout at Albert Heijn I get really frustrated with it. I think it is not even close to as good as a human, but we have still decided to use it over a human


13

being, for they are quicker, more efficient and need fewer breaks. So in this performance I am being analysed by the computer as if at an automated health service: “Yes, you there, come in, take off your clothes and we will see how human you are.” And that is kind of happening now; we go online and check up on our illnesses. 80% of people now self-diagnose themselves first on Wikipedia. So we can laugh about it. What I am doing is being really eccentric about the confidence levels of the computer. The computer is telling me that I am 40% human; I can live with that. It is kind of how people react when they read tabloid newspapers or watch all the horrible chat shows. I am taking it to the extreme to reveal the errors and problems with software that could one day be better than me at certain tasks. In 1950 Alan Turing proposed the famous Turing test, in which he speculated on a computer’s ability to deceive someone into thinking it was human. And in 1966 Joseph Weizenbaum created ELIZA, a chatbot that could imitate a human by rephrasing sentences and conversing with a person. He was shocked at how easily a computer could deceive a human, that we could suspend our disbelief to such an extent as to think a simple computer program could be human-like. But as it transpires, it is not a case of how well computers imitate us; it is how much we can incorporate and imitate them. So let us take it to the extreme and ask: what happens when a dustbin with a shirt on is more of a man than me?


14

Emails with Imagga


15

Imagga is an Image Recognition Platform-as-a-Service providing Image Tagging APIs for developers and businesses to build and monetize scalable image-intensive apps in the cloud.

The Technology: We develop and democratize technologies for advanced image analysis, recognition and understanding in the cloud. Our portfolio includes proprietary image auto-tagging, auto-categorization, color extraction and search, and smart cropping technologies. We’ve designed an infrastructure that can handle huge loads of images and auto-scale to accommodate a lot of concurrent queries.

The Solutions: We offer a set of APIs for automated image categorization and meta-data extraction, intended for business customers. Our APIs can be used either separately or in combination, and they can save a lot of time and effort otherwise spent on manual curation of images. The application of our technology also leads to a better user experience for the end-customers of our business customers. And last but not least, the level of automation that we offer enables a lot of monetization opportunities that are simply not feasible, or even not possible, if huge amounts of images need to be handled manually. Currently we offer our APIs on a platform-as-a-service, pay-as-you-go basis.

(Source: http://imagga.com/company)


16

2 Apr 2015 Hi Max, We are excited to announce some changes to our API pricing policy. We’ve had lots of feedback and requests for a more affordable way to access our APIs. Today we are announcing the Developer Plan for Imagga APIs, priced at $14/month, which will allow you to use one of our APIs with up to 12 000 calls a month (3000/day, 2 requests/second). We believe this plan will give you more flexibility and the opportunity to apply our breakthrough technology at a more affordable price. The Hacker plan remains free, but we are reducing the monthly calls to 2000 (200/day, 1 request per second) and it will be available, as before, just for the image tagging API. The change will take place on the 15th of April. We are eager to see you plug our APIs into your projects. Send us feedback and any ideas you have regarding our technology offering in general, or any tip you want to share. Happy tagging!


17

2 Apr 2015 Hi Pavel, That’s good news about the developer plan, I am definitely interested. Perhaps you can assist me with some questions regarding the auto-tagging feature; I have emailed sales@imagga twice now and not received any response. Below is a copy of the email I’ve sent. If you could direct me to someone who may know a bit more about this I’d be very grateful. Many thanks, Max. Hello, I currently only have a hacker account but am looking to upgrade to one of your other services, and I had a few more questions regarding the auto-tagging feature of Imagga. I have been mainly using it to auto-tag pictures of humans, and although I am quite satisfied by the wide range of results, I wanted to have a better understanding of the human terms available within the Imagga dictionary. I’ve seen ‘happiness, happy, smile, love, sexy and passion’, but I was wondering if you could inform me of the list of human-associated terms that are based around emotions. I am looking to use Imagga for a project I am doing but would like to have a better understanding of the vocabulary available to describe human emotion.


18

9 Apr 2015 Hey Max, I’m really sorry for the inconvenience caused by not answering your mail! We do not have a specific vocabulary for human expressions and emotions; I might say that you’ve listed most of them in your mail. But we can offer custom training based on user-provided data. For example, you can collect images with different human expressions and emotions and organise them into the needed categories/tags. Then we can train a custom API algorithm based on your data. Usually we have a pricing policy for the custom training, but your case sounds interesting and we can think about some collaboration. We are happy that you find our new Developer plan useful! If you have any other questions, please let me know. Best, Pavel from Imagga


19

15 Apr 2015 Hi Pavel, Thanks for replying. I hope you don’t mind answering another question of mine before I get the developer plan. I had previously been using the auto-tagging API at ‘http://api.imagga.com/draft/tags’ and was posting an image using Python. Now this API seems to have been taken offline and I am doing it via a GET request to ‘https://api.imagga.com/v1/tagging’. Is this the correct address that I should be using? If so, is it possible to post an image rather than do a GET request via a URL link to an image? I would rather post images from my local machine than from a server, if that is still possible. As for a customised API, this is very interesting, however I won’t need that for this project. I am currently developing a performance using Imagga auto-tagging as a character in a theatre show. In this piece I am exploring what elements trigger different tags and which visual images produce 100% confidence. That is why I asked about the human-related words: I am trying to perform with Imagga’s auto-tagging tool to reach 100% confidence, for example to become 100% human or 100% man. So I am interested in finding out, through a performed process, what visual elements generate specific tags, to learn how tags are related to certain visual elements. The performance is still in development but will be toured in the summer, so if you were interested in collaborating it would be great to talk more about how that could work. Best, Max Dovey


20

21 Apr 2015 Hi Max, You can upload an image for tagging through the /content endpoint. Just upload (with a POST request) your file to https://api.imagga.com/v1/content first; the API will respond with a content id which you can then provide to the /tagging endpoint via the “content” parameter (e.g. https://api.imagga.com/v1/tagging?content=mycontentid) and you will be given tags for the given image. You can also take a look at the respective docs for the tagging and content endpoints: http://docs.imagga.com/#tagging-endpoint http://docs.imagga.com/#content Feel free to write me if there is anything else I can help you with. Best, Ivan Penchev, Imagga API Team
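The two-step flow Ivan describes (POST the image to /content, then pass the returned content id to /tagging) can be sketched in Python. A minimal sketch: the endpoint paths come from his email, but the response field names ("uploaded", "id") and the "image" form-field name are assumptions about the 2015 v1 API, not confirmed by the correspondence.

```python
import json

import requests

API_BASE = "https://api.imagga.com/v1"


def extract_tags(response_text):
    """Flatten a /tagging JSON response into (tag, confidence) pairs."""
    data = json.loads(response_text)
    return [(t["tag"], t["confidence"]) for t in data["results"][0]["tags"]]


def upload_and_tag(path, api_key, api_secret):
    """POST a local file to /content, then tag it via the returned content id.

    Endpoint names are taken from Imagga's email; the response shape
    ('uploaded', 'id') and the 'image' field name are assumptions.
    """
    auth = (api_key, api_secret)
    with open(path, "rb") as f:
        r = requests.post(API_BASE + "/content", files={"image": f}, auth=auth)
    content_id = r.json()["uploaded"][0]["id"]  # assumed response shape
    r = requests.get(API_BASE + "/tagging",
                     params={"content": content_id}, auth=auth)
    return extract_tags(r.text)
```

The same extract_tags helper also works on responses from the url-based tagging call used elsewhere in this book.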


21

21 Apr 2015 Hi Ivan, Thanks for this. I’ve signed up for the developer plan and have begun planning to execute auto-tagging via the browser webcam. However, Ajax does not allow cross-domain server requests. What do you recommend as the best way to do this? With a PHP script on my server that then posts to Imagga? Thanks, Max

21 Apr 2015 Hi Max, CORS should be allowed in the API. Could you share with me how you are submitting the request with Ajax, so I can help you identify the issue? Thank you. Here is example code for using the tagging API with jQuery Ajax. You should just enter your API key and secret for the respective variables’ values. Best Regards, Ivan Penchev


22

21 Apr 2015 Hello again, I forgot to include a link to the mentioned sample code in my last email. Sorry about that. Here it is: http://jsfiddle.net/ivanvpenchev/ckkb1uL8/. Thanks. Best, Ivan Penchev

21 Apr 2015 Hi Ivan, Thanks for the quick response and the example, that’s very helpful. I want to take images from the webcam, so do I have to post the image into the DOM to get a source URL? In what format can I post it to the tagging API? I’ve got a rough JSFiddle here: http://jsfiddle.net/maxdovey/5yyg4eff/ MX


23

22 Apr 2015 Hi Max, Thanks for asking. We are not supporting base64 encoded image uploads yet so I think the best way to do this would be to send the base64 data from imgData to a php script on your side, decode it and I can think of two ways to continue from there. The first one is to send the decoded data to our /content endpoint via a POST request (like you are making an ordinary file upload). Our API will issue you a content id which you should then send to our /tagging endpoint through the “content” parameter. The second way is to save the file on your server and just submit a publicly accessible URL directly to the /tagging endpoint via the “url” parameter. Hope this helps. Best Regards, Ivan Penchev
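Ivan suggests decoding the base64 webcam data in a server-side script before forwarding it to the API. He proposes PHP; as a sketch, the same decode step looks like this in Python (a hypothetical helper, not part of the correspondence): split the data URL produced by canvas.toDataURL() into its MIME header and raw image bytes, which can then be POSTed to /content or saved to a publicly accessible URL for /tagging.

```python
import base64


def decode_data_url(data_url):
    """Split a data URL such as 'data:image/png;base64,....' into
    (mime_type, raw_bytes). Mirrors the server-side decode step
    described in the email, in Python rather than PHP."""
    header, b64_data = data_url.split(",", 1)  # e.g. "data:image/png;base64"
    mime = header.split(";")[0].split(":")[1]
    return mime, base64.b64decode(b64_data)
```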


24

29 Apr 2015 Hi Pavel, I’ve signed up for the Developer plan and am really enjoying your auto-tagging software. I was wondering if it was possible for someone to tell me a little bit more about how the software is trained to recognise certain things. Do you have any information on the software training process, or on what image library you are using? This would help me out a lot. Many thanks, Max Dovey


25

29 Apr 2015 Hey Max, Sorry for the late response! I’m glad to see that you are satisfied with our service! On your first question, you can look at our technology page for more info: https://imagga.com/technology/auto-tagging.html. If you have any more specific questions on this, please let me know. On the second one: we need 1000+ sample images per category/tag and then we can run a training process based on them. Do you have a specific use case that needs custom training? Regards, Pavel from Imagga


26

2 May 2015 Hey Pavel, I would like the auto-tagging to be 100% confident with gender. For custom training, would I have to submit 1000+ pictures of men and women to achieve 100% confidence? Thanks, Max

4 May 2015 (internal Imagga message, translated from Bulgarian) Tell him to give us the images and we will make a test. Even if it is not 100%, when we are not sure the returned confidence will be lower, and he can decide whether to show it or send it to a moderator.


27

4 May 2015 Hi Max, Sorry for the late response! We can do the test and see what the confidence will be. You can send us the sample images grouped by gender. If the results and the confidence rate are satisfactory, you’ll be charged $1199, which is our standard rate for custom training. If the results don’t fit your expectations, you’ll not be charged anything. Let me know if you want to proceed with this. Regards, Pavel from Imagga


28

Performance


29

black 11.84%
person 9.90%
3d 9.38%
dark 9.99%
people 9.73%
business 8.12%
light 7.87%
man 7.56%
shadow 7.52%
interior 7.50%


30

leg 22.58%
suit 21.82%
people 22.06%
trouser 21.46%
person 18.99%
man 17.46%
adult 16.96%
business 16.92%
caucasian 15.95%
male 15.15%


31

suit 46.80%
business 38.15%
man 38.48%
businessman 36.99%
people 33.30%
person 31.86%
male 31.19%
corporate 30.95%
professional 28.58%
adult 28.00%


32

suit 59.61%
man 44.78%
businessman 47.68%
business 43.74%
corporate 40.55%
male 36.77%
executive 35.00%
professional 33.85%
people 32.22%
success 31.03%


33

suit 46.80%
man 38.15%
man 38.48%
business 36.99%
businessman 33.30%
people 31.86%
person 31.19%
male 30.95%
corporate 28.58%
professional 28.00%


34

people 33.18%
person 29.84%
man 30.75%
business 28.12%
adult 23.63%
businessman 22.86%
male 21.82%
success 21.58%
work 21.29%
caucasian 21.22%


35

people 28.87%
leg 27.65%
person 27.65%
man 27.02%
trouser 24.98%
adult 24.38%
business 22.51%
male 21.30%
caucasian 20.26%
women 20.26%


36

underwear 73.42%
adult 25.49%
body 39.49%
health 24.77%
sexy 23.93%
slim 21.13%
fit 20.62%
skin 20.57%
leg 19.63%
healthy 19.25%


37

underwear 47.29%
adult 24.95%
body 36.40%
health 23.98%
sexy 22.73%
swimsuit 22.57%
caucasian 20.31%
fit 19.64%
attractive 18.82%
healthy 18.44%


38

underwear 73.42%
adult 25.49%
body 39.49%
health 24.77%
sexy 23.93%
slim 21.13%
caucasian 20.76%
fit 20.62%
skin 20.57%
leg 19.63%


39

underwear 62.97%
adult 26.69%
body 40.29%
sexy 26.40%
health 26.27%
slim 21.50%
skin 20.39%
fit 20.02%
healthy 19.06%
attractive 18.51%


40

underwear 62.97%
adult 26.69%
body 40.29%
sexy 26.40%
health 26.27%
slim 21.50%
skin 20.39%
fit 20.02%
healthy 19.06%
attractive 18.51%


41

weight 26.61%
body 24.89%
dumbbell 25.54%
attractive 22.79%
underwear 21.98%
person 21.80%
adult 20.07%
caucasian 18.90%
sexy 18.89%
health 17.63%


42

underwear 74.38%
swimsuit 33.30%
body 39.21%
swimming trunks 25.09%
sexy 23.58%
adult 23.44%
health 23.29%
fit 21.17%
slim 20.97%
skin 18.47%


43

underwear 27.84%
adult 25.27%
body 27.83%
caucasian 23.60%
leg 21.99%
person 21.28%
sexy 21.10%
attractive 20.39%
hand 19.51%
trouser 17.68%


44

people 34.03%
business 31.80%
man 32.47%
person 29.08%
male 28.97%
adult 28.94%
businessman 26.74%
caucasian 24.81%
happy 22.06%
smiling 20.58%


45

people 30.63%
adult 24.67%
person 29.42%
man 22.49%
caucasian 21.17%
business 20.10%
leg 19.60%
male 19.24%
attractive 18.98%
women 18.90%


46

man 34.67%
people 32.70%
suit 33.15%
business 32.64%
businessman 29.27%
male 28.83%
person 28.38%
adult 27.61%
corporate 25.72%
work 24.74%


47

business 44.93%
corporate 38.13%
businessman 42.94%
man 37.97%
success 35.56%
people 34.40%
executive 33.67%
professional 32.66%
person 30.78%
suit 30.47%


48

people 31.48%
man 29.35%
person 31.38%
adult 25.96%
suit 23.37%
male 23.05%
caucasian 23.00%
business 22.94%
attractive 22.09%
businessman 21.08%


49

suit 64.51%
man 39.00%
businessman 42.89%
business 38.80%
corporate 35.54%
male 31.26%
people 29.04%
executive 28.62%
garment 27.94%
office 27.56%


50

texture 18.13%
wallpaper 16.02%
design 17.33%
frame 14.33%
material 12.91%
pattern 12.77%
backdrop 11.92%
highlight 11.39%
graphic 11.25%
sheet 11.22%


51

furniture 39.38%
room 35.23%
medicine chest 35.43%
cabinet 22.90%
furnishing 19.36%
interior 19.36%
bathroom 15.40%
home 12.23%
wardrobe 11.51%
floor 10.89%


52

highlight 23.63%
pattern 19.28%
texture 19.46%
design 17.54%
wallpaper 14.74%
graphic 13.56%
art 13.37%
material 13.09%
space 12.88%
light 12.02%


53

black 13.06%
sign 11.86%
scroll 12.70%
design 11.69%
3d 11.40%
graphic 11.19%
art 10.31%
business 10.16%
symbol 9.75%
night 9.12%


54

soap dispenser 18.41%
container 14.12%
dispenser 17.84%
black 13.26%
milk 12.00%
liquid 11.94%
paper towel 10.85%
towel 10.56%
food 9.17%
cup 8.59%


55

lab coat 40.27%
man 31.08%
coat 34.73%
male 29.20%
people 27.84%
overgarment 25.00%
adult 24.36%
person 23.69%
worker 23.60%
caucasian 22.87%


56

adult 20.59%
people 19.93%
pretty 20.05%
caucasian 19.88%
sexy 19.46%
attractive 19.11%
person 18.17%
body 17.71%
hand 17.16%
lifestyle 17.07%


57

black 12.96%
design 11.31%
scroll 11.36%
sign 10.75%
graphic 10.32%
business 10.03%
art 9.52%
3d 8.90%
symbol 8.49%
dark 8.23%


58

guillotine 17.81%
tent 13.38%
instrument of execution 13.87%
instrument 10.84%
cradle 10.39%
shelter 9.80%
structure 9.57%
person 9.43%
people 9.35%
device 9.05%


59

light 10.85%
instrument of execution 8.10%
guillotine 10.12%
home 7.93%
man 7.59%
house 7.44%
color 7.33%
design 7.14%
black 7.02%
black 6.82%


60

light 10.85%
instrument of execution 8.10%
guillotine 10.12%
home 7.93%
man 7.59%
house 7.44%
color 7.33%
design 7.14%
black 7.02%
black 6.82%


61

Python code


62

import sys, os, time
import requests
import json
import subprocess

api_key = 'acc_781af1b59b1fe08'
api_secret = '6fd9e104945f5d3dc973ec1bcaca0fa6'

with open('tags.txt', 'a') as outfile:
    # upload each picture to your server
    for root, dirs, files in os.walk("/Users/user/Desktop/PZI/1/PYTHON/imagga-py-master/tests/profile"):
        count = 0
        # path = root.split('/')
        for file in files:
            path = root + '/' + file
            # print path
            # print file
            try:
                # upload pic to folder on your server and save directory
                cmd = os.system("scp {} max@headroom.pzwart.wdka.hro.nl:public_html/images/profile/".format(path))
                # print cmd
                url = "http://headroom.pzwart.wdka.hro.nl/~max/images/profile/{}".format(file)
                # print url
                r = requests.get('https://api.imagga.com/v1/tagging?url=' + url, auth=(api_key, api_secret))
                data = json.loads(r.text)
                listy = data['results'][0]['tags']
                for i in listy:
                    word = i['tag']
                    confidence = i['confidence']
                    if word == 'man':
                        print file
                        conf = "%.1f" % confidence + "%"


63

                        print word, conf
                        # outfile.write(file)
                        # outfile.write('\n')
                        # outfile.write(word)
                        # outfile.write(" ")
                        # outfile.write(conf)
                        # outfile.write('\n')
                        # outfile.close()
                time.sleep(2)
            except:
                e = sys.exc_info()[0]
                print 'error'
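The script's inner check, finding the 'man' tag in a response and formatting its confidence, can be isolated as a small helper that works without the network. This is a sketch for testing purposes, not part of the original script; it mirrors the parsing the script already does on data['results'][0]['tags'].

```python
def man_confidence(data):
    """Return the formatted confidence for the 'man' tag in a parsed
    /tagging response dict, or None if the tag is absent. Mirrors the
    script's inner loop: word == 'man' -> "%.1f" % confidence + "%"."""
    for entry in data["results"][0]["tags"]:
        if entry["tag"] == "man":
            return "%.1f" % entry["confidence"] + "%"
    return None
```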


This book was made by Thomas Walskaar (www.walska.com) as documentation of Max Dovey's graduation project "How To Be More Or Less Human" at the Piet Zwart Institute (2015), Rotterdam, NL.

