THE MACHINE IS LEARNING
Artificial Intelligence as artistic practice
a bachelor thesis research by IULIA RADU
coordinators: prof. Manuel Ehrenfeld, prof. Rudy Melli
Brera Academy of Fine Arts — School of New Media Arts 2017
INDEX

INTRO

I — BEHIND MACHINE INTELLIGENCE: HOW COMPUTERS SEE
1.1 From perspective to pixels
1.2 Introduction to machine learning
1.3 How ML started touching art fields

II — CASE STUDIES AND ART APPLICATIONS
2.1 Style transfer and going mainstream
2.2 Categorisation via Google Arts & Culture
2.3 Machine learning vs art historians
2.4 Recognition: past and present face to face
2.5 Mario Klingemann's joy of order
2.6 Other examples: Microsoft's Cognitive Services
2.7 The use of data as creative medium

III — CONCLUSIONS
3.1 Why should artists consider ML?
3.2 Is the artistic figure still relevant?

BIBLIOGRAPHY

SITES
INTRO

Marshall McLuhan: "Art is anything you can get away with." [1]

Since the Stone Age, humans have used tools to extend their creative capabilities and to adapt to changing needs. Under these circumstances, the artistic field is becoming ever more complex. Over the last decade, creative behaviours have changed profoundly: the way we made art in the past differs from the way we make art today, and our notion of what art is must adapt in stride with new art forms. Like the invention of applied pigments, the printing press, photography and computers, machine intelligence is, some believe, an innovation that will profoundly affect art. Artists have usually been responsive to experimenting with, and even adopting, concepts and devices arising from new scientific and technological developments. Artificial intelligence is no exception. Recent artistic experiments using machine intelligence have produced results that should make us re-examine our preconceptions about creativity and machines.

Artificial intelligence has made significant strides since the 1950s, when it became established as a field. Today it is used across industries. Thanks to constant developments in the field of machine learning, machines can already process spoken language, recognise human faces, detect our emotions and target us with highly personalised media content. Given AI's vast range of application and artists' inclination to experiment with the latest technologies, it is not surprising that AI and art have merged. Creative re-appropriation of AI techniques is necessary in order to refocus machine learning's influence on visual culture.
[1] Marshall McLuhan, The Medium is the Massage: An Inventory of Effects, with Quentin Fiore, produced by Jerome Agel. 1st ed.: Random House, 1967; reissued by Gingko Press, 2001, pp. 132-136.
Artistic metaphors help clarify that which is otherwise shrouded by layers of academic jargon, making these highly specialised subjects more accessible to everyday people. Taking such an approach, we can repurpose these academic tools and harness their capabilities for creative expression and empowerment. As our technologies have matured, and especially as everyday technologies have come to combine sophisticated computer processing with worldwide communication networks, we are embarking upon increasingly complex interactions.

Interest in artistry from an AI perspective has begun to blossom, with yearly conferences, schools and PhD programs dedicated to computational creativity. A constant flow of ideas and techniques that are at least computationally creative in intention has moved into the mainstream: AI characters, artificial musicians, journalist bots, generative architecture and neural nets that "dream". Excitement around AI as an art medium abounds. Futurists have estimated that by the year 2030, computers in the price range of inexpensive laptops will have computational power equivalent to that of human intelligence. The implications of this change will be dramatic and revolutionary, presenting significant opportunities and challenges to artists. More than ever, artists need to look beyond human perception and consider the effects of their practice on the world, and on what it means to be human in the era of artificial intelligence.

This paper examines computational creativity, emphasising the use of artificial intelligence in the field of New Media arts and the importance of the artist figure in this process. As the field is an ongoing exploration of all sorts of artistic manifestations, the current research focuses on machine learning techniques applied to images and visual studies only.
I — BEHIND MACHINE INTELLIGENCE

Blaise Aguera y Arcas, director of Google's Machine Intelligence group in Seattle: "We're witnessing a time of convergences: not just across disciplines, but between brains and computers; between scientists trying to understand and technologists trying to make; and between academia and industry. We don't believe the convergence will yield a monoculture, but a vibrant hybridity." [2]

1.1 From perspective to pixels

WE. Variant of a manifesto by Dziga Vertov (1923): "I am kino-eye, I am a mechanical eye. I, a machine, show you the world as only I can see it. Now and forever, I free myself from human immobility, I am in constant motion, I draw near, then away from objects, I crawl under, I climb onto them. I move apace with the muzzle of a galloping horse, I plunge full speed into the crowd, I outstrip running soldiers, I fall on my back, I ascend with an airplane, I plunge and soar together with plunging and soaring bodies. Now I, a camera, fling myself along their resultant, manoeuvring in the chaos of movement, recording movement, starting with movements composed of the most complex combinations. Freed from the rule of sixteen-seventeen frames per second, free from the limits of time and space, I put together any given points in the universe, no matter where I've recorded them. My path leads to the creation of a fresh perception of the world. I decipher in a new way a world unknown to you." [3]
[2] https://medium.com/artists-and-machine-intelligence
[3] http://artsites.ucsc.edu/faculty/Gustafson/FILM%20161.F06/readings/vertov.pdf
The history of images is a history of pigments and dyes, oils, acrylics, silver nitrate and gelatin — materials that one could use to paint a cave, a church, or a canvas. One could use them to make a photograph, or to print pictures on the pages of a magazine. In all these cases, our eyes are the converging point of the information. In order to decipher paintings more easily, we came up with various tricks, the most important being the simulation of depth through perspective. With machines now mediating the capture of the visible, it is no longer possible to imagine everything converging on human vision. The digitalisation of analog mediums implies that the difference between viewer and image begins to blur. What is truly revolutionary about the advent of digital images is the fact that they are fundamentally machine-readable: they can be seen by humans only in special circumstances and for short periods of time. A photograph shot on a phone creates a machine-readable file that does not reflect light in such a way as to be perceptible to a human eye. A secondary application, such as a software-based photo viewer paired with a liquid crystal display and backlight, may create something that a human can look at, but the image appears to human eyes only temporarily before reverting to its immaterial machine form when the phone is put away or the display is turned off. However, the image does not need to be turned into human-readable form for a machine to do something with it. We can say we no longer look at images: images look at us. Moreover, they no longer simply represent things, but actively intervene in everyday life. Artists have to begin to understand these changes if they are to challenge the exceptional forms of power flowing through the invisible visual culture we find ourselves enmeshed within. In the computer, man has created not just an inanimate tool but an intellectual and actively creative partner that, when fully exploited, could be used to produce new art forms and possibly new aesthetic experiences.
To understand how novel forms of art practice can take advantage of computer vision and machine learning techniques, it is helpful to begin with an understanding of the kinds of problems that vision algorithms have been developed to address, and their basic mechanisms of operation. To better understand how computers "read" an image, we can use an analogy made by Vilém Flusser in his essay Line and Surface (1973) regarding the difference between the way we read a text and the way we look at a picture. As Flusser suggests, when reading written lines "we follow the text of a line from left to right; we jump from line to line from above to below; we turn the pages from left to right." The method changes when we look at a picture: "by passing our eyes over its surface in pathways vaguely suggested by the structure of the picture." [4] Analysing this comparison, we can point out that the difference lies in the method itself. We must follow a written text if we want to get at its message; with pictures, we may get the message first, and then try to decompose it. A digital camera takes a picture by converting light into a two-dimensional array of numbers known as pixels, assigning to each pixel a code representing a specific colour value. But these are just lifeless numbers; they do not carry meaning in themselves. Basically, computers are visually blind. Unlike text, digital image data in its basic form — stored solely as a stream of rectangular pixel buffers — contains no intrinsic semantic or symbolic information. As a result, a computer, without additional programming, is unable to answer even the most elementary questions about whether an image contains a person or an object.
[4] http://www.flusserbrasil.com/arte87.pdf
The discipline of computer vision has developed to address this need. Many low-level computer vision algorithms are geared to the task of distinguishing which pixels, if any, belong to people or other objects of interest in the scene. With the gradual incorporation of digitised and digital images into our visual culture, the demand for straightforward computer vision capabilities has grown as well. Computer vision algorithms are increasingly used in interactive and other computer-based artworks to track people's activities. There are techniques that can create real-time reports about people's identities, locations, gestural movements, facial expressions, gait characteristics, gaze directions, and other traits. Although the implementation of some vision algorithms requires an advanced understanding of image processing and statistics, a number of widely used and highly effective techniques can be implemented easily.

Digital computers are constructed from myriad electronic components whose purpose is to switch extremely small electric currents nearly instantaneously. The innermost workings of the computer are controlled by a set of instructions called a program. Although computers must be explicitly instructed to perform each operation, higher-level programming languages enable the pyramiding of programming statements that are later expanded into basic computer instructions by special compiler programs. Computers are most certainly only machines, but they are capable of performing millions of operations in a fraction of a second and with incredible accuracy. Thus, computers can appear to show intelligence. They might assess the results of past actions and modify their programmed algorithms to improve previous results; we can say that computers could potentially be programmed to learn.
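To make the earlier point about pixels concrete, here is a minimal sketch (assuming Pillow and NumPy are installed; photo.jpg is a hypothetical file) showing that, to the machine, an image is nothing but an array of numbers:

```python
from PIL import Image
import numpy as np

# Load a photograph and convert it to a grid of RGB values.
# To the machine, this array IS the image: numbers, no meaning.
image = Image.open("photo.jpg").convert("RGB")
pixels = np.asarray(image)

print(pixels.shape)   # e.g. (480, 640, 3): height x width x colour channels
print(pixels[0, 0])   # the top-left pixel, e.g. [142  87  60]
```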
1.2 Introduction to machine learning

Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks, such as discovering proofs for mathematical theorems or playing chess with great proficiency. [5] Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition. Some say artificial intelligence is not the rise of the machine but the "machinification" of humans. [6]

Artificial intelligence (AI) is the ability of a digital computer or a computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalise, or learn from past experience.
[5] The first chess match between world chess champion Garry Kasparov and the IBM supercomputer Deep Blue took place in Philadelphia on February 10th, 1996. The 1997 rematch was the first defeat of a reigning world chess champion by a computer under tournament conditions.
[6] https://snips.ai/content/intro-to-ai/#voice-command
Machine learning (ML) is a core subarea of artificial intelligence concerned with the implementation of computer software able to learn autonomously. Using algorithms that iteratively learn from data, machine learning allows computers to find hidden insights without being explicitly programmed where to look. Moreover, machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. The emphasis of machine learning is on automatic methods: the goal is to devise learning algorithms that do the learning automatically, without human intervention or assistance. Machine learning algorithms can figure out how to perform important tasks by generalising from examples. Continuously evolving models produce increasingly accurate results, reducing the need for human interaction, and can be used to automatically produce reliable and repeatable decisions. In the past decade, machine learning has given us self-driving cars [7], practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Today, ML is so widely deployed that we probably use it every day without even knowing it. A great part of our daily activities are powered by self-learning algorithms. Examples include (see the sketch after this list):

- personalised content (Amazon, Netflix, Spotify)
- advanced web search results (Google search by image)
- real-time ads on web pages and mobile devices
- text-based sentiment analysis
- pattern and image recognition
- email spam filtering
[7] So-called self-driving cars are autonomous vehicles that rely on radar, GPS and computer vision to provide the main transport capabilities of a traditional car without human assistance.
To further emphasise the importance of this field in our everyday life, we can look at Gartner's Hype Cycle, a branded graphical presentation developed by the American information technology research and advisory firm Gartner to represent the maturity, adoption and social application of specific technologies. The Hype Cycle for Emerging Technologies report provides a cross-industry perspective on the technologies and trends to consider when developing emerging-technology portfolios. In mid-2016 the chart identified three key trends expected to have the highest priority, one of which features machine learning technologies. As Gartner suggests, smart machine technologies will be the most disruptive class of technologies over the next 10 years: radical computational power, near-endless amounts of data, and unprecedented advances in deep neural networks will allow organisations with smart machine technologies to harness data in order to adapt to new situations and to solve problems that no one has encountered previously.
FIGURE 1 | “GARTNER HYPE CYCLE FOR EMERGING TECHNOLOGIES” 2016
By experimenting with machine learning techniques, we can introduce new endeavours into the art fields, emphasising the capability of this technology to become a means of creative expression. Nevertheless, coder-artists should not seek legitimacy in technological features more than in the artistry of the production of content. As Domenico Quaranta admits, most new media art lacks any kind of content, personal statement or conceptual idea: "technology for technology's sake does not equal art." [8]
But how can we really comprehend how computers understand? First, we can compare machine learning with the way we teach infants to recognise things. In his book Fantasia, Bruno Munari explains how children see and think. [9] Many people believe children have a rich imagination because they see in their drawings things that seem strange and unreal. Actually, as Munari suggests, children simply project all that they know onto the things they do not yet know for sure. For a child to become a creative person, we have to make sure the child memorises all the information he can receive, so that he can make more possible relations and connections between things. Animals and humans can learn to see, perceive, act, and communicate with an efficiency that no machine learning method can approach. The brains of humans and animals are "deep", in the sense that each action is the result of a long chain of synaptic communications (many layers of processing). As humans, we try to frame everything we see, hear and learn within the context of what we already know, and we build on top of that.
[8] Quote by Michael Importico, as cited in Domenico Quaranta, Beyond New Media Art, LINK Editions, Brescia 2013, p. 227.
[9] Bruno Munari, Fantasia, Universale Laterza, 1977.
This can be purely visual, like seeing faces in clouds. Or it can be more critical, as it affects how we learn, make decisions, construct theories or develop prejudices based on the limited knowledge that we have. If we don't have sufficient information, the assumptions we make are likely to be incorrect, as are the decisions we make as a result of them. Machine learning can be seen as an extension of how we make sense of the world.
FIGURE 2 | SHINSEUNGBACK KIMYONGHUN - CLOUDFACES (2012)
An Artificial Neural Network (ANN) is a computational model inspired by the way biological neural networks in the human brain process information; its design approximates the brain's architecture. Artificial neural networks have generated a lot of excitement in machine learning research and industry, thanks to many breakthrough results in speech recognition, computer vision and text processing.
The majority of practical machine learning uses supervised learning. It is called supervised learning because the process of an algorithm learning from the training dataset can be thought of as a teacher supervising the learning process. We know the correct answers; the algorithm iteratively makes predictions on the training data and is corrected by the teacher. Learning stops when the algorithm achieves an acceptable level of performance. In unsupervised learning, by contrast, there are no correct answers and no teacher: algorithms are left to their own devices to discover and present the interesting structure in the data. When an artificial neural network receives an input such as an image, it tries to make sense of it based on what it already knows. The image data flows through the network, "activating" neurons. A neuron, often called a node or unit, is the basic unit of computation in a neural network: it receives input from some other nodes, or from an external source, and computes an output. Effectively the image is ripped apart and scanned for features that the network recognises.
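A minimal sketch of that basic unit of computation, in NumPy (the weights, bias and inputs are arbitrary illustrative values):

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single node: a weighted sum of its inputs passed through
    a nonlinear activation function (here, a sigmoid)."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))  # activation squashed between 0 and 1

# Three incoming signals and the strength of each connection.
x = np.array([0.5, 0.1, 0.9])
w = np.array([0.4, -0.6, 0.2])

print(neuron(x, w, bias=0.1))  # the output, passed on to the next layer's nodes
```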
FIGURE 3 | STRUCTURE OF AN ARTIFICIAL NEURAL NETWORK
Data flows in one side of this network (via input nodes), gets processed along the way, and comes out as output on the other side (via output nodes). Basically, a neural network processes images somewhat like humans do. When we look at a dog, we don't just see "dog". Raw pixel data travels from our eye to our brain, where it passes through layer after layer of processing. At the most basic layers, neurons pick out edges, lines, movement, dark and light. Those simple features are passed on to the next layers, where they might be assembled into slightly more complex ones, like texture, hair, skin, and basic shapes. Those layers feed information deeper into the brain until eventually you've assembled a "dog". A deep convolutional neural network (CNN or ConvNet) works in much the same way. "Deep" just means it has many layers; we can think of it as a whole series of neural networks, layered one after the other. A deep neural network might have 10 to 20 hidden layers, whereas a typical neural network may have only a few. The more layers in the network, the more characteristics it can recognise; unfortunately, the more layers, the longer it takes to compute and the harder it is to train. Convolutional neural networks are designed to recognise visual patterns directly from pixel images with minimal preprocessing. They can recognise patterns with extreme variability (such as handwritten characters), and with robustness to distortions and simple geometric transformations. All of this is self-learned: a neural network isn't told to look for these recurring features, it discovers them by processing vast quantities of data. To take advantage of this kind of learning, huge datasets to train on, lots of processing power, and weeks of expert tweaking are needed.
The results are much more robust, though, and deep learning systems are incredibly good at spotting patterns and motifs in data, whether it's numbers, image pixels or musical notes. As an example of a deep learning application, we can take image recognition. Since living organisms process images with their visual cortex, many researchers have taken the architecture of the visual cortex as a model for neural networks designed to perform image recognition; the underlying biological research goes back to the 1950s. LeNet was one of the very first convolutional neural networks, and it helped propel the field of deep learning. This pioneering work by Yann LeCun was named LeNet after many successful iterations leading up to 1998. At the time, the LeNet architecture was used mainly for character recognition tasks such as reading zip codes and digits. In essence, LeNet consists of layers, all of which contain trainable parameters (weights). As input, LeNet uses a large collection of 32x32 pixel images.
FIGURE 4 | THE LENET CONVOLUTIONAL NEURAL NETWORK
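As a rough sketch of such an architecture, here is a LeNet-style network in PyTorch; the layer sizes follow the classic LeNet-5 description and are illustrative, not taken from this thesis's sources:

```python
import torch
import torch.nn as nn

class LeNet(nn.Module):
    """A LeNet-style convolutional network for 32x32 grayscale digit images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 input -> 6 feature maps of 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # downsample to 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # -> 16 feature maps of 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # downsample to 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),       # one score per digit, 0-9
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A batch of one 32x32 grayscale image produces 10 class scores.
scores = LeNet()(torch.randn(1, 1, 32, 32))
print(scores.shape)  # torch.Size([1, 10])
```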
To be more specific, the database used to train and test the network consisted of 9298 segmented numerals digitised from handwritten zip codes that appeared on U.S. mail passing through the Buffalo, NY post office. The digits were written by many different people using a great variety of sizes, writing styles and instruments, with widely varying amounts of care. Basically, to recognise an image, a convolutional neural network "applies" a series of feature extraction filters (such as blur, edge detection, sharpness) and learns the values of these filters on its own during the training process. The more filters we have, the more image features get extracted and the better the network becomes at recognising patterns in unseen images.

The landscape of machine learning is becoming ever more active in the visual world, where its continued expansion is starting to have profound effects on human creativity. If we want to understand the invisible world of machine imagination, we need to unlearn how to see like humans. We need to learn how to see a parallel universe composed of pixels, neurons, classifiers and training sets.
1.3 How ML started touching art fields
Blaise Aguera y Arcas: "As with art in any medium, some of it undoubtedly will be kitsch — we have already seen examples — but some will be beautiful, provocative, frightening, enthralling, unsettling, revelatory, and everything else that good art can be." [10]

From its very beginnings, the field of artificial intelligence saw an explosion of research and speculation on the possibility of creating artificially intelligent machines. The goals of these applications are varied: some researchers imagine fully sentient beings capable of anything we might define as intelligent, while others are concerned with specific activities such as game playing, planning, pattern finding and social interaction. Since the first days of AI, researchers have searched for ways to imbue computers with the spark of creativity. In contrast to systems focused on reasoning and deduction, these projects explore strategies for creation. Computers being involved in the creation of art is not a new development; to some extent, we are all familiar with computerised art. Software used to create or manipulate art is ubiquitous, but these are mere tools for a human artist. Curiosity has pushed visual artists, coders and musicians alike to play with artificial intelligence since its first endeavours in various scientific applications. Among the most famous examples of artificially generated art (both abstract and representational) is Simon Colton's "The Painting Fool", inspired by the pioneering work of Harold Cohen and his computer program AARON. [11]
[10] https://medium.com/artists-and-machine-intelligence/what-is-ami-ccd936394a83#.v92gsxb32
[11] AARON was the first automatic system able to paint without any human assistance. It is considered the first AI artist.
"The Painting Fool" describes itself as "a computer program and an aspiring painter." [12] As its inventor, computer scientist and Imperial College London professor Simon Colton puts it, "The goal of the project is not to produce software that can make photos look like they've been painted; Photoshop has done that for years. The goal is to see whether software can be accepted as creative in its own right." [13] In fact, Colton developed this piece of software to create artworks on its own, seeking artistic inspiration online and essentially drawing on human experience. As Colton says, "it will wake up in the morning and look at the newspaper headlines". For example, the software created a painting about the war in Afghanistan by reading an article published in The Guardian, extracting keywords and searching for images of those words as inspiration. The result is an eerie collage composed of a fighter plane with an explosion, a family, an Afghan girl, and a field of war graves. As the algorithms behind it grew more complex (using machine vision methods to tell whether it had achieved an image appropriate to its mood, and machine learning techniques to learn to do better in the future), by 2011 The Painting Fool began displaying an imagination of its own, creating pictures from scratch by inventing visual objects and scenes that don't exist in reality. For "The Dancing Salesman Problem", no digital photograph was used in the construction of the piece, which is so named because a solution to an instance of the travelling salesman problem was used to generate the brush strokes. [14]
[12] http://www.thepaintingfool.com/
[13] http://www.wired.co.uk/article/can-computers-be-creative
[14] The Travelling Salesman Problem is a classic computer science problem in which a travelling salesman has to drive from town to town without returning to one previously visited.
FIGURE 5 | PAINTING FOOL'S INTERPRETATION OF THE WAR IN AFGHANISTAN
FIGURE 6 | THE DANCING SALESMAN PROBLEM
We can say that while we humans work, play and rest, our machines are ceaselessly reinterpreting old data and even spitting out all sorts of new, weird material. Perhaps the best example in this case is Google's DeepDream. In 2015, software engineers Alexander Mordvintsev, Christopher Olah and Mike Tyka published a blog post describing the idea behind the project: to test the extent to which a neural network had learned to recognise various animals and landscapes by asking the computer to describe what it saw. [15] In other words, the aim of their research was to peek inside the networks behind image classification and recognition to better understand their functionality, layer by layer. So instead of just showing a computer a picture of a tree and saying "tell me what this is," engineers would show the computer an image and say "enhance whatever it is you see". Where before there was an empty landscape, DeepDream creates pagodas, cars, bridges, human body parts and, especially, hallucinated animals (for example the famous "puppyslugs" that went viral in 2015, a result of the algorithm being trained on ImageNet [16], which at the time was being updated with dozens of images of dog breeds). As the Google engineers found out, DeepDream also generated some beautiful art, so they decided to publish their techniques and make their code open source. As a result, a number of tools in the form of web services, mobile applications and desktop software appeared on the market, enabling users to transform their own photos.
[15] https://web.archive.org/web/20150708233542/http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html
[16] The ImageNet project is a large visual database designed for use in visual object recognition software research. As of 2016, ImageNet included more than 10 million URLs of images organised into various categories.
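The core of the technique can be sketched as gradient ascent on the input image: instead of adjusting the network's weights, the pixels themselves are adjusted to amplify whatever a chosen layer already responds to. A minimal PyTorch sketch (not Google's code; the model, layer index and step size are illustrative choices):

```python
import torch
import torchvision.models as models

# A pretrained classifier; only its convolutional layers are used.
vgg = models.vgg16(pretrained=True).features.eval()
layer = 20  # which layer's activations to amplify (an arbitrary choice)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from any image

for _ in range(30):
    activations = image
    for i, module in enumerate(vgg):
        activations = module(activations)
        if i == layer:
            break
    loss = activations.norm()  # "enhance whatever it is you see"
    loss.backward()
    with torch.no_grad():
        # Gradient ascent step: change pixels to increase the activations.
        image += 0.01 * image.grad / image.grad.abs().mean()
        image.grad.zero_()
        image.clamp_(0, 1)
```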
FIGURE 7 | DEEPDREAM EXAMPLES
This new genre of algorithmically generated imagery requires no criticality or particular intellectual effort to digest, nor does it provide much reward in return: DeepDream is our own visual culture fed back to us. Following the blog post, and especially after the release of the source code, there was tremendous interest not only from the machine learning community but also from the creative coding community. Several artists, such as Memo Akten, Kyle McDonald and Gene Kogan, immediately started experimenting with the technique as a new way to create art beyond the kitsch typecast of puppyslugs and acid-trip imagery. As artist J.T. Nimoy suggests in one of his blog posts, "It is the job of the artist to select a meaningful guide image, whose relationship to the training set is of interesting cultural significance. Without that curated relationship, all you have is a good old computational acid trip." [17]
[17] http://jtnimoy.com/blogs/projects/50616707-deepdream-avoiding-kitsch
FIGURE 8 | J.T. NIMOY
FIGURE 9 | J.T. NIMOY
II — CASE STUDIES AND ART APPLICATIONS

2.1 Style transfer and going mainstream

In August 2015, Leon A. Gatys, Alexander S. Ecker and Matthias Bethge from the University of Tübingen in Germany published a paper called A Neural Algorithm of Artistic Style, introducing a technique that allowed the creation of new images combining the style of one image with the content of another, through the use of convolutional neural networks. [18] By separating the representations of content and style, they found they could manipulate the outcome given two different source images. Their first example generated a mix of the content of a photograph shot in Tübingen with the style of several well-known artworks. Effectively, this renders the photograph in the style of the artwork, such that the appearance of the synthesised image resembles the work of art, even though it shows the same content as the photograph. This method would later be known as Style Transfer. While the global arrangement of the original photograph is preserved, the colours and local structures that compose the global scenery are provided by the artwork. Once again this discovery took the creative coding community by storm, and many artists and coders immediately began experimenting with the new algorithm, resulting in Twitter bots and other explorations and experiments. Smartphone apps like Prisma and various social-networking plugins opened the research to mass usage. A more recent line of work extends the method to video style transfer, with improved frame-to-frame stability and even (near) real-time style transfer on a webcam feed.
[18] https://arxiv.org/abs/1508.06576
As Gatys, Ecker and Bethge suggest, Style Transfer is a fascinating new method of study that can provide new stimuli for "better understanding the perception and neural representation of art, style and content-independent image appearance in general". Moreover, Style Transfer can be useful for exploring why humans have the ability to abstract content from style, and therefore to enjoy art as we know it.
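In the Gatys et al. formulation, the "style" of an image is captured by Gram matrices: the correlations between a layer's feature maps, computed on features from a pretrained CNN. A minimal PyTorch sketch of the two losses (the layer choices and loss weights of a full implementation are omitted here):

```python
import torch

def gram_matrix(features):
    """Style representation: correlations between a layer's feature maps.
    `features` has shape (channels, height, width), as produced by a CNN layer."""
    c, h, w = features.shape
    flat = features.view(c, h * w)        # each row: one feature map, flattened
    return flat @ flat.t() / (c * h * w)  # channel-by-channel correlations

def style_loss(generated, artwork):
    """How far the generated image's texture statistics are from the artwork's."""
    return torch.nn.functional.mse_loss(gram_matrix(generated), gram_matrix(artwork))

def content_loss(generated, photo):
    """How far the generated image's content is from the photograph's:
    compared directly on raw feature maps, not on Gram matrices."""
    return torch.nn.functional.mse_loss(generated, photo)

# Optimisation then adjusts the synthesised image to minimise a weighted
# sum of the two losses, evaluated on features from a pretrained network.
```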
FIGURE 10 | FIRST EXAMPLES OF STYLE TRANSFER
2.2 Categorisation via Google Arts & Culture

Style recognition has also been used to attribute artworks to the period in which they were created. To this end, the Google Cultural Institute developed a platform for curating and organising artworks. For this project, called Google Arts & Culture, the Google Cultural Institute has secured millions of high-quality images of artworks and artefacts from hundreds of partner museums around the world. An interesting aspect of Google Arts & Culture is that the platform hosts various machine-learning-based experiments. We can say that these experiments straddle the line between being interfaces for organising existing cultural objects and brand new artworks in and of themselves. For example, "X Degrees of Separation" [19] lets the visitor explore the collection through chains of visually similar artworks, and the result looks like a logical evolution. The connections are conjured out of a machine's understanding of artistic similarities, so half the pleasure lies in the unexpected leaps this produces.
FIGURE 11 | X DEGREES OF SEPARATION
[19] The title refers to the "six degrees of separation" theory, by which all living things and everything else in the world are six or fewer steps away from each other, so that a chain of "a friend of a friend" statements can be made to connect any two people in a maximum of six steps.
The platform also provides a visualisation of all of its image clusters, organised according to different criteria of association, meaning how the computer has decided to sort and cluster artworks in relation to one another, based on its understanding of aesthetic similarities. Zooming in on the terrain, you discover that the points are actually images of artworks.
FIGURE 12 | FREE FALL
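Maps like this are typically built by embedding each artwork as a high-dimensional feature vector and projecting the vectors onto a plane so that similar images land near each other. A minimal sketch of the idea (assuming scikit-learn; the random vectors below are placeholders standing in for CNN embeddings of real artworks):

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder: one 512-dimensional feature vector per artwork,
# as would be produced by a pretrained CNN.
rng = np.random.default_rng(0)
artwork_features = rng.normal(size=(1000, 512))

# Project the high-dimensional features onto a 2D "terrain":
# aesthetically similar artworks end up as neighbouring points.
coordinates = TSNE(n_components=2, perplexity=30).fit_transform(artwork_features)
print(coordinates.shape)  # (1000, 2): an (x, y) position for each artwork
```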
By using machine learning, Google has gone far beyond the "Image Search" option its search engine provides, organising and categorising millions of artworks. The diversity is striking, and in many ways these are truly dazzling achievements. The pitch is to make art engaging to people on the web. Many could argue that artworks should be enjoyed in a museum; even Amit Sood, the Google Cultural Institute's director, has stated: "You can never replicate the experience of seeing a work of art online. I still prefer seeing van Gogh's The Starry Night in person". [20] Even so, we must understand that cultural accessibility in the virtual era should be put to work for the sake of education.
[20] https://www.theguardian.com/culture-professionals-network/culture-professionals-blog/2013/dec/03/amit-sood-google-cultural-institute-art-project
2.3 Machine learning vs art historians

When examining a painting, an art expert can usually determine its style, its genre, the artist and the period to which it belongs. Art historians often go further, looking for influences and connections between artists, a task that is even more complex. The possibility that a computer might be able to classify paintings and find connections between them seems, at first glance, laughable. A group of researchers at Rutgers University in New Jersey, led by Babak Saleh, took the chance and used some of the latest image processing and classification techniques to automate the process of discovering how great artists have influenced each other. [21] They have even been able to uncover influences between artists that art historians had never recognised before. Art experts approach this problem by comparing artworks according to a number of high-level concepts, such as the artist's use of space, texture, form, shape, colour and so on. Experts may also consider the way the artist uses movement in the picture, harmony, variety, balance, contrast, proportion and pattern. Other important elements can include the subject matter, brushstrokes, meaning and historical context. The method used by the researchers was developed at Dartmouth College in New Hampshire and Microsoft Research in Cambridge, UK, and classifies pictures according to the visual concepts that they contain. Comparing images then becomes a process of comparing the words that describe them, for which there are a number of well-established techniques.
[21] https://it.mathworks.com/company/newsletters/articles/creating-computer-vision-and-machine-learning-algorithms-that-can-analyze-works-of-art.html
Saleh and colleagues applied this approach to over 1700 paintings by 66 artists working in 13 different styles, covering the period from the early 15th century to the late 20th century. To create a ground truth against which to measure their results, they also introduced expert opinions on the matter. In many cases, their algorithm clearly identifies influences that art experts have already found. As the researchers report, the algorithm correctly identified 60% of the 55 influences recognised by art historians, suggesting that visual similarity alone provides sufficient information for algorithms (and possibly for humans) to determine many influences. Of course, this does not mean that such an algorithm can take the place of an art historian. After all, the discovery of a link between paintings in this way is just the starting point for further research about an artist's life and work. But it is a fascinating insight into how machine learning techniques can throw new light on a subject as grand and well studied as the history of art.
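The comparison itself can be pictured with a small sketch: each painting becomes a vector of concept scores, and candidate influence links are ranked by similarity. The scores below are invented for illustration; the actual study used learned classifiers over thousands of visual concepts:

```python
import numpy as np

# Each painting described by scores for a few high-level visual concepts,
# e.g. [portrait, landscape, dark palette, loose brushwork].
paintings = {
    "Painting A": np.array([0.9, 0.1, 0.8, 0.7]),
    "Painting B": np.array([0.8, 0.2, 0.9, 0.6]),
    "Painting C": np.array([0.1, 0.9, 0.2, 0.3]),
}

def similarity(a, b):
    """Cosine similarity: 1.0 means identical concept profiles."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# A and B share a profile, so they are candidates for an influence link; C is not.
print(similarity(paintings["Painting A"], paintings["Painting B"]))  # ~0.99
print(similarity(paintings["Painting A"], paintings["Painting C"]))  # ~0.40
```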
2.4 Recognition: past and present face to face

Many art museums have started to ask how to get people interested in art, and how technology can help in this quest. Recently, Tate Britain organised the IK Prize 2016, a competition promoting the use of digital technology in the exploration of art at Tate Britain or on the Tate website. AI was chosen as the theme because, as Tate multimedia producer Tony Guillan says, "getting machines to do what humans can do is one of the most exciting frontiers in technology." [22] The winning entry, Recognition, came from Fabrica, a communication research centre in Treviso, Italy. It is a website featuring a program that continuously screens about 1,000 news photographs a day, supplied by Reuters, and tries to match them with 30,000 British artworks in the Tate's database, based on similarities in faces, objects, themes and composition. With Recognition, the team managed to link our everyday lives to art collections, illuminating similarities between the present and the past: to some degree, art history seems to repeat itself in contemporary photojournalism. Besides being a creative test of computer vision capabilities such as object recognition, facial recognition and composition analysis, the program's matches make the old seem new again, and vice versa. In doing so, the program provokes questions by drawing connections between cultures and time periods that, on the surface, seem alien to one another.
[22] https://www.nytimes.com/2016/10/30/arts/design/artificial-intelligence-as-a-bridge-for-art-and-reality.html
FIGURE 13 | EXAMPLES TAKEN FROM RECOGNITION WEBSITE
2.5 Mario Klingemann's joy of order

An interesting aspect of machine learning is the poetic beauty of classification. Artist and creative coder Mario Klingemann, alias Quasimondo, has been doing fascinating work using images from the British Library's one-million-image collection on Flickr. [23] At first, the problem with the collection was that it lacked any kind of order. The only way to find interesting images was to browse the collection either randomly or linearly, since the only useful metadata available were the title of the book, the publishing year and the author. This motivated Klingemann to use machine learning techniques to add meaningful tags to the images and to start creating thematic collections. He has also created various artworks with the found images, many of which are based on the principle of order. By training and manipulating algorithms to pick out faces and analyse colour, he is able to make his own beautiful discoveries. Beyond the order itself, Klingemann points out that "when you start building up a sorting system, whenever you find something that fits that system you get happy". Basically, Klingemann starts with a single category of images and, using clustering algorithms, splits it into smaller categories based on different tags and themes. What is interesting about this process is that by putting similar elements together we can actually enjoy the beauty of every single one. The key factor in Klingemann's work is in fact the drive to overcome limitations by creatively repurposing and recombining objects and systems to reveal their hidden qualities.
[23] http://labs.bl.uk/The+Order+of+Things
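A minimal sketch of that kind of clustering step (assuming scikit-learn; the feature vectors are placeholders for whatever image descriptors, such as colour histograms or face embeddings, a tagging stage produced):

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder descriptors: one vector per image in an unordered collection.
rng = np.random.default_rng(1)
image_features = rng.normal(size=(500, 64))

# Split one unordered pile of images into 8 smaller, more coherent groups.
kmeans = KMeans(n_clusters=8, n_init=10).fit(image_features)

# Images sharing a label now form a candidate thematic sub-collection.
print(np.bincount(kmeans.labels_))  # how many images landed in each cluster
```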
FIGURE 14 | M. KLINGEMANN
FIGURE 15 | M. KLINGEMANN - FOURTYFOUR GENTLEMEN WHO LOOK LIKE FOURTYFOUR
2.6 Other examples: Microsoft's Cognitive Services

Microsoft is also present in the AI field with its Cognitive Services APIs [24], giving developers a way to add features like speech recognition, language understanding, sentiment detection and more to their applications. Previously named Project Oxford, this suite of APIs aims to democratise the use of machine intelligence by third-party apps (for example, Uber [25] uses the Face API to make its service more secure, helping to ensure the driver using the app matches the account on file). Currently, Microsoft offers 25 APIs across 5 AI categories (vision, speech, language, knowledge, and search). One of the great things about the suite as it stands today is that you don't need to be a developer to try these services out: many of the pages for the individual services let you trial them directly. One of the most intriguing offerings is the Emotion API, which detects emotions in photographed faces using machine learning techniques. When you upload a picture to the site, the software scans the subject's face and attempts to read the feelings behind their expression. It then presents a scorecard for each subject, ranking them on eight emotions: anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise.
[24] Application Programming Interfaces are sets of functions and procedures that allow the creation of applications which access the features or data of an operating system, application, or other service.
[25] Uber is a San Francisco-based service that provides private automobile transport through an app that directly connects drivers and passengers. The service operates in more than 66 countries and 508 cities worldwide.
A little experiment from 2015 ran famous paintings through the API, detecting emotions in artworks such as Leonardo's Mona Lisa and Vermeer's Girl with a Pearl Earring. Art historians and critics have spent countless hours analysing the emotions of famous portrait subjects, so it is rather interesting to retrieve these parameters in a scientific way, in some sense demystifying centuries of ambiguity. Of course, this is just an attempt, without any further proof; feelings and emotions have always been a matter of subjectivity, and the site itself offers a disclaimer: "Recognition is experimental, and not always accurate." [26] The process of reading feelings in faces has long been considered an intuitive rather than a scientific one. Nonetheless, it is interesting to see emotions represented as scores and percentages. To some extent, this operation can be compared to Marcel Duchamp's act of painting a moustache onto the Mona Lisa in L.H.O.O.Q. (1919).
FIGURE 16 | EMOTION API
FIGURE 17 | M. DUCHAMP - L.H.O.O.Q.

[26] https://www.microsoft.com/cognitive-services/en-us/documentation
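Calling such a service amounts to posting image bytes to a REST endpoint and reading back JSON scores. A minimal sketch using Python's requests library; the endpoint shown is the historical Emotion API v1.0 address, and the key and file name are placeholders, so treat these details as assumptions rather than current documentation:

```python
import requests

# Placeholder credentials and the historical v1.0 endpoint (an assumption;
# consult current Microsoft documentation for the live service).
endpoint = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"
headers = {
    "Ocp-Apim-Subscription-Key": "YOUR_KEY_HERE",
    "Content-Type": "application/octet-stream",
}

with open("portrait.jpg", "rb") as f:  # hypothetical local image
    response = requests.post(endpoint, headers=headers, data=f.read())

# One entry per detected face: a bounding box plus eight emotion scores.
for face in response.json():
    print(face["scores"])  # e.g. {"happiness": 0.92, "neutral": 0.05, ...}
```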
Another interesting way to explore how machines perceive art is to upload paintings to the Computer Vision API and look at the descriptions the algorithm returns. The algorithm has been trained on several thousand recognisable objects, living beings, scenes and actions, and the results are rather captivating. Furthermore, comparing the uploaded painting with an image retrieved from Google using the generated description makes the analogy even more poetic. The most fascinating examples are the ones that use abstract paintings as inputs.
FIGURE 18 | W. KANDINSKY - SKY BLUE (1940)
FIGURE 19 | A GROUP OF PEOPLE FLYING COLOURFUL KITES
FIGURE 20 | W. KANDINSKY - COMPOSITION VIII (1923)
FIGURE 21 | A ROW OF MOTORCYCLES (CONFIDENCE: 0.0217)
2.7 The use of data as creative medium

Kyle McDonald: "As an artist, I feel a responsibility to explore creative possibilities for algorithms and systems traditionally used for control." [27]

New technologies have had a significant impact on artistic practice, and art has a duty to explore this potential. Nicolas Bourriaud offers some rather interesting reflections on this point, seeing art as post-production and the artist as a multidisciplinary figure, able to select from reality objects with cultural value and place them in new settings. In Postproduction he questions the need for new artistic actions: "The artistic question is no longer 'what can we make that is new?' but 'how can we make do with what we have?' In other words, how can we produce singularity and meaning from this chaotic mass of objects, names, and references that constitutes our daily life?" [28] The artists of this generation are experiencing the creation of the imagery of the internet from the inside: the ever-expanding mass of amateur photography and low-res videos, but also postcards, greeting cards, little animations and artefacts of all kinds, produced through an ingenuous use of the standard tools and effects of the multimedia production studio at our fingertips. Many have already found in pre-existing data a suitable substitute for the blank canvas, recognising its artistic potential. From Google's Arts & Culture platform to Klingemann's classifications, the starting points of these artistic outputs are image data clusters ranging from hundreds to millions of items.
[27] https://twitter.com/kcimc/status/798874169496715266
[28] Nicolas Bourriaud, Postproduction: Culture as Screenplay: How Art Reprograms the World, New York: Lukas & Sternberg, 2002.
To some extent, we can say data has become a dignified material that coders and artists alike can use to create. In the digital era, the unlimited amount of online data can be regarded as a source full of possibilities. As we know, data is king in machine learning. By the time you read this sentence, approximately 4,000 new photos will have been added to Instagram, 5,600 new posts published on Tumblr and nearly 555,000 new videos uploaded to YouTube. [29] Data volumes are exploding: more data has been created in the past few years than in the entire previous history of the human race, yet at the moment only a small fraction of all data is ever analysed and used (which is remarkable, as new technologies and research programs including autonomous vehicles, machine learning and the Internet of Things [30] all produce and rely on large quantities of data). As the Free On-Line Dictionary Of Computing (FOLDOC) definition puts it, data on its own has no meaning; only when interpreted by some kind of data processing system does it take on meaning and become information. As big data [31] becomes the new normal, a new artistic behaviour emerges. We live in a global digital culture in which the materials and techniques of new media are widely available and accessible to a growing proportion of the population. Millions and millions of people around the world participate in social media, and have the ability to produce and share with millions of other people their own texts, images, sound recordings, videos and GPS traces.
[29] http://www.internetlivestats.com/one-second/
[30] The Internet of Things is a neologism referring to the extension of the internet to the world of physical objects and places.
[31] Big data is the term used to describe a collection of data so large in volume, velocity and variety that it requires specific technologies and analytical methods for the extraction of value.
But how can artistic practice deal with this expanding cluster of images, videos, Facebook posts and tweets? The practice of using "found objects" is anything but new. Objects or products that were not normally considered art (often because they already had a non-art function) were being exhibited as art pieces long ago: to be exact, in 1917, when Marcel Duchamp made Fountain, a urinal submitted to the Society of Independent Artists exhibition at the Grand Central Palace in New York. He didn't "make" the object (it was a commercially manufactured urinal); he "made" the artwork. Fountain is art because Duchamp called it art. Its influence on art is undeniable, and it has been voted the most influential artwork of the 20th century. By decontextualising everyday objects and giving them new aesthetic meaning as ready-mades, Duchamp opened the path to new contextual operations and praxis. As Domenico Quaranta affirms [32], for at least the last fifteen years contemporary art has been the art of the information age. By using data sets and information that have no real meaning outside an artistic setting (beyond their statistical value), artists and researchers in the field of New Media can explore, through play, the potential that lies within the everyday flow of data.
[32] Domenico Quaranta, Beyond New Media Art, LINK Editions, Brescia 2013, p. 17.
III — CONCLUSIONS
3.1 Why should artists consider ML?
Art tends to give shape and weight to the most invisible processes. Beyond pure execution and illustration, artistic practice is a combination of style and influences, mixed in new, surprising and innovative ways. Artistic activity has always been a game whose forms, patterns and functions develop and evolve according to periods and social contexts. The first photograph was taken in 1826. It was crude, and took several days of exposure to achieve a poorly composed and grainy image of a roof. How could such a technology ever rival the vivid human energy captured by a master painter? But here we are, nearly two hundred years later: portraiture is dead. Glass lenses and digital hard drives capture the human visage in bits and bytes, by the millions every day. Today we face yet another radical change to our visual culture, this time conditioned by machine intelligence.
FIGURE 23 | NICEPHORE NIEPCE - VIEW FROM THE WINDOW AT LE GRAS (1826)
In Art of the Digital Age, Bruce Wands depicts the digital artist as "someone equipped with technological skills and a good dose of technological curiosity; often a programmer, used to working in collaboration with other programmers and IT engineers". [33] Attracted to new technologies, the new media artist views art in terms of research and experimentation, and ventures off the beaten tracks of established languages and forms. Domenico Quaranta adds that art in the era of new technologies "appears to have entirely overcome the romantic conception of the artist as genius, and seems to be more interested in returning to the Renaissance models of artist as artisan and artist as scientist". [34] As we have already seen, Nicolas Bourriaud stresses that artists today program forms more than they compose them. The advance of technology and today's freedom of choice put the artist in front of a vast variety of digital mediums, open-sourced or not. The problem lies not in the practical realisation but rather in the ideation of a project that can mirror society and resonate with today's dynamism. With machine learning as a field moving forward at a breakneck pace and rapidly becoming part of many, if not most, online services, the opportunities for artistic uses are as wide as they are unexplored and perhaps overlooked. However, interest is growing rapidly: the University of London offers a course on machine learning and art, while NYU ITP offers a similar program. Ars Electronica's 2017 topic will be AI, and, as we saw, the Tate's IK Prize 2016 topic was AI as well. A recent report from the McKinsey Global Institute asserts that machine learning will be the driver of the next big wave of innovation.
[33] Quoted in Domenico Quaranta, Beyond New Media Art, LINK Editions, Brescia 2013, p. 99.
[34] Domenico Quaranta, Beyond New Media Art, LINK Editions, Brescia 2013, p. 100.
3.2 Is the artistic figure still relevant?
Marshall McLuhan: "First we shape our tools, thereafter they shape us." [35]
We are the only species that performs sophisticated creative acts regularly. If we can break this process down into computer code, what does that leave to the human counterpart? We have come to accept that computers are, in some ways, smarter than humans, or at least more powerfully logical. We have a harder time entertaining the question of whether machines could ever be as creative as humans.

In 2015, teams from the Dutch museums Mauritshuis and Rembrandthuis, alongside Microsoft, ING and the Delft University of Technology, developed a deep-learning algorithm trained on Rembrandt's 346 known paintings, which was then asked to produce a brand-new one replicating the artist's subject matter and style. Creating a faithful replication of a Rembrandt painting required huge amounts of data, and the team described the project as a "marriage" between technology and art. Dubbed The Next Rembrandt, the result is a portrait of a Caucasian male, a recurrent subject in the artist's body of work. Since its official unveiling in Amsterdam, it has become a controversial flash point between the worlds of technology and creativity, raising uncomfortable questions about the future of artificial intelligence and art. Imagination is supposed to be our exclusive province, the spark that makes us special, the thing computers could never dream of mastering. The Next Rembrandt questions that, much to the glee of many technologists and the
[35] Marshall McLuhan, The Medium is the Massage: An Inventory of Effects, with Quentin Fiore, produced by Jerome Agel. 1st ed.: Random House, 1967; audiobook version.
consternation of many art historians. Beyond the layered, unresolved arguments about the nature of creativity, there lies great potential for us to appreciate the artistic collaboration that is possible between humans and computers. In this process, the machine acquires a creative role thanks to the algorithms controlling parts of the artistic process. Overall, however, the control and direction of this kind of process remain without any doubt in the hands of the artist. Even though the interaction between artist and machine differs from the traditional forms of interaction (the painter and his brushes, the sculptor and his tools), the artist's role as guide and curator will remain an important part of the artwork. Moreover, his education and the social context will enable him to direct this experience even better. There is no doubt that traditional artistic mediums will not be discarded, but they will be conditioned and influenced by the "thinking" machines.

Doubters will surely have a myriad of objections. For example, aren't these algorithms simply visual filters on existing content? Others will maintain that teaching an algorithm to "make" art is not the same as creating it. But artists take inspiration from various sources and inputs, making new actions from them. So is it really that different when a neural network is given thousands of examples of artworks and creates something "original" from what it has learned? This question invites us to rethink art as something generated by, and then consumed by, hybrid entities. As Vilém Flusser puts it, "tools are extensions of human organs: extended teeth, fingers, hands, arms, legs." [36] Preindustrial tools, like paintbrushes or pickaxes, extend the biomechanics of the human body, while more sophisticated machines extend
36
Vilém Flusser. Towards A Philosophy of Photography, trans. Anthony Mathews, London: Reaktion Books, 2000. p.23-24
48
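Since the "visual filter" objection recurs so often, the actual mechanics are worth a closer look. What follows is a heavily simplified sketch of the style representation from Gatys, Ecker and Bethge's A Neural Algorithm of Artistic Style (listed in the bibliography): an image is optimised until the correlations between its feature maps (Gram matrices) match those of a style source. The layer indices, image sizes and step count are illustrative assumptions, a random tensor stands in for a real painting, and the weights argument assumes a recent version of torchvision.

    # A simplified sketch of the Gatys et al. style representation: the image
    # itself is optimised against learned feature statistics, so this is not
    # a fixed pixel filter. Layer indices and sizes are illustrative choices.
    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg19

    vgg = vgg19(weights="IMAGENET1K_V1").features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)    # the network is fixed; only the image moves

    def gram(x):                   # x: (1, channels, height, width)
        _, c, h, w = x.shape
        f = x.view(c, h * w)
        return (f @ f.t()) / (c * h * w)   # correlations between feature maps

    def style_grams(image, layers=(0, 5, 10, 19)):  # layer picks: assumptions
        grams, x = [], image
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layers:
                grams.append(gram(x))
        return grams

    style_image = torch.rand(1, 3, 224, 224)   # stand-in for a real painting
    targets = style_grams(style_image)

    canvas = torch.rand(1, 3, 224, 224, requires_grad=True)
    optimiser = torch.optim.Adam([canvas], lr=0.02)

    for step in range(200):
        loss = sum(F.mse_loss(g, t)
                   for g, t in zip(style_grams(canvas), targets))
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()           # the pixels of the canvas are what change

The point is that the "filter" is not a fixed recipe: the result emerges from an optimisation over statistics the network has learned, and the human choices of source images, layers and weightings remain decisive.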
In Flusser's terms, then, all apparatuses, not just computers, are artificial intelligences, the camera included. Marshall McLuhan, too, says that all media are extensions of some human faculty, psychic or physical: "The wheel is an extension of the foot. The book is an extension of the eye. Clothing, an extension of the skin, electric circuitry, an extension of the central nervous system."37 We can say that machine algorithms are an extension of the human brain, and that the convolutional networks used in machine learning are loosely inspired by our own visual cortex (a small illustration follows below).

At first blush, AI-produced art may appear autonomous. For some, using AI in art is a way to show the autonomy and creativity of a machine, thereby downplaying the human role in the art creation process. Seen this way, machines go beyond being an extension of the artist and enter the territory of artists themselves. For now, this remains only a futuristic thought. As Harold Cohen, the inventor of AARON, argues, such a machine would have to develop a sense of self. He admits that he does not think computers are as creative as he himself was in writing the program: "I think for a program to be fully creative in a more complete sense, it has to be able to modify its own performance and that is a very difficult problem."38 That may or may not happen, and if it does not, it means that machines will never be creative in the same sense that humans are. In the meantime, it is worth recognising the human effort associated with this art: all the algorithms, the applied formulas and the subsequent coding are "forged" by human minds. By doing so, we move past the hype surrounding AI today to discover the deeper, human layers that make this artwork interesting and compelling.

37 Marshall McLuhan. The Medium is the Massage: An Inventory of Effects, with Quentin Fiore, produced by Jerome Agel; 1st ed.: Random House, 1967; reissued by Gingko Press, 2001. pp. 26-41.
38 Harold Cohen and AARON: Ray Kurzweil interviews Harold Cohen about AARON. http://www.computerhistory.org/revolution/computer-graphics-music-and-art/15/231/2306
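The visual cortex comparison can be made tangible in a few lines. The sketch below is a toy example with a hand-set kernel: a single convolutional filter responds to a vertical light/dark boundary, roughly the way an orientation-selective cell in the visual cortex responds to an edge. In a trained network, thousands of such kernels are learned from data rather than written by hand.

    # A single convolution acting as a vertical-edge detector: a toy analogue
    # of an orientation-selective cell. The kernel is hand-set here; trained
    # networks learn such kernels (and far stranger ones) from data.
    import torch
    import torch.nn.functional as F

    image = torch.zeros(1, 1, 8, 8)    # a tiny grayscale image...
    image[:, :, :, 4:] = 1.0           # ...dark on the left, light on the right

    kernel = torch.tensor([[[[-1.0, 0.0, 1.0],
                             [-2.0, 0.0, 2.0],
                             [-1.0, 0.0, 1.0]]]])  # Sobel-style vertical kernel

    response = F.conv2d(image, kernel, padding=1)
    print(response[0, 0])   # strong values only along the boundary columns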
The human effort in AI art should not stay behind the curtain; it needs to be emphasised, for several reasons. Explaining the process of AI-produced art may help give AI credibility as another medium for creative expression. Some people think this type of art requires no human effort; showing or describing the AI process dispels this myth and reveals the human intent, thought and labour involved, giving these works a context and, in doing so, creating a humanised connection between action and content. This information may also encourage others to experiment artistically with AI, which matters because it is easy to dismiss something that is not well understood. Yet sharing too much information may diminish the excitement behind an artwork.
As we have already seen, the machine is able to create original works, whether inspired by human imagery or not. But the question remains: can we, as humans, accept the machine as an artist per se, or is it bound to remain forever an imitator and/or an auxiliary tool for humans? The trick is to stop trying to compare computer artists to human ones. If we can embrace computer creativity for what it is and stop trying to make it look human, not only will computers teach us new things about our own creative talents, but they might become creative in ways that we cannot begin to imagine. The human artist has always been tormented and inspired by the interplay between social, emotional, historical, psychological and physiological factors. Maybe one day a machine will develop an equivalent sensibility, but even if that never comes to pass, it does not mean that machines have no part to play with respect to creativity. As the machine learning experiences presented in this paper demonstrate, Artificial Intelligence offers the artist something beyond a simple assistant: a new creative collaborator.
BIBLIOGRAPHY
1. Nicolas Bourriaud. Relational Aesthetics. Paris: Presses du réel, 2002.
2. Nicolas Bourriaud. Postproduction: Culture as Screenplay: How Art Reprograms the World. New York: Lukas & Sternberg, 2002.
3. Vilém Flusser. Towards A Philosophy of Photography, trans. Anthony Mathews. London: Reaktion Books, 2000.
4. Vilém Flusser. Writings, ed. Andreas Ströhl, trans. Erik Eisel. University of Minnesota Press, 2002.
5. Marshall McLuhan. The Medium is the Massage: An Inventory of Effects, with Quentin Fiore, produced by Jerome Agel; 1st ed.: Random House, 1967; reissued by Gingko Press, 2001.
6. John Berger. Ways of Seeing. UK: Penguin Press, 1972.
7. Bruno Munari. Fantasia. Universale Laterza, 1977.
8. Lev Manovich. The Language of New Media. Cambridge, Mass.: MIT Press, 2001.
9. Lev Manovich. Software Culture, trans. Matteo Tarantino. Milano: Edizioni Olivares, 2010.
10. Lev Manovich. Software Takes Command. New York: Bloomsbury Academic, 2013.
11. Walter Benjamin. Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit (The Work of Art in the Age of Mechanical Reproduction, original 1936), ed. Hannah Arendt, 1968.
12. International Journal for Digital Art History (Graphentis Verlag), issue n. 2, ed. Harald Klinke, Liska Surkemper. Published in cooperation with artihistoricum.net, 2016.
13. Helen Armstrong. Digital Design Theory: Readings from the Field. Princeton Architectural Press, 2016.
14. Ruben Pater. The Politics of Design: A (Not So) Global Manual for Visual Communication. BIS Publishers, 2016.
15. Andrea Balzola, Paolo Rosa. L'arte fuori di sé: Un manifesto per l'età post-tecnologica. Serie Bianca, Feltrinelli, 2011.
16. Nicholas Felton. Photoviz: Visualizing Information Through Photography. Gestalten, 2016.
17. Casey Reas, Chandler McWilliams, LUST. Form+Code in Design, Art, and Architecture: A Guide to Computational Aesthetics. Princeton Architectural Press, 2012.
18. Stewart Brand. The Media Lab: Inventing the Future at MIT. Viking Penguin Inc., 1987.
19. Tom M. Mitchell. Machine Learning. McGraw-Hill Science/Engineering/Math, 1997.
20. Donald A. Norman. Living with Complexity. MIT Press, 2010.
21. Guy Debord. The Society of the Spectacle, originally published in France as "La société du spectacle" in 1967 by Buchet-Chastel, trans. Donald Nicholson-Smith. Zone Books, 1994.
22. Domenico Quaranta. Beyond New Media Art. LINK Editions, Brescia, 2013.
23. Babak Saleh, Kanako Abe, Ravneet Singh Arora, Ahmed Elgammal. Towards Automated Discovery of Artistic Influence. Department of Computer Science, Rutgers, The State University of New Jersey, 14 August 2014.
24. Leon A. Gatys, Alexander S. Ecker, Matthias Bethge. A Neural Algorithm of Artistic Style. University of Tübingen, Germany, 2 September 2015.
25. C. Jara-Figueroa, Amy Z. Yu, Cesar A. Hidalgo. The Medium is the Memory: How Communication Technologies Shape What We Remember. The MIT Media Lab, Massachusetts Institute of Technology, 12 September 2016.
26. Claire Bishop. Participation: Documents of Contemporary Art. Whitechapel and The MIT Press, 2006.
27. Bruce Wands. Creating Continuity between Computer Art History and Contemporary Art. CAT Conference, London, 3 February 2010.
28. W. Kandinsky. Concerning the Spiritual in Art, trans. Michael T. H. Sadler, first ed.: 1912.
SITOGRAPHY
ssbkyh.com
jtnimoy.com
genekogan.com
ml4a.github.io
recognition.tate.org.uk
www.thepaintingfool.com
www.nextrembrandt.com
yann.lecun.com/exdb/lenet
snips.ai/content/intro-to-ai
www.flickr.com/photos/quasimondo
www.google.com/culturalinstitute/beta
www.gartner.com/newsroom/id/3412017
www.microsoft.com/cognitive-services/en-us/apis
www.do2learn.com/organizationtools/EmotionsColorWheel
news.microsoft.com/features/democratizing-ai
newatlas.com/creative-ai-algorithmic-art-painting-fool-aaron
thenewinquiry.com/essays/invisible-images-your-pictures-are-looking-at-you
© 2017 All rights reserved. No part of this work may be reproduced or used in any form or by any means (graphic, electronic, or mechanical, including photocopying or information storage and retrieval systems) without written permission from the writer. All images © of their respective owners.