Developing Creative AI: The Making of Aida

In this post I outline the entire process of creating Aida, from early experiments, through the formulation of ideas, to the final steps, covering the changes made along the way and why they were made.

Early Development
To get started, I first installed all the dependencies and packages needed to work with AI. In this case, I am using TensorFlow and Python. To enable TensorFlow to use the GPU (which speeds up the learning process dramatically), I also had to install NVIDIA CUDA and cuDNN. During training, I use TensorBoard to keep track of progress. Since I am using Windows rather than Linux, which most AI examples are built for, I am also using Cygwin, a Unix-style command-line environment that allows Linux-style commands to be run on Windows. These are just the basics; on top of this, there is a long list of extra packages needed, depending on what is being worked on.

My first experiment with AI used GANs (Generative Adversarial Networks) to perform image-to-image translations. GANs learn by having two halves of a network, a Generator and a Discriminator, compete against each other so that both improve (for a full explanation, see my post on Aida here).
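To make the Generator/Discriminator relationship concrete, below is a minimal sketch of a single adversarial training step, written against the current TensorFlow 2 Keras API. The tiny dense networks and random stand-in data are placeholders for illustration only, not the models or code used for Aida:

```python
import tensorflow as tf

# Toy stand-in networks – the real CycleGAN/pix2pix models are much larger.
generator = tf.keras.Sequential([tf.keras.layers.Dense(784, activation="tanh")])
discriminator = tf.keras.Sequential([tf.keras.layers.Dense(1)])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(2e-4)
d_opt = tf.keras.optimizers.Adam(2e-4)

def train_step(real_images, noise):
    """One adversarial step: the Discriminator learns to tell real from fake,
    while the Generator learns to fool the Discriminator."""
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)

        # Discriminator: real images should score 1, fakes should score 0.
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        # Generator: wants its fakes to be scored as real.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)

    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return g_loss, d_loss

# Example call with random stand-in data (a batch of 8 flattened 28x28 "images").
train_step(tf.random.normal([8, 784]), tf.random.normal([8, 64]))
```

Repeating this step over many batches is what drives both halves of the network to improve together.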

To start, I downloaded CycleGAN Horse2Zebra, both as a test to ensure all dependencies were installed correctly and to determine what level of results I could get from this kind of system. I downloaded the sample dataset and, after a little tweaking, the first images started to appear!

CycleGAN Horse2Zebra works both ways: it learns to turn horses into zebras and vice versa simultaneously. Below are some examples of images it produced during the training process:

B_00_0099.jpg

Image produced in the first cycle of ‘Horse2Zebra’

A_00_0199.jpg

Image produced in the first cycle of ‘Zebra2Horse’

Typically, these first images are blurry and somewhat nonsensical, but they do offer insight into what the neural network is ‘latching onto’, such as picking out the zebra’s stripes or separating the horse from the background.

As training progresses, the network slowly improves, and this is reflected in the images it outputs.

A_72_0775.jpg

72nd Cycle of ‘Zebra2Horse’

B_72_0175.jpg

72nd Cycle of ‘Horse2Zebra’

I ran this network for 200 epochs (cycles through the training data), which took roughly a solid week. These are some of the final results, shown side by side with the input image:

GANtest3.PNG

Horse to Zebra

gantest6.PNG

Zebra to Horse


 

Edges to Objects

Next, I had a go at other forms of GANs, in this case ‘Lines to Handbags’ and ‘Lines to Shoes’ using pix2pix. These work on the same concept (and similar code) as Horse2Zebra/Zebra2Horse, except the network doesn’t learn to “work backwards”, simply because it doesn’t need to. This has the added benefit of speeding up the training process (although not in this case, because the dataset is much, much larger than Horse2Zebra’s).

Due to the amount of time it takes to train these models, I stopped training before it completed. Below are some examples of output images:

shoetrain.PNG

An early shoe created by ‘edges to shoes’

firstbag.PNG

The first bag created by ‘lines to bags.’

51_AB-outputs.png

A later shoe output

During this process, I also came across my first ‘failed’ GAN.

train_00_24499.png

Image produced by the failed GAN.

This failure was most likely caused by the Generator loss becoming unbalanced – in this case, the only thing that can be done is to stop the run and try again.

After this, I ran into my second failed GAN, where a single wrongly formatted image within the handbags dataset (out of a total of 138,000) caused the whole system to crash.


Early Idea Generation

Very early on in the project, I had the idea of creating something with a philosophical meaning for viewers to reflect on. Some of my earliest ideas were working with the concept of “Impermanence”, or the idea that all of existence is transient and inconstant, and somehow reflecting this through the use of Artificial Intelligence.

After working with Edges to Bags/Shoes, I had the idea of working with translations from line drawings to coloured/textured images. I liked the idea of ‘producing something from nothing’ and putting the GAN-created images to use. After looking at pieces such as Codex Seraphinianus for inspiration, I liked the idea of creating strange creatures. I also wanted some level of interactivity for viewers during the exhibition.

The plan was a tool for users to create line drawings of fish, which would then be sent to a GAN to be textured, then brought to life in a virtual fish tank, possibly using projection. I decided to use fish because the images and textures produced by GANs can look ‘off’ or misshapen. Since fish often have bright colours and unusual shapes (and there are many species yet to be discovered), they are much less likely to look ‘off’ than animals like zebras. The bright colours and mix of textures also make them visually appealing.

This also ties in with Impermanence, that viewers can, in a sense, ‘leave their mark’ on the piece, in the world created by the AI. To further this idea, none of the fish would last for a long period of time; perhaps being replaced after a certain number were in the tank or simply disappearing after a certain amount of time.

As time went on, I realised that this would be too much work – there are a lot of variables within the system and a lot of places where errors could occur. Not only could animating these fish in real time be difficult with so many variations to account for, but there could also be issues with user-inputted drawings. Since ‘bad’ lines can lead to ‘bad’ outputs, there could be a lot of ‘fish’ in the tank that look something like this:

Picture1.png

A failed output due to ‘bad’ lines

Having a tank containing only fish that look like that would be completely unacceptable, ruining the experience of the installation for viewers. Even the best-trained GAN would still run into issues like this with user-inputted lines – it is unavoidable. To combat this, I decided to drop this form of user interaction and take a different path (while staying with the fish idea for the reasons stated earlier).

I decided on an exhibition of GAN-created “paintings” of sea creatures, with an option for viewers to have a go at collaborating with the system. This allowed me to keep the interactive aspect and show off the system’s capabilities, but not in such a way that a failure would be catastrophic for the entire installation.

This idea ties in with challenging public perceptions of machine-created artworks and making observers question the creation of art – is it a uniquely human trait, or are we not as unique and creative as we think we are?


Automated Dataset Creation & Training

Generally, datasets for GANs consist of thousands of images. Since a dataset requires such a large number of correctly formatted images, it would be impractical to create one by hand.

To make my edges-to-fish training dataset, I first used the Python package ‘google-images-download’. This enables the scraping of a large number of images from Google using chosen keywords and preferences. In my case, I used the tool to scrape thousands of images of colourful fish, all with white backgrounds.
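A minimal sketch of that scraping step might look like this (the keywords, limit and folder name here are illustrative rather than the exact settings I used):

```python
from google_images_download import google_images_download

# Scrape images of colourful fish on white backgrounds (arguments are illustrative).
downloader = google_images_download.googleimagesdownload()
downloader.download({
    "keywords": "colourful tropical fish white background",
    "limit": 500,                        # limits above 100 need chromedriver configured
    "format": "jpg",
    "output_directory": "raw_fish_images",
})
```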

At this point, a little intervention is needed, as the downloaded images aren’t always perfectly suited for the job. Any scraped images that aren’t suitable (such as those containing extra objects) must be removed. This is the only part that requires manual review, however.

Since these image-to-image translations train on paired images, I needed a way to generate line drawings from the scraped images. To start with, I used another GAN to generate its own lines from the images. To do this, I first had to format the images correctly for the GAN: I used the Python Imaging Library (PIL) to change the format and size and convert each image to RGB, whilst adding extra white space for the produced lines to be pasted into later.

image1.jpg

Image ready for processing by GAN, with white space.

Whilst using this line-drawing GAN created a level of variation, it turned out to be poor for training the colouring/texturing GAN, since the generated lines did not match the image closely enough to produce a well-coloured/textured result. I eventually decided to use another means of creating line drawings, but kept this creative edge detector to experiment with variation later on.

test_0053.png

A fish lineart drawn by the GAN – note the unusual shape and mix of texture.

To effectively train the colouring/texturing GAN, I needed a clear set of line drawings that closely match the target image (the image scraped from Google). Firstly, I experimented with the Python Imaging Library (PIL), as it has a built-in edge-detection filter. When applied to an image, it produces something like this:

newimage.png

PIL edge detect

To make the outcome a little closer to what the GAN needs, I tried inverting it:

image2result.jpg

Inverted PIL edge detect
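In code, this PIL approach was roughly the following (a sketch, with a hypothetical input filename):

```python
from PIL import Image, ImageFilter, ImageOps

img = Image.open("fish.jpg").convert("L")      # hypothetical input, loaded as greyscale
edges = img.filter(ImageFilter.FIND_EDGES)     # PIL's built-in edge-detection filter
lines = ImageOps.invert(edges)                 # invert: dark lines on a white background
lines.save("fish_lines_pil.png")
```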

Whilst this did work, it turned out to be inconsistent. When applied to the full dataset of over 1,000 images, some images turned out almost completely white, whilst others turned out almost completely black.

piledge.png

Inconsistencies of PIL edge detect.

This would have been even less effective for training than the second GAN method, so I decided to try something else.

Next, I decided to try Canny edge detection in Python. This proved much more effective than the GAN method at producing clear lines, and was far more consistent across a wide variety of images than the PIL edge detect.

test_0002.png

Lines produced with Canny Edge Detection.

I then put this all together into a block of Python code using PIL. It cycles through a folder of images, taking each image, resizing and formatting it correctly, then duplicating it. The original image has white space added, whilst the copy is ‘converted’ to lines using Canny edge detection. These lines are then pasted into the white space, and the file is given an appropriate name and saved into a new folder, ready to be used by the texturing/colouring GAN.
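A simplified sketch of that script is below; it follows the same steps (resize, duplicate, Canny edges, paste side by side), but the folder names, target size and Canny thresholds are illustrative rather than the exact values I used:

```python
import os
import cv2
import numpy as np
from PIL import Image

SRC_DIR, OUT_DIR = "raw_fish_images", "paired_dataset"   # illustrative folder names
SIZE = 256                                                # power of 2, as pix2pix expects
os.makedirs(OUT_DIR, exist_ok=True)

for i, name in enumerate(sorted(os.listdir(SRC_DIR))):
    img = Image.open(os.path.join(SRC_DIR, name)).convert("RGB").resize((SIZE, SIZE))

    # Canny edge detection on a greyscale copy, inverted so lines are dark on white.
    grey = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(grey, 100, 200)                     # illustrative thresholds
    lines = Image.fromarray(255 - edges).convert("RGB")

    # Paste the photo and its line drawing side by side into one matched-pair image.
    pair = Image.new("RGB", (SIZE * 2, SIZE), "white")
    pair.paste(img, (0, 0))
    pair.paste(lines, (SIZE, 0))
    pair.save(os.path.join(OUT_DIR, f"{i:05d}.png"))
```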

After these datasets were fully created, I started training on them using the TensorFlow implementation of pix2pix. Since the datasets were of high quality and not too large, training was quicker than in the earlier examples and produced better results much faster. Once I had successfully trained the first realistic model, I began experimenting with breaking the typical training process and working out how to produce the most interesting results.

epoch.PNG

Training Epochs

Once the colouring/texturing GAN was fully trained on the accurate Canny edge detection line drawings, I revisited the lineart GAN as a means of creating variation in the outputs during the testing phases.


Dealing with Issues

When working with AI, it can take a lot of trial and error to get started. Often, things will crash without offering any kind of explanation, and it can take a fair amount of time to resolve these issues. Some of the most common errors are running out of memory or having the wrong version of a certain dependency. Since I am working on Windows with Cygwin, this often causes further issues such as version discrepancies.

If a GAN is not configured correctly, it will fail to even start training. To avoid errors like these, it is important to first verify that all dependencies are working and are the correct version. With the GPU-accelerated version, it is very important to make sure that TensorFlow is actually engaging the GPU rather than relying solely on the CPU – although this is not essential to make the model run, it is easy to overlook and will slow down the process considerably.
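One quick way to verify this is to ask TensorFlow which devices it can see before committing to a long run (the exact call varies between versions; this sketch uses the TensorFlow 2 API):

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"TensorFlow can see {len(gpus)} GPU(s): {gpus}")
else:
    print("No GPU visible to TensorFlow - training will fall back to the CPU.")
```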

Next, it is essential to make sure that the hardware being used is capable of handling the GAN, and to make modifications that allow it to work successfully. GANs can run into memory errors at any point in the process, but this is usually seen earlier rather than later. Whilst there is no “one-size-fits-all” solution to avoiding memory errors, reducing the image size is generally a good start, and it can take a lot of trial and error to find a point where things run smoothly on a given system. In the case of Edges to Shoes, the image dimensions must be a power of 2 so that the image can be divided into equal integer halves (to work with the side-by-side matched-pairs dataset format).
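If memory errors do appear, one simple adjustment is to drop the paired images down to a smaller power-of-two size before training (a sketch; the folder name is hypothetical and assumes the 2:1 side-by-side pair format described above):

```python
import glob
from PIL import Image

def prev_power_of_two(n):
    """Largest power of two that is <= n (e.g. 300 -> 256)."""
    p = 1
    while p * 2 <= n:
        p *= 2
    return p

for path in glob.glob("paired_dataset/*.png"):       # hypothetical folder of A|B pairs
    img = Image.open(path)
    h = prev_power_of_two(img.height)                # new height, e.g. 300 -> 256
    img.resize((h * 2, h)).save(path)                # keep width = 2 x height for the pair
```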

Avoiding the majority of errors during training comes down to being observant – keeping an eye on the output images and the Generator/Discriminator losses to ensure they stay balanced. Since training can take a very long time, the last thing you want is to spend a week training a GAN that failed a few hours in! One way to do this is to monitor the process using TensorBoard:

scalars.PNG

Screenshot of Tensorboard during training process.

Typically, Generator and Discriminator loss should stay balanced, such as in the example above.

fishy.png

Output image shown during training process in Tensorboard.

Sometimes, a single bad image can cause a GAN to crash. This can be avoided by taking precautions to ensure that all images that are going to be used are correctly and uniformly formatted.
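One simple precaution is to validate the whole folder before training starts, for example by attempting to open every file with PIL and checking its size and mode (a sketch; the folder name and expected dimensions are illustrative):

```python
import os
from PIL import Image

DATASET_DIR = "paired_dataset"      # illustrative folder of side-by-side pairs
EXPECTED_SIZE = (512, 256)          # illustrative width x height of each pair
bad_files = []

for name in os.listdir(DATASET_DIR):
    path = os.path.join(DATASET_DIR, name)
    try:
        with Image.open(path) as img:
            img.verify()            # raises an exception on truncated/corrupt files
        with Image.open(path) as img:
            assert img.size == EXPECTED_SIZE and img.mode == "RGB"
    except Exception:
        bad_files.append(path)

for path in bad_files:
    os.remove(path)                 # remove anything that could crash training
print(f"Removed {len(bad_files)} unusable image(s).")
```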


 

Planning the Presentation

Planning the presentation of the piece goes hand in hand with creating an ‘identity’ for the project. Acceptance of “Aida” as an artist relies very much on how it is perceived by those viewing it. This starts with making the AI feel more human and less robotic. Whilst this might seem pointless, even something as simple as giving the system a name helps.

Aida’s name is a nod to Ada Lovelace, both in homage and in response to her famous quote, “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform”, as challenging this idea is at the core of Aida’s existence. The name can also be read as an acronym, with the AI standing for artificial intelligence.

Aida also has a logo, consisting of the name with the letters AI highlighted, where the I is a paintbrush. This highlights the creativity of the machine but also hints at the inner workings and inspirations behind it. This is paired with a running design theme, including consistent colours and fonts.

For my presentation, I created two large posters explaining how the system works, with flow charts and sample images. This was inspired by the works of Stephen Willats, and also by the way information is typically presented in a museum. Since Aida is to be presented as an exhibition piece, it needs some form of explanation of what it is, or the experience falls flat. A lot of the work that goes into making GANs happens behind the scenes, and the posters explain how the system works in a simple way for those unfamiliar with AI.

The second part of my presentation is the live demonstration. Whilst this holds less importance than I had originally planned, I still consider it important as it allows user interactivity.


Building the Presentation: Interactive Elements

The physical interactive part involved a difficult process – finding a way to present a typically very resource-heavy neural network in a neat and compact form (preferably without having to demonstrate on a laptop, as this would look less professional and break the immersion). My first attempt was to train the least resource-heavy model possible and display it on a Raspberry Pi with a touch screen. This would allow users to interact with the piece in real time, but also display premade outputs and even animations during a “resting state”. This, however, did not work out; even during the considerably less taxing ‘testing’ phase (producing outputs rather than learning), the amount of memory needed proved too much, with the Pi often overheating.

Since I still wanted to keep this interaction, I decided to try a different method. I used Bazel (a build tool) to create a quantized version of my model. Quantization essentially “compresses” the model, and is typically used where AI is needed on low-resource, low-storage systems such as mobile phones. Quantization does have the side effect of reducing the accuracy of the system, but in this case the compromise had to be made or there would be no live demonstration at all!

Once again, response times from the model on the Raspberry Pi were very slow – even with a fully quantized model. The system was no longer running into memory errors, but instead would take upwards of an hour to produce a single output – nowhere near fast enough to use in an exhibition setting.

To fix this, I took a slightly different approach. I continued using the quantized model, but instead of running it on the Raspberry Pi itself, I hosted it on my remote server using TensorFlow.js. Although responses aren’t instantaneous, they are considerably faster – particularly after the model has been run for the first time. The webpage can then be displayed fullscreen on the Raspberry Pi, allowing users to interact with it and collaborate with Aida.


Building the Presentation: Stand & Idle Animations

I made a short After Effects animation to play on the Raspberry Pi screen whilst it is idle. The animation is informative, giving some insight into how the system works, such as time-lapses of training. When the screen is tapped, the animation stops playing and the system is ready for user interaction (the live demo).

The animation contains footage of the Aida system running, as well as a time-lapse of it training. The time-lapse was made by having the model output images whilst it trains, then stitching them together using VirtualDub. Because the images were not named sequentially, I first had to write a short script to rename all the files to numbers, as well as delete any images that were not direct outputs of the system. The final time-lapse gives an insight into how the GAN improves its methods through training.
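The renaming script was essentially a loop like this (a sketch; the folder name and numbering format are illustrative):

```python
import glob
import os

frames = sorted(glob.glob("training_outputs/*.png"))     # illustrative folder of frames
for i, path in enumerate(frames):
    # Sequential, zero-padded names so VirtualDub loads the frames in order.
    os.rename(path, os.path.join(os.path.dirname(path), f"{i:05d}.png"))
```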

IMG_20180528_161735.jpg

Raspberry Pi 3 in acrylic stand.

The Raspberry Pi is supported by a cut acrylic stand inside a box. This gives it stability, so users can touch the screen without risking it moving or falling.

 

Inspirations: AI and Machine Creativity

AARON

AARON is a painting robot made by Harold Cohen, capable of using and mixing real paints to create works on canvas. AARON displays a level of unpredictability, with even its creator not knowing what it will make. AARON is, however, not technically artificial intelligence, lying closer to a form of autonomous code. (Cohen, 2018)

Microsoft’s Drawing AI

Microsoft have designed a creative machine capable of drawing what it is told. The machine takes input in the form of text, which it then uses to determine what to create. The result is images generated pixel by pixel, sitting somewhere between photograph and painting. (Roach, 2018)

References:

Cohen, H. (2018). Harold Cohen Online Publication. [online] Aaronshome.com. Available at: http://www.aaronshome.com/aaron/publications/index.html [Accessed 2 Feb. 2018].

Roach, J. (2018). Microsoft researchers build a bot that draws what you tell it to – The AI Blog. [online] The AI Blog. Available at: https://blogs.microsoft.com/ai/drawing-ai/ [Accessed 2 Feb. 2018].

Inspirations: The Art of Randomness

Conversations on Chaos
By fito_segrera

Markov Chain poetry from Randomness (Segrera, 2016)

Conversations on Chaos is an artwork based on the representation of randomness. It consists of two main parts: a pendulum suspended over multiple electromagnetic oscillators, and software that implements Markov chains, enabling the system to create a human-like ‘voice’ and bring meaning back into chaos. (Segrera, 2015) Together, this creates a system of ‘two machines that hold a dynamic conversation about chaos’. (Visnjic, 2018)

Codex Seraphinianus
By Luigi Serafini, 1981

Excerpt from Codex Seraphinianus (Serafini and Notini, 1981)

Codex Seraphinianus is a book written in an invented language with no translation. It also contains a collection of visuals, some familiar, some not. The format of the book is reminiscent of a guidebook or scientific text. (Jones, 2018) The book could be interpreted as an introduction to an alien or alternate reality with influences from our own.

Neural Network Critters
By Eddie Lee


Video: Neural Network Critters! by Eddie Lee (Lee, 2017)

Neural Network Critters is a visual example of how neural networks can be used to make art. In this free program, a series of ‘critters’ are created. (Visnjic, 2018) The ones that are fittest (i.e. make it furthest through the maze) are asexually reproduced until they make it to the end of the maze. (Lee, 2018)

School for Poetic Computation (SFPC)

The School for Poetic Computation is a small school based in New York that aims to bring together art and computing. (Sfpc.io, 2018)


References:

Jones, J. (2018). An Introduction to the Codex Seraphinianus, the Strangest Book Ever Published. [online] Open Culture. Available at: http://www.openculture.com/2017/09/an-introduction-to-the-codex-seraphinianus-the-strangest-book-ever-published.html [Accessed 11 Feb. 2018].

Lee, E. (2018). Neural Network Critters by Eddie Lee. [online] itch.io. Available at: https://eddietree.itch.io/neural-critters [Accessed 11 Feb. 2018].

Lee, E. (2017). Neural Network Critters – Vimeo. Available at: https://vimeo.com/225961685 [Accessed 11 Feb. 2018].

Serafini, L. and Notini, S. (1981). Codex seraphinianus. New York: Abbeville Press, p.98.

Segrera, F. (2015). Conversations on Chaos. [online] Fii.to. Available at: http://fii.to/pages/conversations-on-chaos.html [Accessed 10 Feb. 2018].

Segrera, F. (2016). Conversations on Chaos. [image] Available at: http://www.creativeapplications.net/linux/conversations-on-chaos-by-fito-segrera/ [Accessed 11 Feb. 2018].

Sfpc.io. (2018). SFPC | School for Poetic Computation. [online] Available at: http://sfpc.io [Accessed 11 Feb. 2018].

Visnjic, F. (2018). Neural Network Critters by Eddie Lee. [online] CreativeApplications.Net. Available at: http://www.creativeapplications.net/news/neural-network-critters-by-eddie-lee/ [Accessed 11 Feb. 2018].

Visnjic, F. (2018). Conversations On Chaos by Fito Segrera. [online] CreativeApplications.Net. Available at: http://www.creativeapplications.net/linux/conversations-on-chaos-by-fito-segrera/ [Accessed 11 Feb. 2018].

Interactive Artworks

Pool of Fingerprints
By Euclid Masahiko Sato, Takashi Kiriyama, 2010

pooloffingerprints.PNG

Fingerprint Scanner (Clauss, 2010)

In Pool of Fingerprints, visitors are invited to scan their own fingerprint into the piece. It mingles with the fingerprints of other visitors until it eventually returns to its owner. The piece is a reflection on individuality and one’s sense of presence. (Google Cultural Institute, 2010)

Transmart Miniascape
By Yasuaki Kakehi, 2012


Video: Transmart Miniascape by Yasuaki Kakehi (Kakehi, 2015)

Transmart Miniascape is an interactive and reactive artwork consisting of multiple glass panels containing pixels. These pixels are representative of the four seasons, and their appearance changes based on the surrounding area. (NTT InterCommunication Center [ICC], 2014)

Through the Looking Glass
By Yasuaki Kakehi, 2004


Video: Through the Looking Glass by Yasuaki Kakehi (Kakehi, 2015)

Through the Looking Glass invites visitors to play a game of tabletop hockey against their own reflection. The piece defies the logic of mirrors, as the screens on either side of the mirror display different images! (NTT InterCommunication Center [ICC], 2004)

Tablescape Plus
By Yasuaki Kakehi, Takeshi Naemura and Mitsunori Matsushita, 2006


Video: Tablescape Plus, 2006 (Kakehi, 2016)

Tablescape Plus is a playful interface, allowing visitors to create their own stories with characters upon a screen. It blends physical objects with digital images. The physical objects can be manipulated by visitors, allowing them to move characters and objects together to form interactions or trigger movements. (Kakehi, 2016)


References:

Clauss, N. (2010). Pool of Fingerprints – Fingerprint Scanner. [image] Available at: https://www.google.com/culturalinstitute/beta/asset/pool-of-fingerprints-details/KwFE71waZ4_t1g [Accessed 7 Feb. 2018].

Google Cultural Institute. (2010). Pool of Fingerprints/details – Euclid Masahiko Sato (b.1954, Japan) Takashi Kiriyama (b.1964, Japan) (Photo : Nils Clauss) – Google Arts & Culture. [online] Available at: https://www.google.com/culturalinstitute/beta/asset/pool-of-fingerprints-details/KwFE71waZ4_t1g [Accessed 7 Feb. 2018].

Kakehi, Y. (2016). Tablescape Plus. Available at: https://vimeo.com/124536961 [Accessed 8 Feb. 2018].

Kakehi, Y. (2015). Transmart Miniascape. Available at: https://vimeo.com/124540477 [Accessed 7 Feb. 2018].

Kakehi, Y. (2015). Through the Looking Glass. Available at: https://vimeo.com/81712999 [Accessed 7 Feb. 2018].

NTT InterCommunication Center [ICC]. (2014). ICC | “Transmart miniascape” – KAKEHI Yasuaki (2014). [online] Available at: http://www.ntticc.or.jp/en/archive/works/transmart-miniascape/ [Accessed 7 Feb. 2018].

NTT InterCommunication Center [ICC]. (2004). ICC | “through the looking glass” (2004). [online] Available at: http://www.ntticc.or.jp/en/archive/works/through-the-looking-glass/ [Accessed 7 Feb. 2018].

Artworks from Code

Moon 

ai-weiwei-olafur-eliasson-moon-designboom-03

Moon up close (Designboom, 2014)

Moon is an interactive installation piece created by Olafur Eliasson and Ai Weiwei. It invites viewers from around the globe to draw and explore a digital “Moonscape”. (Feinstein, 2014)

Eliasson and Ai Weiwei’s work focuses on community and the link between the online and offline worlds. (Austen, 2013)

Over the course of its four years of existence, Moon grew from simple doodles and drawings to collaborations and clusters of work, such as the “Moon Elisa”, where multiple users came together to recreate the classic Mona Lisa painting. (Cembalest, 2013)

“The moon is interesting because it’s a not yet habitable space so it’s a fantastic place to put your dreams.” – Olafur Eliasson, on Moon (Feinstein, 2014)

Library of Babel
By Jonathan Basile

The Library of Babel is a website based on Borges’s “The Library of Babel” (Borges, 2018), a short story about a library containing every possible string of letters. It is theorized that the books contain every word that has ever been said and will ever be said, translations of every book ever written, and the true story of everyone’s death. (Basile, 2018)

babel

A section of the Library of Babel (Basile, 2018)

The actual workings of the Library of Babel are quite complex – it uses pseudo-random characters generated by an algorithm that recreates the same block of text in the same place in the library every time it is viewed. When a search is made for a specific string, the program works backwards to calculate its position from the seed that would produce that output. (Basile, 2018)

Code Poetry
By Daniel Holden & Chris Kerr

Code Poetry is a collection of code-based pieces, each written in a different programming language with a different concept behind it. The collection was published as a book in 2016. (Holden and Kerr, 2018)

Some examples of the content of this book are as follows:

IRC (Markov Chain Poetry)
Markov chains generate sequences based on probability: each element is chosen according to how likely it is to follow the previous one. In this example, poetry is created from strings generated out of IRC logs. (Theorangeduck.com, 2018)
Similar: creating lyrics using Markov chains

Water
Water is a piece written in C++ that is styled to resemble rain clouds. When run, the code generates raindrops. (Holden and Kerr, 2018) Water is an interesting piece as it challenges the way we traditionally view and approach code.

water

Machine Learning Art
By William Anderson

Using Markov Chains and a collection of training images from the Bauhaus art movement, an artist was able to create new artworks in this iconic style. (Anderson, 2017)

bauhaus genart

Bauhaus art generated by AI (Anderson, 2017)


References:

Anderson, W. (2017). Using Machine Learning to Make Art – Magenta. [online] Magenta. Available at: https://magenta.as/using-machine-learning-to-make-art-84df7d3bb911 [Accessed 7 Feb. 2018].

Anderson, W. (2017). Bauhaus Markov chain art. [image] Available at: https://magenta.as/using-machine-learning-to-make-art-84df7d3bb911 [Accessed 7 Feb. 2018].

Austen, K. (2013). Drawing on a moon brings out people’s best and worst. [online] New Scientist. Available at: https://www.newscientist.com/article/dn24702-drawing-on-a-moon-brings-out-peoples-best-and-worst/ [Accessed 30 Oct. 2017].

Basile, J. (2018). About. [online] Library of Babel. Available at: https://libraryofbabel.info/About.html [Accessed 6 Feb. 2018].

Basile, J. (2018). Library of Babel. [image] Available at: http://libraryofbabel.info/browse.cgi [Accessed 6 Feb. 2018].

Basile, J. (2018). Theory – Grains of Sand. [online] Library of Babel. Available at: https://libraryofbabel.info/theory4.html [Accessed 6 Feb. 2018].

Borges, J. (2018). The Library of Babel. [ebook] Available at: https://libraryofbabel.info/libraryofbabel.html [Accessed 6 Feb. 2018].

Designboom (2014). Moon close up. [image] Available at: https://www.designboom.com/art/ai-weiwei-olafur-eliasson-give-rise-to-moon-interactive-artwork-11-26-2013/ [Accessed 30 Oct. 2017].

Cembalest, R. (2013). How Ai Weiwei and Olafur Eliasson Got 35,000 People to Draw on the Moon | ARTnews. [online] ARTnews. Available at: http://www.artnews.com/2013/12/19/how-ai-weiwei-and-olafur-eliasson-got-35000-people-to-draw-on-the-moon/ [Accessed 30 Oct. 2017].

Feinstein, L. (2014). Make Your Mark On The Moon With Olafur Eliasson and Ai Weiwei. [online] Creators. Available at: https://creators.vice.com/en_uk/article/yp5zkj/make-your-mark-on-the-moon-with-olafur-eliasson-and-ai-weiwei [Accessed 30 Oct. 2017].

 

Holden, D. and Kerr, C. (2018). ./code –poetry. [online] Code-poetry.com. Available at: http://code-poetry.com/home [Accessed 6 Feb. 2018].

Holden, D. and Kerr, C. (2018). water.c. [online] Code-poetry.com. Available at: http://code-poetry.com/water [Accessed 6 Feb. 2018].

Holden, D. and Kerr, C. (2018). The code behind Water. [image] Available at: http://code-poetry.com/water [Accessed 6 Feb. 2018].

Theorangeduck.com. (2018). 17 Line Markov Chain. [online] Available at: http://theorangeduck.com/page/17-line-markov-chain [Accessed 6 Feb. 2018].

Inspirational Art 2 – Projection Mapping

Projection Mapping – Catan/D&D
By Silverlight/Roll20

DDBigTeaser

Projection mapping – D&D (Projection Mapping Central, 2018)

This piece brings together tabletop gaming and projection mapping. This not only creates a more immersive environment for players, it also provides tools for gamers, such as using real-time tracking to calculate a character’s line of sight. (Sodhi, 2018)

Crystalline Chlorophyll
By Joseph Gray, 2009

 

Video: Crystalline Chlorophyll (Gray, 2009)

Crystalline Chlorophyll is an interactive sculpture that reacts to people in the space around it. During the course of an exhibition, the sculpture tracks motion in the room and transforms from an icy blue to a natural green.

The sculpture is built from card stock, but was originally designed in Blender. The colour-changing effects are achieved with two ceiling-mounted video projectors. (Gray, 2014)


 

Sources:

Gray, J. (2009). Crystalline Chlorophyll. Available at: https://vimeo.com/6886025 [Accessed 31 Jan. 2018].

Gray, J. (2014). Crystalline Chlorophyll. [online] Grauwald Creative. Available at: http://grauwald.com/art/crystallinechlorophyll/ [Accessed 31 Jan. 2018].

Projection Mapping Central (2018). D&D Projection mapping. [image] Available at: http://projection-mapping.org/dungeons-dragons-projection-mapping/ [Accessed 31 Jan. 2018].

Sodhi, R. (2018). Dungeons & Dragons and Settlers of Catan with Projection Mapping -…. [online] Projection Mapping Central. Available at: http://projection-mapping.org/dungeons-dragons-projection-mapping/ [Accessed 31 Jan. 2018].

Inspirational Artworks

 

EELS 3D Projection-Mapping game
By Leo Seeley, 2011

Video: EELS projection mapping multiplayer game (Seeley, 2011)

EELS is an interactive multiplayer game bringing together three-dimensional projection mapping and mobile application design. Users can control the movement of an eel as it moves across 3D space. (Casperson, 2018)

Ohne Titel (Hello World.) / Untitled (Hello World.)
By Valentin Ruhry, 2011 

Ohne Titel (Hello World) – Installation (Ruhry, 2018)

Reciprocal Space
By Ruari Glynn, 2005

Reciprocal Space challenges the perception of buildings as solid and unchanging spaces. (We Make Money Not Art, 2005)


Video: Reciprocal Space in action. (Glynn, 2011)

The Agency at the End of Civilization.
By Stanza, 2014


Video: Agency at the End of Civilization (Stanza, 2014)

This installation uses real-time data from UK car number plate recognition systems across the South of England.

The piece includes 24 screens, multiple speakers and CCTV cameras, placing the audience in the role of the observer. (Stanza.co.uk, 2014)


Sources

Seeley, L. (2011). EELS projection mapping multiplayer game. [Video] Available at: https://vimeo.com/32161590 [Accessed 31 Jan. 2018].

Casperson, M. (2018). Projection Mapping Multiplayer Game – Projection Mapping Central. [online] Projection Mapping Central. Available at: http://projection-mapping.org/projection-mapping-multiplayer-game/ [Accessed 31 Jan. 2018].

Ruhry, V. (2018). Ohne Titel (Hello world) – Installation. [image] Available at: http://ruhry.at/en/work/items/untitled-hello-world.html [Accessed 31 Jan. 2018].

We Make Money Not Art. (2005). Reciprocal Space. [online] Available at: http://we-make-money-not-art.com/reciprocal_spac/ [Accessed 31 Jan. 2018].

Glynn, R. (2011). Reciprocal Space. Available at: https://vimeo.com/27775272 [Accessed 31 Jan. 2018].

Stanza (2014). The Agency at the End of Civilization. Available at: https://vimeo.com/97613466 [Accessed 31 Jan. 2018].

Stanza.co.uk. (2014). The Agency At The End Of Civilisation. By Stanza. [online] Available at: http://www.stanza.co.uk/agency/index.html [Accessed 31 Jan. 2018].