Developing Creative AI: The Making of Aida

In this post I outline the entire process of creating Aida, from the early experiments and formulation of ideas through to the final steps, covering the changes made along the way and why they were made.

Early Development
To get started, I began by installing all the dependencies and packages needed to work with AI. In this case, I am using TensorFlow and Python. To enable TensorFlow to use the GPU (which speeds up the learning process dramatically) I also had to install NVIDIA CUDA and cuDNN. During training, I use TensorBoard to keep track of progress. Since I am using Windows instead of Linux, which most AI examples are built for, I am also using Cygwin, a Unix-style command line interface that allows the use of Linux commands on Windows. These are just the basics; on top of this, there is a long list of extra packages needed, depending on what is being worked on.

My first experiment with AI used GANs (Generative Adversarial Networks) to demonstrate image-to-image translation. A GAN learns by having two sides of a network, a Generator and a Discriminator, compete against each other, with both improving their methods as a result (for a full explanation, see my post on Aida here).

To start, I downloaded CycleGAN Horse2Zebra, both as a test to ensure all dependencies were installed correctly and to determine what level of results I could expect from this kind of system. I downloaded the sample dataset and, after a little tweaking, the first images started to appear!

CycleGAN Horse2Zebra works both ways: it learns to turn horses into zebras and vice versa simultaneously. Below are some examples of images it output during the training process:


Image produced in the first cycle of ‘Horse2Zebra’


Image produced in the first cycle of ‘Zebra2Horse’

Typically, these first images are blurry and somewhat nonsensical, but they do offer insight into what the neural network is ‘latching onto’, such as picking out stripes from the zebra or separating the horse from the background.

As training progresses, the network slowly improves, and this is reflected in the output images.


72nd Cycle of ‘Zebra2Horse’


72nd Cycle of ‘Horse2Zebra’

I ran this network for 200 epochs (full cycles through the training data), which took roughly a solid week of training. These are some of the final results, with the input image side-by-side:


Horse to Zebra


Zebra to Horse


Edges to Objects

Next, I had a go at working with other forms of GANs, in this case ‘Lines to Handbags’ and ‘Lines to Shoes’. These work on the same concept (and code) as Horse2Zebra/Zebra2Horse, except they don’t learn to “work backwards” – simply because they don’t need to. This has the added benefit of speeding up the training process (although not in this case, because the dataset is much, much larger than Horse2Zebra’s).

Due to the amount of time taken to train these models, I stopped training before it completed. Below are some examples of output images:


An early shoe created by ‘edges to shoes’


The first bag created by ‘lines to bags.’


A later shoe output

During this process, I also came across my first ‘failed’ GAN.


Image produced by the failed GAN.

This failure was most likely caused by the Generator’s loss diverging – in this case, the only thing that can be done is to stop the training and try again.

After this, I ran into my second failed GAN, where a single wrongly formatted image within the handbags dataset (out of 138,000 in total) caused the whole system to crash.

Early Idea Generation

Very early on in the project, I had the idea of creating something with a philosophical meaning for viewers to reflect on. Some of my earliest ideas were working with the concept of “Impermanence”, or the idea that all of existence is transient and inconstant, and somehow reflecting this through the use of Artificial Intelligence.

After working with Edges to Bags/Shoes, I had the idea of working on translations from line drawings to coloured/textured images. I liked the idea of ‘producing something from nothing’, and of using the GAN-created images for something. After looking at pieces such as Codex Seraphinianus for inspiration, I liked the idea of creating strange creatures. I also liked the idea of having some level of interactivity for viewers during the exhibition.

I got the idea of creating a tool for users to create line drawings of fish, which would then be sent to a GAN to be textured, then brought to life in a virtual fish tank, possibly using projection. I decided on fish because the images and textures produced by GANs can look ‘off’ or misshapen. Since fish often have bright colours and unusual shapes (and there are many yet to be discovered), they are much less likely to look ‘off’ than animals like zebras. The bright colours and mix of textures also make them visually appealing.

This also ties in with Impermanence, that viewers can, in a sense, ‘leave their mark’ on the piece, in the world created by the AI. To further this idea, none of the fish would last for a long period of time; perhaps being replaced after a certain number were in the tank or simply disappearing after a certain amount of time.

As time went on, I realised that this would be too much work – there are a lot of variables within the system and a lot of places where errors could occur. Not only could animating these fish in real time be difficult with so many variations to take into account, there could also be issues with user-inputted drawings. Since ‘bad’ lines can lead to ‘bad’ outputs, there could be a lot of ‘fish’ in the tank that look something like this:


A failed output due to ‘bad’ lines

Having a tank containing only fish that look like that would be completely unacceptable – ruining the experience of the installation for viewers. Even the best-trained GAN would still run into issues like this with user-inputted lines – it is unavoidable. To combat this, I decided to lose this form of user interaction and take a different path (while staying with the fish idea for the reasons stated earlier).

I decided on making an exhibition of GAN-created “paintings” of sea creatures, with an option for viewers to have a go at collaborating with the system. This allowed me to keep the interactive aspect of the system and show off its capabilities, but not in such a way that a failure would be catastrophic for the entire installation.

This idea ties in with challenging public perceptions of machine-created artworks, and making observers question the creation of art – is it a uniquely human trait, or are we not as unique and creative as we think we are?

Automated Dataset Creation & Training

Generally, datasets for GANs consist of thousands of images. Since a dataset requires such a large number of correctly formatted images, it would be impractical to create one by hand.

To make my edges-to-fish training dataset, I first used the Python package ‘google-images-download’. This enables the scraping of a large number of images from Google with certain keywords and preferences. In my case, I used this tool to scrape thousands of images of colourful fish from Google, all with white backgrounds.
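To give an idea of how this step works, here is a minimal sketch of the scraping call. The argument keys follow the google-images-download documentation as I recall it, so treat the exact keys and the keyword string as assumptions rather than my actual settings:

```python
# A sketch of the scraping configuration -- the exact keys and values
# here are assumptions, not my final settings.
SCRAPE_ARGS = {
    "keywords": "colourful fish white background",
    "limit": 1000,                  # images per keyword
    "format": "jpg",
    "output_directory": "raw_fish",
}

def scrape(args=SCRAPE_ARGS):
    # Imported inside the function so the config above can be
    # inspected without the package installed.
    from google_images_download import google_images_download
    downloader = google_images_download.googleimagesdownload()
    return downloader.download(args)
```

Scraping beyond roughly a hundred images per keyword also requires a browser driver, so expect a little setup before this runs end to end.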

At this point, a little intervention is needed, as the images that are downloaded aren’t always perfectly suited for the job. Any scraped images that aren’t suitable (such as containing extra items) must be removed. This is the only part that requires review, however.

Since these image-to-image translations require paired images for training, I needed to find a way to generate line drawings from the scraped images. To start with, I used another GAN to generate its own lines from images. To do this, I first had to format the images correctly for the GAN. I used the Python Imaging Library (PIL) to change the format and size and convert the images to RGB, whilst adding extra space for the produced lines to be added later.


Image ready for processing by GAN, with white space.
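The formatting step above can be sketched in a few lines of PIL. The 256-pixel size and the photo-left/lines-right layout are assumptions based on the common paired-dataset convention, not necessarily my exact settings:

```python
from PIL import Image

def format_for_gan(path, size=256):
    """Resize to size x size, convert to RGB, and place the photo on a
    double-width white canvas, leaving blank space for the line drawing
    to be pasted in later."""
    img = Image.open(path).convert("RGB").resize((size, size))
    canvas = Image.new("RGB", (size * 2, size), "white")
    canvas.paste(img, (0, 0))   # photo on the left
    return canvas               # right half stays white for the lines
```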

Whilst using this second GAN to generate lines created a level of variation, it turned out to be bad for the training of the colouring/texturing GAN, since the generated lines did not match the image closely enough to produce a well coloured/textured result. I eventually decided to use another means of creating line drawings, but kept this creative edge detector to experiment with variation later.


A fish lineart drawn by the GAN – note the unusual shape and mix of texture.

To effectively train the colour/texturing GAN, I needed a clear set of line drawings that closely matched the target images (the images scraped from Google). First, I experimented with the Python Imaging Library (PIL), as it has an inbuilt edge-detection tool. When applied to an image, it produces something like this:


PIL edge detect

To make the outcome a little closer to what the GAN needs, I tried inverting it:


Inverted PIL edge detect
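The detect-and-invert step is only a couple of lines with PIL (a sketch; FIND_EDGES is the built-in edge-detection filter referred to above):

```python
from PIL import Image, ImageFilter, ImageOps

def pil_edges(path):
    """PIL's built-in edge detection, inverted to give dark lines on white."""
    img = Image.open(path).convert("L")           # greyscale first
    edges = img.filter(ImageFilter.FIND_EDGES)    # white edges on black
    return ImageOps.invert(edges)                 # dark lines on white
```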

Whilst this did work, it turned out to be inconsistent. When applied to the full dataset of over 1,000 images, some images turned out almost completely white whilst others turned almost completely black.


Inconsistencies of PIL edge detect.

This would have been even less effective for training than the second GAN method, so I decided to try something else.

Next, I decided to try Canny Edge Detection in Python. This proved much more effective than the GAN method at producing clear lines, and was much more consistent across a wide variety of images than PIL’s edge detect.


Lines produced with Canny Edge Detection.

I then put this all together into a block of Python code using PIL. It cycles through a folder of images, taking each image, resizing and formatting it correctly, before duplicating it. The original image has white space added, whilst the copy is ‘converted’ to lines using Canny Edge Detection. These lines are then pasted into the white space, and the file is given an appropriate name and saved into a new folder, ready to be used by the texturing/colouring GAN.

After these datasets were fully created, I started training the GAN on them. Since the datasets were of high quality and not too large, the training process was quicker than in the earlier examples and produced better results much faster. Once I had successfully trained the first realistic model, I began to experiment with breaking the typical training process and working out how to produce the most interesting results.


Training Epochs

Once the colouring/texturing GAN was fully trained with the accurate Canny Edge Detection line drawings, I revisited the lineart GAN as a means to create variation within outputs during the testing phases.

Dealing with Issues

When working with AI, it can take a lot of trial and error to get started. Often, things will crash without offering any kind of explanation, and it can take a fair amount of time to resolve these issues. Some of the most common errors are issues such as running out of memory or having the wrong version of a certain dependency. Since I am also working on Windows with Cygwin, this can cause further issues such as version discrepancies and errors.

If a GAN is not configured correctly, it will fail to even start training. To avoid errors such as these, it is important to first verify that all dependencies are working and are of the correct version. With the GPU-accelerated version, it is very important to make sure that TensorFlow is actually engaging the GPU instead of relying solely on the CPU – although this is not essential to make the model run, it is easy to overlook and will slow down the process considerably.

Next, it is essential to make sure that the hardware being used is capable of handling the GAN, making modifications where needed to allow it to work successfully. GANs can run into memory errors at any point during the process, but they are usually seen earlier rather than later. Whilst there is no “one-size-fits-all” solution to avoiding memory errors, reducing image sizes is generally a good start; it can take a lot of trial and error to find a point where things run smoothly on the system being used. In the case of Edges to Shoes, the image dimensions must be a power of 2, so that the image divides evenly at each step (and works with the side-by-side matched-pairs dataset format).
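A small helper makes it easy to check (or correct) image dimensions against that power-of-2 constraint – a sketch, since the constraint itself comes from how the network repeatedly halves the image:

```python
def next_power_of_two(n):
    """Smallest power of 2 greater than or equal to n."""
    power = 1
    while power < n:
        power *= 2
    return power
```

So a 200-pixel image would be scaled up to 256, keeping every downsampling step a whole number of pixels.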

Avoiding the majority of errors during training comes down to being observant – keeping an eye on the output images and the Generator/Discriminator losses to ensure they stay balanced. Since training can take a very long time, the last thing you want is to spend a week training a GAN that failed a few hours in! One way to do this is to monitor the process using TensorBoard:


Screenshot of Tensorboard during training process.

Typically, Generator and Discriminator loss should stay balanced, such as in the example above.


Output image shown during training process in Tensorboard.

Sometimes, a single bad image can cause a GAN to crash. This can be avoided by taking precautions to ensure that all images that are going to be used are correctly and uniformly formatted.
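One such precaution is a quick validity pass over the whole dataset before training starts. A minimal sketch with PIL – the expected 512 x 256 RGB pair format is an assumption based on my own dataset layout:

```python
from PIL import Image

def is_usable(path, size=(512, 256), mode="RGB"):
    """True only if the file opens cleanly and matches the expected
    dimensions and colour mode of the paired dataset."""
    try:
        with Image.open(path) as img:
            img.verify()               # cheap integrity check
        with Image.open(path) as img:  # verify() invalidates the handle
            return img.mode == mode and img.size == size
    except Exception:
        return False
```

Filtering the dataset down to only files that pass this check costs a few minutes up front and can save a week of wasted training.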


Planning the Presentation

Planning the presentation of the piece goes hand in hand with creating an ‘identity’ for the project. Acceptance of “Aida” as an artist relies very much on how it is perceived by those viewing it. This starts with the idea of making the AI feel more human and less robotic. Whilst this might seem pointless, even something as simple as giving the system a name helps.

Aida’s name is a reflection of Ada Lovelace, both in homage and in reference to her famous assertion that “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform” – challenging this idea is at the core of Aida’s existence. It can also be read as an acronym, with the AI standing for Artificial Intelligence.

Aida also has a logo, consisting of the name with the letters AI highlighted, where the I is a paintbrush. This highlights the creativity of the machine but also hints at the inner workings and inspirations behind it. This is paired with a running design theme, including consistent colours and fonts.

For my presentation, I created two large posters explaining how the system works, with flow charts and sample images. This was inspired by the works of Stephen Willats, but also inspired by the way information is typically presented in a museum. Since Aida is to be presented as an exhibition piece, it needs to have some form of explanation as to what it is or the experience falls flat. A lot of the work that goes into making GANs goes on behind the scenes, and the posters highlight how the system works in a simple way for those who are unfamiliar with AI.

The second part of my presentation includes the demonstration. Whilst this holds less importance than I had previously planned, I still consider it to be important as it allows user interactivity.

Building the Presentation Interactive Elements

This physical interactive part involved a difficult process – finding a way to present a typically very resource-heavy neural network in a neat and compact way (preferably without having to demonstrate on a laptop, as this would look less professional and break the immersion). My first attempt was to train the least resource-heavy model possible and display it on a Raspberry Pi with a touchscreen. This would allow users to interact with the piece in real time, but also display premade outputs, and even animations during a “resting state”. This, however, did not work out; even during the considerably less taxing ‘testing’ phase (producing outcomes rather than learning), the amount of memory needed proved to be too much, with the Pi often overheating.

Since I still wanted to keep this interaction, I tried a different method. I used Bazel (a build tool) to create a quantized version of my model. Quantization essentially “compresses” the model, and is typically used where AI is needed on low-resource, low-space systems such as mobile phones. Quantization does have the side effect of reducing the accuracy of the system, but in this case the compromise had to be made – otherwise there would be no live demonstration at all!


Once again, response times from the model on the Raspberry Pi were very slow – even with a fully quantized model. The system was no longer running into memory errors, but instead would take upwards of an hour to produce a single output – nowhere near fast enough to use in an exhibition setting.

To fix this, I took a slightly different approach. I continued using the quantized model, but instead of running it on the Raspberry Pi, I hosted it on my remote server using TensorFlow.js. Although responses aren’t instantaneous, they are considerably faster – particularly after the model has been run for the first time. The webpage can then be displayed fullscreen on the Raspberry Pi – allowing users to interact with it and collaborate with Aida.



Netscapes: Insight – IVT Testing

Today we did our final build and first live test in the Immersive Vision Theatre (IVT). We started by fitting the Raspberry Pi and touchscreen inside the plinth, then transporting the equipment to the dome ready for our presentation.


Fitting Pi3 + Touchscreen

Chris added wooden beams to support the weight of the Pi, as it will be under a lot of pressure when the touchscreen is in use. This should prevent the touchscreen from moving away from the plinth.


Setting up in the IVT – Modifying Code

Whilst in the IVT, Gintaré updated her code to work better within the shape of the screen. She moved some of the key elements of the visuals so they were more centred within the dome, bringing them to the viewer’s attention.



Setting up the visualization

We transported the physical part of our project to the IVT and decided where to set it up. We then tested the project within the space to understand how it will look and feel to the viewers and how the colours will display in the dome.


Glass head with touchscreen interface

We took this as an opportunity to double-check our database connections were working. During this time we ran into issues with page refreshing (which I quickly resolved) and with internet connection, which we resolved by using a mobile access point.


Glass head interface in front of the projection.

We even invited Luke to test out our user interface, and have a go at inputting his own data into the visualization!


Luke testing out the user interface!


Head test with visualization within the dome.

Netscapes: Building Bluetooth Connections – Part 2

Today we had access to the physical side of the project, so I tested my Bluetooth code (see my previous post) with the Arduino side. Luckily, after pairing with the HC-05 Bluetooth component, the code worked first time without need for debugging!


The Arduino side, with HC-05 Bluetooth component & Neopixel ring

Chris and I modified the Arduino code to output different lighting effects based on the character sent across Bluetooth. We decided on the default being Red, with a breathing effect (which I created for a previous project) and a rainbow spin effect.


Bluetooth message sent on tapping “Generate”

How it works

  • When the local server is started, it searches through paired devices to find the HC-05 module.
  • When it is found, it opens a connection and sends it the instruction to turn on.
  • When the generate button is pressed, a new message is sent across the connection instructing it to run the rainbow effect.
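Our actual implementation uses the Node.js bluetooth-serial-port module, but the flow above can be sketched in Python for clarity. PyBluez stands in for the Bluetooth library purely as an illustration, and the rainbow-effect character ‘r’ is hypothetical – any agreed single character would do:

```python
def find_device(paired, target="HC-05"):
    """Return the address of the first paired device whose name matches.
    `paired` is a list of (address, name) tuples."""
    for address, name in paired:
        if name == target:
            return address
    return None

def open_connection(paired):
    # The connection itself would use a Bluetooth serial library --
    # PyBluez here, as an assumption standing in for our Node.js code.
    address = find_device(paired)
    if address is None:
        raise RuntimeError("HC-05 not found among paired devices")
    import bluetooth  # PyBluez
    sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    sock.connect((address, 1))  # channel 1 is the usual HC-05 default
    sock.send(b"a")             # 'a' = turn on (our agreed instruction)
    return sock

def on_generate(sock):
    sock.send(b"r")             # hypothetical character for the rainbow effect
```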

Critical analysis/Reflection

To begin with, we were going to use a separate mobile app to send user data across Bluetooth to the Arduino. Switching instead to driving the lights from the same input as the user data adds a level of interactivity that we would not have had from a separate phone app. It allows users to instantly see the effect their inputs have, even before the visualization updates.

This also ties the piece together better, making it an all-in-one system rather than being split up.

Future Improvements

If we had more time, I would modify the code to react differently depending on some of the user inputted data, such as changing colours or effects based on values.



Netscapes: Building Bluetooth connections

To bring together the visualisation and physical prototype, I started working on a Bluetooth connection to the MongoDB connection code I previously built.


Physical prototype with HC-05 Bluetooth module

Since we already have the HC-05 Bluetooth module in place and working with the Bluetooth terminal input on mobile, I simply had to look up how to create an output system in our .js code to match the inputs we previously designed for the Arduino.


Initial flow diagram of program

I looked into how this could be done and began researching the Bluetooth-Serial-Port (BSP) module for Node.js.

After getting to grips with how the library works, I experimented with creating a basic framework for opening a Bluetooth connection and sending a basic input. This code checks for a connection with the correct name, finds the matching address, opens a connection and, if it is successful, sends the character ‘a’. When hooked up to the glass head model, this should activate the LED ring, making it light up.


My experimentation with BSP within the previously made MongoDB connection code



Issues Encountered

  • Certain information was missing from the Bluetooth-Serial-Port NPM documentation – I had to work around this by searching for other uses of BSP to fill in the gaps.
  • The method for calling previously paired Bluetooth devices doesn’t work on Linux systems, so a workaround had to be made (looping through available connections and matching a name).

Next Steps

  • Update Arduino-side code: Modify existing code to include more interesting light effects, such as those I previously created for my ‘Everyware’ project. These would not be direct copies, but modifications of this pre-existing code, for a unique lighting effect.
  • Thoroughly test this code to ensure a secure connection is made and maintained for the duration of the installation.

Code Referencing/Libraries Used

Below is a list of the code documentation I used as reference when building my code. Whilst code was not directly copied, it was heavily referenced from the documentation:

JS express –
JS json body parser –
JS path –
JS Mongo Client –

Netscapes: Making & MLabs

Today we worked further on bringing the project together, drawing together all our current work and making improvements where necessary.

MLabs/Visualization connection

I worked on building a connection to the mLab database, pulling data and using it as the parameters for a circle. The code checks the database for a new entry every 15 seconds.


Reading values from Database

For example, I set up a mapping from sliders to RGB: each slider takes a value of 0 to 8 from the user, which is mapped to a number between 0 and 255 for three of the values (in this case the vars kind, trust and help). I also applied this approach to the radius and speed of movement.
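The mapping itself is a single linear function – a sketch of the idea, with the variable names kind, trust and help taken from our sliders:

```python
def map_range(value, in_min=0, in_max=8, out_min=0, out_max=255):
    """Linearly map a slider value (0-8) onto a colour channel (0-255)."""
    scaled = (value - in_min) * (out_max - out_min) / (in_max - in_min)
    return round(scaled + out_min)

def sliders_to_rgb(kind, trust, help_):
    """Three slider values become one RGB colour."""
    return (map_range(kind), map_range(trust), map_range(help_))
```

The same function works for the radius and movement speed by swapping in different output ranges.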

Next, Gintaré and Chris will take this to build into their visualisation in its current state.

User Interface Modifications

We then looked at Gintaré’s slider inputs and how they would look in the physical build.


First slider test in plinth (without the glass head or diffuser)

After reviewing both its looks and ease of interaction, we decided to make a few changes, such as making the text/scrollbar larger and removing the numbers from the sliders (as they do not display properly on the Raspberry Pi).

Gintaré made modifications based on these observations and we quickly reviewed them. We also decided to colour code each section of sliders to match each section of the CANOE model. This not only breaks it up, but makes it more visually appealing in a way that makes sense.


Touchscreen with enlarged scroll bar for ease of use.

We decided it would still be best to display the touchscreen with the stylus for ease of use as the sliders can still be difficult to use at this size.


Touch screen with colour coded sections (per canoe model)

Since the touchscreen has no enabled right-click function, once the app is full-screen it is very difficult to get out of – meaning the viewers won’t be able to (intentionally or accidentally!) exit it.

We decided to bevel the edges that surround the screen as they make it difficult for users to easily reach the screen. This will also make it look more inviting to a user by bringing it into their view.

Connecting MongoDB/mLab to front-end

I started working on code to input values into the database using Gintaré’s previously made slider interface. This was built using Express, npm and Node.js. On Chris B’s recommendation, Express was used in place of PHP.

When run, the code hosts the necessary files (such as Gintaré’s sliders) on a local server, which sends the data to the remote server when “Generate” is pressed.


Since Node.js code is ‘modular’, we decided to put the login details in a separate .js file (rather than having to censor the MongoDB login details on GitHub).


Installing Node.js & npm to Raspberry Pi

Once this was up and running (and confirmed to work on mLab), I moved the files over and installed the necessary npm packages on my Raspberry Pi. I then tested the connection to mLab to ensure the data was transferring correctly.


Running the local server (Hosting the sliders form) on Raspberry Pi

We then put this server connection together with Gintaré’s updated user interface.


Data inserted into mLab via Raspberry Pi


Multiple documents in MongoDB database.

Now that we have data both coming into and out of the database, we are ready to move onto the next steps!

Next Steps

  • Finish Visualization
  • Put together final physical prototype (seat the Raspberry Pi, sort out power supplies, etc.)
  • Preview in IVT – test visualisations before presentation
  • (If time allows) Make a system for colour of head based on last data entry.

Netscapes: Building – MongoDB & Python

This week I have focused on the building stage. After helping my team members get started with p5.js, I got to work building my parts of the project: the back-end and LED control.

Emotion/colour Sliders – Python & Arduino LED control

Part of our project includes the representation of emotions in a visual sense. We decided on creating this using a basic slider input, so I got to work developing it.

I built this using:

  • Raspberry Pi 3
  • Arduino
  • 5″ GPIO touchscreen
  • Python

I created my app using libraries including PySerial (for serial connections using Python) and tkinter (for rapid production of basic user interfaces). I decided to use Python as I have previous experience creating connections to Arduino using PySerial.

Building circuits

Firstly, I set up the Raspberry Pi 3 with the necessary drivers and fitted the touchscreen. I created a script on the desktop to open an on-screen keyboard (so I wouldn’t have to use a physical keyboard for setup later). I then built a basic circuit with an RGB LED and hooked it up to the Raspberry Pi.


My Raspberry Pi with GPIO touchscreen.


I started by building a basic slider interface using tkinter and Python. I made sure it was appropriately scaled to the screen and then worked out how to get the output of the slider in real time. In this case, I used a scale of 1 to 8 to match our data input app.


Basic RGB LED wiring

Once the slider was working correctly, I set up a basic serial connection to the Arduino using PySerial. Since PySerial needs data to be sent in bytes, I had to make sure the characters sent were encoded. I then built a basic script on the Arduino side to receive the characters and change the colour of the RGB LED based on how far the slider was moved (in this case blue to yellow for sad to happy).
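A sketch of the Python side of this connection. The port name, baud rate and send-the-digit-as-a-character scheme are assumptions for illustration; the exact details in my build may differ:

```python
def slider_to_byte(value):
    """Encode a slider position (1-8) as a single byte for the Arduino.
    PySerial writes bytes, not str, hence the encode step."""
    if not 1 <= value <= 8:
        raise ValueError("slider value out of range")
    return str(value).encode()

def send_slider(port, value):
    # PySerial is imported here so the encoder above stands alone;
    # '/dev/ttyACM0'-style port names and 9600 baud are typical defaults,
    # not fixed requirements.
    import serial
    with serial.Serial(port, 9600, timeout=1) as conn:
        conn.write(slider_to_byte(value))
```

On the Arduino side, a matching sketch reads one character at a time and maps it onto the blue-to-yellow range.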

Link to my code on GitHub: 


My completed LED slider

My next steps are to further develop the user interface, and to figure out how to use this in conjunction with the other user inputs (for database connection).


I created a database in MongoDB and hosted it on mLab (due to a software conflict I couldn’t easily host it on my own server, so this was the next best thing!)

The database will hold all the input data from our app, and will be used in the creation of our visualization.


Document within MongoDB database

The next step is to connect this database up to the input app and visualization once they are both completed.

Related Links





Art & the Internet of Things

By Timo Arnall, Einar Sneve Martinussen & Jack Schulze

Immaterials is a collection of pieces centered around the increasing use of ‘invisible interfaces’ such as WiFi and mobile networks, and the impact they have on us. (Arnall, 2013)

Immaterials: Light Painting & WiFi explores the scale of WiFi networks in urban spaces, and translates signal strength into unique light paintings.

Immaterials: Light painting WiFi  (Arnall, 2011)

Immaterials also utilises a series of satellite-sensitive lamps that change light intensity according to the strength of the GPS signals received. (Arnall, 2013)

The Nemesis Machine
By Stanza


The Nemesis Machine in exhibition (Stanza, n.d.)

The Nemesis Machine is a travelling installation artwork. It uses a combination of Digital Cities and IoT technology. It visualises life in the city based on real-time data from wireless sensors, representing the complexities of cities and city life. (, n.d.)

exTouch
By Shunichi Kasahara, Ryuma Niiyama, Valentin Heun & Hiroshi Ishii

exTouch incorporates touchscreen interactions into the real world. Users can touch objects shown in live video, dragging them across the screen and across physical space. (Kasahara et al., 2012)

exTouch in action (exTouch, 2013)

T(ether)
By Dávid Lakatos, Matthew Blackshaw, Alex Olwal, Zachary Barryte, Ken Perlin & Hiroshi Ishii

T(ether) is a platform for gestural interaction with objects in digital 3D space, with a handheld device acting as a window into virtual space. T(ether) has potential as a platform for 3D modelling and animation. (Lakatos et al., 2012)



Arnall, T. (2013). The Immaterials Project. [online] Elastic Space. Available at: [Accessed 1 Nov. 2017].

Arnall, T. (2011). Immaterials: Light Painting WiFi. [Video] Available at: [Accessed 1 Nov. 2017].


Stanza (n.d.). The Nemesis Machine Installation. [image] Available at: [Accessed 1 Nov. 2017].

 (n.d.). The Nemesis Machine – From Metropolis to Megalopolis to Ecumenopolis. A real time interpretation of the data of the environment using sensors. [online] Available at: [Accessed 1 Nov. 2017].


Kasahara, S., Niiyama, R., Heun, V. and Ishii, H. (2012). exTouch. [online] Available at: [Accessed 1 Nov. 2017].

exTouch. (2013). [Video] MIT Media Lab: MIT Media Lab. Available at: [Accessed 1 Nov. 2017].


Lakatos, D., Blackshaw, M., Olwal, A., Barryte, Z., Perlin, K. and Ishii, H. (2012). T(ether). [online] Available at: [Accessed 1 Nov. 2017].

Everyware: The Matter of the Immaterial

The brief for “Everyware” is entitled “The Matter of the Immaterial”, and is focused on ubiquitous computing and making the intangible tangible. I took this idea and used it as a starting point for some research into what is already available.




Ultrahaptics development kit (Ultrahaptics, 2015)

Ultrahaptics is a startup company focused on making the virtual world physical. Using an array of ultrasonic projectors and hand tracking, users can feel and interact with virtual environments, receiving real tactile feedback without the need to wear or hold special equipment. (Ultrahaptics, 2017) Read more in my other blog post.


Ultrahaptics Diagram  (Ultrahaptics, 2015)

Ultrahaptics follows a similar concept to the Geomagic Touch X 3D pen (previously known as the Sensable Phantom Desktop), which I have used!



DaisyPi system (DaisyPi, 2017)

The Daisy Pi is a Raspberry Pi powered home monitoring system. It is fitted with multiple sensors including temperature, light intensity and humidity. It is also capable of capturing audio and video feeds, which can be accessed remotely by devices such as mobile phones or tablets. (Lopez, 2017)



Moon up close (Designboom, 2014)

Moon is an interactive installation piece created by Olafur Eliasson and Ai Weiwei. It invites viewers from around the globe to draw and explore a digital “Moonscape”. (Feinstein, 2014)

Eliasson and Weiwei’s work is focused around community and the link between the online and offline world. (Austen, 2013)

Over its four years of existence, Moon grew from simple doodles and drawings to collaborations and clusters of work, such as the “Moon Elisa”, where multiple users came together to recreate the classic Mona Lisa painting. (Cembalest, 2013)

“The moon is interesting because it’s a not yet habitable space so it’s a fantastic place to put your dreams.” – Olafur Eliasson, on Moon (Feinstein, 2014)

Illuminating Clay

Illuminating Clay is a platform for exploring 3D spatial models. Users can manipulate the clay into different shapes (even adding other objects); a laser scanner captures the new surface, and a projector casts a height map back onto it. The system can also be used to work out data such as travel times and land erosion. (Piper et al., 2002)

Physical Telepresence


Interaction through Physical Telepresence (Vice, 2015)

Physical Telepresence is a work created by students at MIT, based around shared workspaces and remote manipulation of physical objects. (Leithinger et al., 2014) The work consists of a pin-based surface that can be used to interact with physical objects. (Pick, 2015)

Near Field Creatures

Near Field Creatures is a game made by students as part of the Mubaloo annual Appathon at Bristol University. Users scan NFC tags (such as those in certain student cards) to collect different animals of differing values. These collected animals can then be used to compete with other users. (Mubaloo, 2015)

Pico

Pico is an interactive work that explores human-computer interaction, allowing people and computers to collaborate in physical space. Pico is interacted with by use of pucks, which can be moved by both the computer and the user. (Patten, Alonso and Ishii, 2005)

PICO 2006 from Tangible Media Group on Vimeo. (Pico 2006, 2012)




Ultrahaptics (2015). Ultrahaptics Development Kit. [image] Available at: [Accessed 28 Oct. 2017].

Ultrahaptics. (2017). Ultrahaptics – A remarkable connection with technology. [online] Available at: [Accessed 28 Oct. 2017].

Ultrahaptics (2015). Ultrahaptics diagram. [image] Available at: [Accessed 28 Oct. 2017].


DaisyPi (2017). Daisy Pi Unit. [image] Available at: [Accessed 28 Oct. 2017].

Lopez, A. (2017). Daisy Pi | The home monitoring e-flower. [online] Available at: [Accessed 28 Oct. 2017].


Designboom (2014). Moon close up. [image] Available at: [Accessed 30 Oct. 2017].

Feinstein, L. (2014). Make Your Mark On The Moon With Olafur Eliasson and Ai Weiwei. [online] Creators. Available at: [Accessed 30 Oct. 2017].

Cembalest, R. (2013). How Ai Weiwei and Olafur Eliasson Got 35,000 People to Draw on the Moon | ARTnews. [online] ARTnews. Available at: [Accessed 30 Oct. 2017].

Austen, K. (2013). Drawing on a moon brings out people’s best and worst. [online] New Scientist. Available at: [Accessed 30 Oct. 2017].


Piper, B., Ratti, C., Wang, Y., Zhu, B., Getzoyan, S. and Ishii, H. (2002). Illuminating Clay. [online] Available at: [Accessed 30 Oct. 2017].


Vice (2015). Interaction with Physical Telepresence. [image] Available at: [Accessed 30 Oct. 2017].

Leithinger, D., Follmer, S., Olwal, A. and Ishii, H. (2014). Physical Telepresence. [online] Available at: [Accessed 30 Oct. 2017].

Pick, R. (2015). Watch a Robotic Floor Play with Blocks. [online] Motherboard. Available at: [Accessed 30 Oct. 2017].


Mubaloo. (2015). Mubaloo and Bristol University hold third annual Appathon. [online] Available at: [Accessed 28 Oct. 2017].


Patten, J., Alonso, J. and Ishii, H. (2005). PICO. [online] Available at: [Accessed 30 Oct. 2017].

Pico 2006. (2012). [Video] MIT: MIT Tangible Media Group. Available at:


Netscapes: Week 1 – Part 2: Inspirations

AI & Deep Learning



Virtualitics AR office space (Virtualitics, 2017)

Virtualitics is a cross-platform application that merges AI, big data and AR/VR. The program uses deep learning to transform big data into easily understandable reports and data visualisations within a shared virtual office, helping companies to grow. (WIRE, 2017)

Typically, analysing big data is no easy task. When using large amounts of data, even with visualisation technology, it can be difficult to pick out useful information. The Virtualitics platform uses AI to manage this, by means of algorithms that determine which metrics matter depending on what you are most interested in learning from that data. (Siegel, 2017)

The Virtualitics platform acts as a base for presenting and analyzing big data, and can allow for up to 10 dimensions of data to be shared, giving companies a competitive edge.  (Takahashi, 2017)

The platform could be applied across many different industries, from universities to hospitals, and has already been used successfully in finance and scientific research. (Team, 2017)


Pros:
  • Highly interactive environment
  • Can be used in multiple business applications and settings
  • Makes big data accessible to everyone – even those who are untrained can easily access data.
  • Simple and easy to use – automatically turns data into useful graphs based on what you want to learn from them.


Cons:
  • 3D VR office space may not be appropriate for all applications.
  • VR headsets can be expensive – if the platform requires multiple headsets (such as for the shared office space), this could end up being quite costly for a company.

Augmented Reality



Ultrahaptics development kit (South West Business, 2016)

Ultrahaptics is a startup company based around allowing users to feel virtual objects in a physical sense. Using ultrasonic projections and hand tracking, the system lets users feel and interact with virtual environments, receiving real tactile feedback without needing to wear or hold any special equipment. (Ultrahaptics, 2017)

Ultrahaptics diagram (Ultrahaptics, 2015)

The system is built using an array of ultrasound emitters in conjunction with motion sensors. Haptic feedback is created by first defining a space in which to model the acoustic field. Within this field, focus points are created that have differing types and intensities of feedback. (Kevan, 2015) This allows users to use both hands simultaneously or to interact with multiple objects. (Kahn, 2016)


Pros:
  • Highly interactive – encourages user engagement
  • Can be used in multiple applications
  • Could make other AR and VR apps more immersive when used together
  • All-in-one development kit, with tools and support
  • Possibility to create multiple “objects” within 3D space


Cons:
  • In certain applications, physical buttons could be more appropriate
  • Users can still “push through” objects – they can be felt, but are not solid.
  • The platform can (and does!) create noise and vibrations; work is being done to minimize this, but it will most likely always be present.

Whilst this sort of technology is still in its infancy, it offers a promising insight into the future of interactive technologies. In future, it could be applied to 3D sculpt modelling and similar applications, or used to make VR and AR experiences much more immersive.



Digilens in-car HUD (Digilens, 2017)

Digilens combines AR and holographic technologies. They build AR screens for use in multiple applications, including inside car windshields and in aeroplanes. These screens can display real-time data, enhancing driver awareness and safety. (DigiLens, Inc., 2017)


Pros:
  • Fully customisable displays
  • Wide range of uses, both commercial and private
  • Can enhance driver awareness and road safety
  • Less bulky than traditional displays


Cons:
  • Could be distracting for drivers by taking their view away from the road
  • Cost of building the displays and adding them to cars

Interactive Art

After Dark
The Workers, 2014


Robot from After Dark, Tate Britain, 2014 (The Workers, 2014)

After Dark is an exhibition piece built using Raspberry Pi. It allows viewers to take control of and drive a robot, exploring the exhibitions of Tate Britain via live video feed after closing time. (The Workers, 2014)

It was created as a way to engage new audiences in art; allowing them to explore the exhibitions without even having to set foot inside the building. Whilst viewers were driving the robots, art experts provided live commentary, providing new insights and engagement into the pieces on display. (The Workers, 2014)

The robots were fitted with cameras and lighting, as well as sensors to ensure they could navigate the galleries without complication. (Tate, 2014)


Pros:
  • Highly interactive – encourages user engagement.
  • Acts as a platform for learning and exploration.
  • Live art expert commentary makes the experience more than just “driving a robot”.


Cons:
  • Could be costly to build and run.
  • Battery-powered robots – battery life is always a concern, particularly when the robots are connected to the internet and streaming for multiple hours.
  • Special measures must be taken to ensure the robots cannot damage museum exhibits.

Whilst this is an interesting idea, it is important to note that virtual museum tours already exist (such as video tours or even VR tours, which also sometimes provide commentary), and the act of driving the robot could be considered nothing more than a gimmick.

Glaciers
Zach Gage, 2016


Installation View (Gage, 2016)

Glaciers is an installation piece built using 40 Raspberry Pi systems, exploring the interactions between digital platforms (in this case, search engines) and humans. They are programmed to take the top three autocomplete suggestions that follow various phrases and display them on a screen, creating odd poetry that reflects the nature of the modern age. (Bate, 2016)

Although the screens appear static, the phrases are updated once a day based on the most popular auto-completes. Due to the nature of this, the poems could change daily, but are unlikely to. (Gage, 2016)


“He Says” by Zach Gage, part of “Glaciers” (Gage, 2017)


Pros:
  • Relatively cheap and simple to build – the technology behind it is easy to come by.
  • Simple, minimalist presentation
  • Concept understandable to most viewers


Cons:
  • Not interactive
  • Due to the nature of Google autocomplete, poems do not change often (sometimes not at all)

Further images of Glaciers can be seen here.



Virtualitics (2017). Virtualitics office space. [image] Available at: [Accessed 28 Oct. 2017].

WIRE, B. (2017). Virtualitics Launches as First Platform to Merge Artificial Intelligence, Big Data and Virtual/Augmented Reality. [online] Available at: [Accessed 28 Oct. 2017].

Virtualitics (2017). Virtualitics. [online] Available at: [Accessed 28 Oct. 2017].

Siegel, J. (2017). How this Pasadena startup is using VR and machine learning to help companies analyze data. [online] Built In Los Angeles. Available at: [Accessed 28 Oct. 2017].

Takahashi, D. (2017). VR analytics startup Virtualitics raises $4.4 million. [online] VentureBeat. Available at: [Accessed 28 Oct. 2017].

Team, E. (2017). Virtualitics: Caltech & NASA Scientists Build VR/AR Analytics Platform using AI & Machine Learning – insideBIGDATA. [online] insideBIGDATA. Available at: [Accessed 28 Oct. 2017].



South West Business (2016). Ultrahaptics development kit. [image] Available at: [Accessed 28 Oct. 2017].

Ultrahaptics. (2017). Ultrahaptics – A remarkable connection with technology. [online] Available at: [Accessed 28 Oct. 2017].

Ultrahaptics (2015). Ultrahaptics diagram. [image] Available at: [Accessed 28 Oct. 2017].

Kevan, T. (2015). Touch Control with Feeling | Electronics360. [online] Available at: [Accessed 28 Oct. 2017].

Kahn, J. (2016). Meet the Man Who Made Virtual Reality ‘Feel’ More Real. [online] Available at: [Accessed 28 Oct. 2017].


Digilens (2017). Digilens car HUD. [image] Available at: [Accessed 29 Oct. 2017].

DigiLens, Inc. (2017). Home – DigiLens, Inc.. [online] Available at: [Accessed 29 Oct. 2017].


The Workers (2014). After Dark Robot. [image] Available at: [Accessed 28 Oct. 2017].

After Dark. (2014). [online] Available at: [Accessed 28 Oct. 2017].

The Workers. (2014). The Workers: After Dark. [online] Available at: [Accessed 28 Oct. 2017].

Tate. (2014). IK Prize 2014: After Dark – Special Event at Tate Britain | Tate. [online] Available at: [Accessed 28 Oct. 2017].


Gage, Z. (2016). Installation View. [image] Available at: [Accessed 28 Oct. 2017].

Bate, A. (2016). Using Raspberry Pi to Create Poetry. [online] Raspberry Pi. Available at: [Accessed 28 Oct. 2017].

Gage, Z. (2016). ZACH GAGE – Glaciers @ Postmasters: March 25 – May 7, 2016. [online] Available at: [Accessed 28 Oct. 2017].

Gage, Z. (2017). He Says. [image] Available at: [Accessed 28 Oct. 2017].

Setting up Eduroam on Raspberry Pi

Anyone who has ever used Eduroam on a Raspberry Pi will know that it's no easy task to set up. Fortunately, it is possible; it just takes a lot of trial and error.

This has been tested on a Pi 2, a Pi 3, and a Model B+ with a WiFi adapter.

How to set up an Eduroam WiFi connection on a Raspberry Pi:

Firstly, you will need your university's network information, which varies between institutions. This guide was made (and tested) for Plymouth University, where the details were readily available on the university's website; look up your own university's details and adjust anything that differs.

Before you start, you may need to stop network connections:

sudo service networking stop

Warning: This will disable any currently open network connections – if you are using your Raspberry Pi over SSH, this will disconnect you, so be sure to do this with a mouse, keyboard and screen attached.


If you have used WiFi on a Raspberry Pi before, you may have noticed your password is stored in plain text – this is not okay! We can combat this by storing a hashed version instead. You can convert your password by opening a terminal and typing:

read -s input ; echo -n "$input" | iconv -t utf16le | openssl md4

then type in your password. It will print back a hashed version of your password. This needs to be added to the ‘wpa_supplicant.conf’ file as indicated later.
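If you find yourself doing this often, the same pipeline can be wrapped in a small helper script. This is just a sketch of my own (the function names are mine, not part of any tool); note that on newer OpenSSL releases the md4 digest may need the legacy provider enabled:

```shell
#!/bin/sh
# Sketch: produce the hashed (NT-hash) form of an eduroam password.
# Assumes `iconv` and an OpenSSL build with the md4 digest available
# (on OpenSSL 3.x this may require the legacy provider).

hash_eduroam_password() {
    # MD4 over the UTF-16LE encoding of the password (an NT hash)
    printf '%s' "$1" | iconv -t utf16le | openssl md4 -r | cut -d' ' -f1
}

# Format the result as the line wpa_supplicant expects
format_password_line() {
    printf 'password=hash:%s\n' "$1"
}

# Example (avoid typing real passwords into a command line; use `read -s`):
# format_password_line "$(hash_eduroam_password 'mypassword')"
```

Only the pipeline itself comes from the step above; everything else here is scaffolding for convenience.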


Editing the Config files

The two files we need to edit are ‘/etc/wpa_supplicant/wpa_supplicant.conf’ and ‘/etc/network/interfaces’. What you put into these files depends on your university’s network.

The first can be edited in the terminal by typing:

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

In ‘wpa_supplicant.conf’, add the following (this is a typical eduroam PEAP/MSCHAPv2 setup – double-check the EAP details your university specifies):

   ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev

   network={
       ssid="eduroam"
       scan_ssid=1
       key_mgmt=WPA-EAP
       eap=PEAP
       pairwise=CCMP TKIP
       identity="<eduroam username>"
       password=hash:<eduroam password>
       phase2="auth=MSCHAPV2"
   }

where <eduroam username> is your usual eduroam login and <eduroam password> is the hashed password.
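If you would rather not hand-edit the file, a network stanza can be generated and appended from the shell. The helper below is my own sketch and assumes the common PEAP/MSCHAPv2 eduroam setup – adjust the fields to whatever your university specifies:

```shell
#!/bin/sh
# Sketch: emit an eduroam network block for wpa_supplicant.conf.
# PEAP with MSCHAPv2 is assumed (common for eduroam, but not universal).

emit_eduroam_block() {
    user="$1"    # e.g. abc123@plymouth.ac.uk
    hash="$2"    # hashed password from the openssl md4 step
    cat <<EOF
network={
    ssid="eduroam"
    scan_ssid=1
    key_mgmt=WPA-EAP
    eap=PEAP
    pairwise=CCMP TKIP
    identity="$user"
    password=hash:$hash
    phase2="auth=MSCHAPV2"
}
EOF
}

# Append to the live config (needs root):
# emit_eduroam_block "you@university.ac.uk" "<hashed password>" | \
#     sudo tee -a /etc/wpa_supplicant/wpa_supplicant.conf
```

Generating the block this way also makes it easy to keep a working copy outside /etc that you can re-apply if the config gets clobbered.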

Next, edit ‘interfaces’ by typing into the terminal:

sudo nano /etc/network/interfaces

and adding in:

 auto lo wlan0
 iface lo inet loopback
 iface eth0 inet dhcp
 iface wlan0 inet dhcp
     wpa-driver wext
     wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
 iface default inet dhcp


You may also need your university's security certificate – this can usually be found with the other details for manually connecting to your university's WiFi. Once you have found it, add it to the folder ‘/etc/ssl/certs/’ and then link to it from inside the network block of your ‘wpa_supplicant.conf’ file by adding:

   ca_cert="/etc/ssl/certs/<NameofCert>"

where ‘/etc/ssl/certs/<NameofCert>’ is the name/location of the certificate needed.
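This copy-and-link step can also be scripted. Again a sketch of my own, with illustrative paths; note that the printed ca_cert line belongs inside the network block of ‘wpa_supplicant.conf’:

```shell
#!/bin/sh
# Sketch: copy the university's CA certificate into the system cert
# directory and print the ca_cert line to add to wpa_supplicant.conf.

install_eduroam_cert() {
    cert_src="$1"    # downloaded certificate file
    cert_dir="$2"    # normally /etc/ssl/certs
    cp "$cert_src" "$cert_dir/" || return 1
    printf 'ca_cert="%s/%s"\n' "$cert_dir" "$(basename "$cert_src")"
}

# Example (run with sudo for the real /etc/ssl/certs):
# install_eduroam_cert ./university-ca.pem /etc/ssl/certs
```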

Once this is done, you will need to run wpa_supplicant:

sudo wpa_supplicant -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf -B

You may need to reboot to get it to connect.
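Rather than rebooting blindly, you can poll wpa_supplicant to see whether the association has completed. A sketch using wpa_cli (which ships with wpa_supplicant); the interface name wlan0 is assumed:

```shell
#!/bin/sh
# Sketch: poll wpa_supplicant until wlan0 reports COMPLETED.

is_connected() {
    # Reads `wpa_cli status` output on stdin; succeeds on COMPLETED.
    grep -q '^wpa_state=COMPLETED'
}

wait_for_eduroam() {
    tries=0
    while [ "$tries" -lt 10 ]; do
        if wpa_cli -i wlan0 status 2>/dev/null | is_connected; then
            echo "connected"
            return 0
        fi
        tries=$((tries + 1))
        sleep 2
    done
    echo "not connected after 20s"
    return 1
}
```

If it never reaches COMPLETED, re-check the identity, hash, and EAP settings in the network block before suspecting the hardware.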


You may find that your Raspberry Pi resets key_mgmt to “none” on connecting to Eduroam and lists itself as “disassociated from Eduroam” – if this happens, you may find it easier to work on a copy of ‘wpa_supplicant.conf’ and overwrite the original with the Eduroam version.

Useful links

Eduroam for RasPi at Bristol University

Eduroam for RasPi at Cambridge University