Inspirational Artworks


EELS 3D Projection-Mapping game
By Leo Seeley, 2011

Video: EELS projection mapping multiplayer game (Seeley, 2011)

EELS is an interactive multiplayer game that brings together three-dimensional projection mapping and mobile application design. Users control the movement of an eel as it travels through 3D space. (Casperson, 2018)

Ohne Titel (Hello World.) / Untitled (Hello World.)
By Valentin Ruhry, 2011 

Ohne Titel (Hello World) – Installation (Ruhry, 2018)

Reciprocal Space
By Ruari Glynn, 2005

Reciprocal Space challenges the perception of buildings as solid, unchanging spaces. (We Make Money Not Art, 2005)

Video: Reciprocal Space in action. (Glynn, 2011)

The Agency at the End of Civilization
By Stanza, 2014

Video: Agency at the End of Civilization (Stanza, 2014)

This installation uses real-time data from UK car number plate recognition systems across the South of England.

The piece includes 24 screens, multiple speakers and CCTV cameras, placing the audience in the role of the observer. (Stanza, 2014)


Seeley, L. (2011). EELS projection mapping multiplayer game. [Video] Available at: [Accessed 31 Jan. 2018].

Casperson, M. (2018). Projection Mapping Multiplayer Game – Projection Mapping Central. [online] Projection Mapping Central. Available at: [Accessed 31 Jan. 2018].

Ruhry, V. (2018). Ohne Titel (Hello world) – Installation. [image] Available at: [Accessed 31 Jan. 2018].

We Make Money Not Art. (2005). Reciprocal Space. [online] Available at: [Accessed 31 Jan. 2018].

Glynn, R. (2011). Reciprocal Space. [Video] Available at: [Accessed 31 Jan. 2018].

Stanza (2014). The Agency at the End of Civilization. [online] Available at: [Accessed 31 Jan. 2018].



Netscapes: Insight – Final Presentation

Insight: The Big Five is an interactive artwork inspired by psychology and personality. Users are invited to take a short personality test on the built-in touchscreen and see their results displayed in real time within the Immersive Vision Theatre (IVT). The glass head also reacts to the input data with a variety of lighting effects.

Here are some photos of our final presentation within the IVT.


User inputs


User input with changing visuals


User inputs



Netscapes: Insight – Critical Analysis & Reflection

Whilst some aspects of the project could have gone better, overall I consider it a success.

We had trouble settling on an idea to begin with; although we knew roughly which technologies we wanted to work with, it took a few weeks of discussion and planning to fully settle on a concept. We went from building small robots with limited user interaction to a fully fledged user-interaction-based installation piece, and from small-scale, organically inspired projection mapping to abstract visualisations within the IVT.

The project was also subject to many changes as time went on. This is a natural part of the process, although it does mean the final piece is quite different from the initial idea.

Below are some of the choices we made and why I feel they were effective:

  • Heavy concept/research basis: Our project had a strong background of research behind it – every design choice is backed by reasoning.
  • Immersive Vision Theatre (IVT): We chose the IVT because it offers fully surrounding visuals and soundscapes – much like your personality shapes the way you view the world, the dome gives the feeling of being “inside” the head of the user. We also made use of the surround sound system, adding a further dimension to the experience.
  • All-in-one interface: Instead of using two interfaces (the Pi for inputting user data, and a mobile app to change the head colour), we decided to bundle both into one input (the Pi). This works much better, as it merges both sides of the project and helps maintain the user’s immersion.
  • Multiple wireless connections: We used both WiFi and Bluetooth to form one seamless connection, which helps keep the piece all-in-one. Whilst this could have been done over a serial connection (see previous post), we already had the Bluetooth framework in place, so we made use of it rather than changing the code again.

What could have gone better:

  • A ‘Plan B’ for the internet connection: internet access in the dome is unreliable, and setting up Eduroam can be difficult on certain platforms. The difficulty is finding a workaround that still satisfies the requirements of the brief.
  • More user inputs: Make the visualization take more users’ data inputs and display them at once. This means changing both how the visualization works and how the database read works, but it would have been implemented had the project carried on longer.
  • Stronger visuals: Much more organic and interesting visuals to watch, incorporating more of the inputs.

Although we had some issues with group dynamics and the overall flow of the process, we were able to work around this and effectively work together to create something we are all proud of!

Netscapes: Insight – IVT Testing

Today we did our final build and first live test in the Immersive Vision Theatre (IVT). We started by fitting the Raspberry Pi and touchscreen inside the plinth, then transporting the equipment to the dome ready for our presentation.


Fitting Pi3 + Touchscreen

Chris added wooden beams to support the weight of the Pi, as it will be under a lot of pressure when the touchscreen is in use. This should stop the touchscreen being pushed away from the plinth.


Setting up in the IVT – Modifying Code

Whilst in the IVT, Gintaré updated her code to work better within the shape of the screen. She moved some of the key elements of the visuals so they were more centred within the dome, bringing them to the viewer’s attention.



Setting up the visualization

We transported the physical part of our project to the IVT and decided where to set it up. We then tested the project within the space to understand how it would look and feel to viewers and how the colours would display in the dome.


Glass head with touchscreen interface

We took this as an opportunity to double-check our database connections were working. During this time we ran into issues with page refreshing (which I quickly resolved) and with the internet connection, which we worked around by using a mobile access point.


Glass head interface in front of the projection.

We even invited Luke to test out our user interface, and have a go at inputting his own data into the visualization!


Luke testing out the user interface!


Head test with visualization within the dome.

Netscapes: Building Bluetooth Connections – Part 2

Today we had access to the physical side of the project, so I tested my Bluetooth code (see my previous post) against the Arduino side. Luckily, after pairing with the HC-05 Bluetooth component, the code worked first time, with no need for debugging!


The Arduino side, with HC-05 Bluetooth component & Neopixel ring

Chris and I modified the Arduino code to output different lighting effects based on the character sent across Bluetooth. We made red the default, with a breathing effect (which I created for a previous project), plus a rainbow spin effect.


Bluetooth message sent on tapping “Generate”

How it works

  • When the local server is started, it searches through paired devices to find the HC-05 module.
  • When it is found, it opens a connection and sends it the instruction to turn on.
  • When the generate button is pressed, a new message is sent across the connection instructing it to run the rainbow effect (see the sketch below).
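
For illustration, here is a minimal sketch of the sending side, assuming the Bluetooth-Serial-Port connection from the previous post is already open; the command characters (‘a’ for on, ‘b’ for the rainbow effect) and the /generate route are assumptions for this example, not our exact code:

    // Sketch only: send light-effect commands over an already-open
    // bluetooth-serial-port connection when the form posts to /generate.
    const express = require('express');
    const { BluetoothSerialPort } = require('bluetooth-serial-port');

    const app = express();
    const btSerial = new BluetoothSerialPort();
    // ...assume btSerial.connect(...) succeeded at server start-up,
    // at which point 'a' (turn on) was sent...

    app.post('/generate', (req, res) => {
      // 'b' is an assumed command character: the Arduino runs the rainbow effect
      btSerial.write(Buffer.from('b', 'utf-8'), (err) => {
        if (err) console.error('Bluetooth write failed:', err);
      });
      res.sendStatus(200);
    });

    app.listen(3000);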

Critical analysis/Reflection

To begin with, we were going to use a separate mobile app to send user data across Bluetooth to the Arduino. Switching to the same input as the user data adds a level of interactivity we would not have had with a separate phone app: a user instantly sees the effect their inputs have had, even before the visualization updates.

This also ties the piece together better, making it an all-in-one system rather than being split up.

Future Improvements

If we had more time, I would modify the code to react differently depending on some of the user inputted data, such as changing colours or effects based on values.



Netscapes: Building Bluetooth connections

To bring together the visualisation and physical prototype, I started working on a Bluetooth connection to the MongoDB connection code I previously built.


Physical prototype with HC-05 Bluetooth module

Since we already have the HC-05 Bluetooth module in place and working with a Bluetooth terminal input on mobile, I simply had to work out how to create an output system in our .js code to match the inputs we previously designed for the Arduino.


Initial flow diagram of program

I looked into how this could be done and began researching the Bluetooth-Serial-Port module for Node.js.

After getting to grips with how the library works, I experimented with creating a basic framework for opening a Bluetooth connection and sending a basic input. This code checks for a device with the correct name, finds the matching address, opens a connection and, if successful, sends the character ‘a’. When hooked up to the glass head model, this should activate the LED ring, making it light up (a minimal sketch of this flow is shown below).


My experimentation with BSP within the previously made MongoDB connection code
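
For reference, a minimal sketch of that flow using the Bluetooth-Serial-Port API, assuming the module advertises itself under the name ‘HC-05’:

    // Sketch only: discover the HC-05 by name, connect, and send 'a'
    // (the character the Arduino side treats as "turn the LED ring on").
    const { BluetoothSerialPort } = require('bluetooth-serial-port');
    const btSerial = new BluetoothSerialPort();

    btSerial.on('found', (address, name) => {
      if (name !== 'HC-05') return; // match by name rather than a paired-device list
      btSerial.findSerialPortChannel(address, (channel) => {
        btSerial.connect(address, channel, () => {
          btSerial.write(Buffer.from('a', 'utf-8'), (err) => {
            if (err) console.error('write failed:', err);
          });
        }, () => console.error('cannot connect'));
      });
    });

    btSerial.inquire(); // scan for nearby devices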



Problems encountered

  • Certain information is missing from the Bluetooth-Serial-Port NPM documentation – I had to work around this by searching for other uses of BSP to fill in the gaps.
  • The method for listing previously paired Bluetooth devices doesn’t work on Linux systems, so a workaround had to be made (looping through available connections and matching on the device name).

Next Steps

  • Update the Arduino-side code: Modify the existing code to include more interesting light effects, such as those I previously created for my ‘Everyware’ project. These would not be direct copies, but modifications of pre-existing code, for a unique lighting effect.
  • Thoroughly test this code to ensure a secure connection is made and maintained for the duration of the installation.

Code Referencing/Libraries Used

Below is a list of the documentation I used as reference when building my code. Whilst code was not directly copied, it was heavily referenced from the documentation:

JS express –
JS json body parser –
JS path –
JS Mongo Client –

Everyware: The Icebreaker

Presentation Link:

The Icebreaker T-shirt is a unique tool for connecting people. It is designed for use at conventions, open days and other large social events. Plymouth University alone has approximately 23,000 students, many of whom attend open days and freshers’ fairs. (University of Plymouth, 2018) With the phrase “It’s not what you know but who you know” truer than ever, it is vital to grow your social circle and create opportunities for both now and the future.


Icebreaker mobile app

The Icebreaker mobile app accompanies the T-shirt, allowing users to input their login details as well as the hobbies and interests to match on. The app is simple – input your details and go – so your attention is always in the moment rather than on your phone.

Watch our promotional video:

The interactive T-shirt takes the immaterial concept of shared interests and brings it to life in a visual way. When you meet someone with similar interests, your shirt lights up, telling you exactly who you have matched with and which interests you share. The accompanying app tells you exactly how close the person you matched with is, so you never miss an opportunity to meet someone.

T-shirt designs

Icebreaker Tshirt designs

The Icebreaker T-shirt allows wearers to expand their social circle and meet new people – Perfect for university freshers fairs and open days.

Security features

Don’t want to match with anyone? Only want to match with people of a similar age? Don’t want to share your location? No problem. With the Icebreaker mobile app, you can choose from a range of privacy options.

Future updates include:

  • Personalisation: Customisation of colours and lighting effects based on preferences in the mobile app.
  • More Matches: A more extensive list of hobbies and interests to choose from.
  • Washable: Switching to washable conductive threads & removable LED panels, allowing the T-shirt to be reused


University of Plymouth. (2018). Facts and figures. [online] Available at: [Accessed 19 Jan. 2018].


Netscapes: Technologies

Technologies we have used or have thought about using:



Browserify allows you to bundle all of your Node.js-style require() dependencies into a single file. Useful when you have multiple dependencies! (Browserify.org, 2018)

This would be perfect for when we are building our server connections, as these require multiple packages.
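
As a quick illustration of the idea (lodash.shuffle stands in here as an example dependency):

    // main.js – written Node-style; Browserify resolves the require()
    // calls and bundles everything into a single browser-ready file.
    const shuffle = require('lodash.shuffle'); // example dependency
    console.log(shuffle([1, 2, 3]));

Bundling is then a one-liner: browserify main.js -o bundle.js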


TouchOSC enables you to easily build touch-based interfaces for mobile devices. It allows you to rapidly build user interfaces with buttons, switches, sliders etc. (Hexler.net, 2018)

Although this is primarily for use with sound control, it was suggested that we try to repurpose it. We were going to use it for emotion input to influence the head model’s colour, before we switched to a single input from the Raspberry Pi.

References:

Browserify.org (2018). Browserify. [online] Available at: [Accessed 17 Jan. 2018].

Hexler.net (2018). TouchOSC. [online] Available at: [Accessed 17 Jan. 2018].

Netscapes: Making & MLabs

Today we worked further on bringing the project together, drawing together all our current work and making improvements where necessary.

MLabs/Visualization connection

I worked on building a connection to the mLab database, pulling data and using the values as parameters for a circle. The code checks the database for a new entry every 15 seconds.


Reading values from Database
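
A minimal sketch of that polling loop, in the callback style of the Node MongoDB driver we were using (the collection name and the updateVisuals hook are assumptions):

    // Sketch only: poll the mLab-hosted database every 15 seconds for
    // the most recent document and hand it to the visualisation.
    const { MongoClient } = require('mongodb');
    const { MLAB_URI } = require('./credentials'); // login kept in a separate file

    MongoClient.connect(MLAB_URI, (err, db) => { // driver 2.x callback style
      if (err) throw err;
      setInterval(() => {
        db.collection('inputs') // assumed collection name
          .find().sort({ _id: -1 }).limit(1)
          .toArray((err, docs) => {
            if (!err && docs.length) updateVisuals(docs[0]); // hypothetical hook
          });
      }, 15000);
    });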

For example, I set up a mapping from sliders to RGB: each slider gives a value from 0 to 8, which is mapped to a number between 0 and 255 for three of the values (in this case the vars kind, trust and help). I applied the same approach to the radius and speed of movement.
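
The mapping itself is simple linear scaling; a sketch using the field names above (doc stands for the latest document pulled from the database):

    // Scale a slider value (0–8) to an RGB channel (0–255).
    function mapRange(value, inMin, inMax, outMin, outMax) {
      return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
    }

    const r = mapRange(doc.kind, 0, 8, 0, 255);
    const g = mapRange(doc.trust, 0, 8, 0, 255);
    const b = mapRange(doc.help, 0, 8, 0, 255);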

Next, Gintaré and Chris will build this into their visualisation in its current state.

User Interface Modifications

We then looked at Gintaré’s slider inputs and how they would look in the physical build.


First slider test in plinth (without the glass head or diffuser)

After reviewing both its looks and ease of interaction, we decided to make a few changes, such as making the text/scrollbar larger and removing the numbers from the sliders (as they do not display properly on the Raspberry Pi).

Gintaré made modifications based on these observations and we quickly reviewed them. We also decided to colour code each section of sliders to a section of the CANOE model. This not only breaks the form up but makes it more visually appealing in a way that makes sense.


Touchscreen with enlarged scroll bar for ease of use.

We decided it would still be best to display the touchscreen with the stylus for ease of use as the sliders can still be difficult to use at this size.


Touch screen with colour coded sections (per CANOE model)

Since the touchscreen has no right-click function enabled, once the app is full-screen it is very difficult to exit – meaning viewers won’t be able to (intentionally or accidentally!) close it.

We decided to bevel the edges that surround the screen, as they make it difficult for users to reach the screen easily. This will also make the screen look more inviting by bringing it into the user’s view.

Connecting MongoDB/mLab to front-end

I started working on code to input values into the database using Gintaré’s previously made slider interface. This was built using Express, npm and Node.js. On Chris B’s recommendation, Express was used in place of PHP.

When run, the code hosts the necessary files (such as Gintaré’s sliders) on a local server, which sends the data to the remote server when “Generate” is pressed.
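
A stripped-down sketch of that local server (the route, collection and folder names are assumptions; the real code also serves Gintaré’s slider form):

    // Sketch only: serve the slider form locally and forward submitted
    // values to the mLab-hosted MongoDB when "Generate" is pressed.
    const express = require('express');
    const bodyParser = require('body-parser');
    const path = require('path');
    const { MongoClient } = require('mongodb');
    const { MLAB_URI } = require('./credentials'); // login kept in a separate file

    const app = express();
    app.use(bodyParser.json());
    app.use(express.static(path.join(__dirname, 'public'))); // the slider form

    MongoClient.connect(MLAB_URI, (err, db) => { // driver 2.x callback style
      if (err) throw err;
      app.post('/generate', (req, res) => { // assumed route name
        db.collection('inputs').insertOne(req.body, (err) => {
          res.sendStatus(err ? 500 : 200);
        });
      });
      app.listen(3000);
    });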


Since Node.js code is naturally modular, we decided to put the login details in a separate .js file (rather than having to censor the MongoDB login details when pushing to GitHub).
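
That separate file only needs to export the connection string; something like the following assumed layout (placeholder values, file excluded from the repo):

    // credentials.js – kept out of version control
    module.exports = {
      MLAB_URI: 'mongodb://<user>:<password>@<host>/<database>' // placeholders
    };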


Installing Node.js & npm to Raspberry Pi

Once this was up and running (and confirmed to work with mLab), I moved the files across and installed the necessary npm packages on my Raspberry Pi. I then tested the connection to mLab to ensure the data was coming through correctly.


Running the local server (Hosting the sliders form) on Raspberry Pi

We then put this server connection together with Gintaré’s updated user interface.


Data inserted into mLab via Raspberry Pi


Multiple documents in MongoDB database.

Now that we have data both coming into and out of the database, we are ready to move onto the next steps!

Next Steps

  • Finish Visualization
  • Put together the final physical prototype (seat the Raspberry Pi, sort out power supplies etc.)
  • Preview in IVT – test visualisations before presentation
  • (If time allows) Make a system to colour the head based on the last data entry.

Netscapes: Building

Today we focused on finishing developing the physical side of our project.


The LED ring powered glass head. Colours are being chosen by a mobile phone (off screen).

The second NeoPixel ring arrived, so we soldered on some headers and built a circuit board for it. We installed the necessary libraries in the Arduino IDE and programmed the ring to output different colours.

We then merged this code with the Bluetooth code Chris made earlier.

We mounted the Arduino, breadboard and Bluetooth module to the interior of the plinth. We drilled holes into the head’s base to accommodate the wiring for the RGB LED ring.


The Arduino & Circuitry mounted inside the plinth.

This LED ring is brighter than the other, even inside the diffuser, so it is even better for our finalised build!


The LED ring mounted in the plinth.

Our next steps:

  • I will develop an app to send characters across the Bluetooth connection, enabling us to remotely change the colour of the head without needing the Bluetooth terminal.
  • I need to build the server connections for our visualization, input sliders and MySQL database.
  • Gintaré and Chris need to complete the visualization for the IVT.