Netscapes: Insight – IVT Testing

Today we did our final build and first live test in the Immersive Vision Theatre (IVT). We started by fitting the Raspberry Pi and touchscreen inside the plinth, then transporting the equipment to the dome ready for our presentation.


Fitting Pi3 + Touchscreen

Chris added wooden beams to support the weight of the Pi, as it will be under a lot of pressure when the touchscreen is in use. This should prevent the touchscreen from moving away from the plinth.


Setting up in the IVT – Modifying Code

Whilst in the IVT, Gintare updated her code to work better within the shape of the screen. She moved some of the key elements of the visuals so they were more centered within the dome, bringing them to the viewer’s attention.



Setting up the visualization

We transported the physical part of our project to the IVT and decided where to set it up. We then tested the project within the space to understand how it will look and feel to the viewers and how the colours will display in the dome.


Glass head with touchscreen interface

We took this as an opportunity to double-check our database connections were working. During this time we ran into issues with page refreshing (which I quickly resolved) and with internet connection, which we resolved by using a mobile access point.


Glass head interface in front of the projection.

We even invited Luke to test out our user interface, and have a go at inputting his own data into the visualization!


Luke testing out the user interface!


Head test with visualization within the dome.


Netscapes: Building Bluetooth Connections – Part 2

Today we had access to the physical side of the project, so I tested my Bluetooth code (see my previous post) with the Arduino side. Luckily, after pairing with the HC-05 Bluetooth component, the code worked the first time, with no debugging needed!


The Arduino side, with HC-05 Bluetooth component & Neopixel ring

Chris and I modified the Arduino code to output different lighting effects based on the character sent across Bluetooth. We decided on the default being Red, with a breathing effect (which I created for a previous project) and a rainbow spin effect.
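The character-to-effect dispatch can be sketched as follows (an illustrative Python version, not the actual Arduino code — the specific command characters are assumptions):

```python
# Illustrative sketch of the character-to-effect dispatch.
# The real logic runs in the Arduino sketch; the command
# characters below are assumptions for illustration.
EFFECTS = {
    "a": "red_breathing",  # default: red with a breathing effect
    "b": "rainbow_spin",   # sent when "Generate" is tapped
}

def handle_command(char):
    """Return the lighting effect for a character received over Bluetooth.

    Unknown characters fall back to the default breathing effect.
    """
    return EFFECTS.get(char, "red_breathing")
```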


Bluetooth message sent on tapping “Generate”

How it works

  • When the local server is started, it searches through paired devices to find the HC-05 module.
  • When it is found, it opens a connection and sends it the instruction to turn on.
  • When the generate button is pressed, a new message is sent across the connection instructing it to run the rainbow effect.
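The steps above can be sketched in Python against a mocked device list (the real implementation uses the Bluetooth-Serial-Port module for Node.js; the device records and message strings here are assumptions):

```python
def find_hc05(paired_devices):
    """Search the paired-device list for the HC-05 module's address."""
    for device in paired_devices:
        if device["name"] == "HC-05":
            return device["address"]
    return None

def run_installation(paired_devices, send, generate_pressed):
    """Mirror the flow above: connect and turn on at startup, then
    send the rainbow instruction if "Generate" is pressed."""
    address = find_hc05(paired_devices)
    if address is None:
        return False
    send(address, "on")  # instruction sent once the local server starts
    if generate_pressed:
        send(address, "rainbow")  # instruction sent on button press
    return True
```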

Critical analysis/Reflection

To begin with, we planned to use a separate mobile app to send user data over Bluetooth to the Arduino. Driving the lights from the same input as the user data instead adds a level of interactivity we would not have had with a separate phone app: a user instantly sees the effect their inputs have had, even before the visualization updates.

This also ties the piece together better, making it an all-in-one system rather than being split up.

Future Improvements

If we had more time, I would modify the code to react differently depending on some of the user inputted data, such as changing colours or effects based on values.



Netscapes: Building Bluetooth connections

To bring together the visualisation and physical prototype, I started working on a Bluetooth connection to the MongoDB connection code I previously built.


Physical prototype with HC-05 Bluetooth module

Since we already have the HC-05 Bluetooth module in place and working with the Bluetooth terminal input on mobile, I simply had to look up how to create an output system in our .js code to match the inputs we previously designed for the Arduino.


Initial flow diagram of program

I looked into how this could be done and began researching the Bluetooth-Serial-Port module for Node.js.

After getting to grips with how the library works, I experimented with creating a basic framework for opening a Bluetooth connection and sending a basic input. This code checks for a device with the correct name, finds the matching address, opens a connection and, if it is successful, sends the character ‘a’. When hooked up to the glass head model, this should activate the LED ring, making it light up.


My experimentation with BSP within the previously made MongoDB connection code



Issues encountered

  • Certain information is missing from the Bluetooth-Serial-Port NPM documentation – I had to work around this by searching for other uses of BSP to fill in the gaps.
  • The method for calling previously paired Bluetooth devices doesn’t work on Linux systems, so a workaround had to be made (looping through available connections and matching by name).

Next Steps

  • Update Arduino-side code: Modify existing code to include more interesting light effects, such as those I previously created for my ‘Everyware’ project. These would not be direct copies, but modifications of this pre-existing code, for a unique lighting effect.
  • Thoroughly test this code to ensure a secure connection is made and maintained for the duration of the installation.

Code Referencing/Libraries Used

Below is a list of the documentation I used as reference when building my code. Whilst code was not directly copied, it was heavily referenced from the documentation:

JS express –
JS json body parser –
JS path –
JS Mongo Client –

Netscapes: Making & MLabs

Today we worked further on bringing the project together, drawing together all our current work and making improvements where necessary.

MLabs/Visualization connection

I worked on building a connection to the mLab database, pulling data and using it to set the parameters of a circle. The code checks the database for a new entry every 15 seconds.
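The poll-and-compare logic can be sketched like this (a Python illustration; the actual connection code is JavaScript, and the document fields shown are assumptions):

```python
import time

def poll_once(fetch_latest, last_id):
    """Fetch the newest document and report it only if it is new.

    Returns (document, id); document is None when nothing has changed.
    """
    doc = fetch_latest()
    if doc is not None and doc["_id"] != last_id:
        return doc, doc["_id"]
    return None, last_id

def watch(fetch_latest, on_new, interval=15):
    """Check the database for a new entry every `interval` seconds."""
    last_id = None
    while True:
        doc, last_id = poll_once(fetch_latest, last_id)
        if doc is not None:
            on_new(doc)  # e.g. update the circle's parameters
        time.sleep(interval)
```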


Reading values from Database

For example, I set up mapping from the sliders to RGB: each slider takes a user value from 0 to 8, which is mapped to a number between 0 and 255 for three of the values (in this case the vars kind, trust and help). I also applied this mapping to the radius and speed of movement.
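That mapping is a simple linear scale; a minimal Python sketch (the variable names are borrowed from the post, but the helper itself is mine):

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Linearly map a value from one range onto another,
    like Arduino's map() function (truncating to an int)."""
    return int((value - in_min) * (out_max - out_min) / (in_max - in_min) + out_min)

# Slider values 0-8 become 0-255 colour channels:
kind, trust, help_value = 8, 4, 0
r = map_range(kind, 0, 8, 0, 255)        # 255
g = map_range(trust, 0, 8, 0, 255)       # 127 (127.5 truncated)
b = map_range(help_value, 0, 8, 0, 255)  # 0
```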

Next, Gintaré and Chris will build this into their visualisation in its current state.

User Interface Modifications

We then looked at Gintaré’s slider inputs and how they would look in the physical build.


First slider test in plinth (without the glass head or diffuser)

After reviewing both its looks and ease of interaction, we decided to make a few changes, such as making the text/scrollbar larger and removing the numbers from the sliders (as they do not display properly on the Raspberry Pi).

Gintaré made modifications based on these observations and we quickly reviewed it. We also decided to colour code each section of sliders to each section of the CANOE model. This not only breaks it up but makes it more visually appealing in a way that makes sense.


Touchscreen with enlarged scroll bar for ease of use.

We decided it would still be best to display the touchscreen with the stylus for ease of use as the sliders can still be difficult to use at this size.


Touch screen with colour coded sections (per canoe model)

Since the touchscreen has no right-click function enabled, once the app is full-screen it is very difficult to exit – meaning viewers won’t be able to (intentionally or accidentally!) close it.

We decided to bevel the edges surrounding the screen, as they make it difficult for users to reach the screen easily. This will also make the screen look more inviting by bringing it into the user’s view.

Connecting MongoDB/mLab to front-end

I started working on code to input values to the database using Gintaré’s previously made slider interface. This was built using Express, npm and Node.js; on Chris B’s recommendation, Express was used in place of PHP.

When run, the code hosts the necessary files (such as Gintaré’s sliders) on a local server, which sends the data to the remote server when “Generate” is pressed.


Since Node.js code is modular, we decided to keep the login details in a separate .js file (rather than censoring the MongoDB login details when pushing to GitHub).


Installing Node.js & npm to Raspberry Pi

Once this was up and running (and confirmed to work on mLab), I moved the files over and installed the necessary npm packages on my Raspberry Pi. I then tested the connection to mLab to ensure the data was coming through correctly.


Running the local server (Hosting the sliders form) on Raspberry Pi

We then put this server connection together with Gintaré’s updated user interface.


Data inserted into mLab via Raspberry Pi


Multiple documents in MongoDB database.

Now that we have data both coming into and out of the database, we are ready to move onto the next steps!

Next Steps

  • Finish Visualization
  • Put together final physical prototype (seat the Raspberry Pi, sort out power supplies, etc.)
  • Preview in IVT – test visualisations before presentation
  • (If time allows) Build a system that sets the colour of the head based on the last data entry.

Netscapes: Building – MongoDB & Python

This week I have focused on the building stage. After helping my team members get started with p5.js, I got to work building my parts of the project: the back-end and LED control.

Emotion/colour Sliders – Python & Arduino LED control

Part of our project includes the representation of emotions in a visual sense. We decided on creating this using a basic slider input, so I got to work developing it.

I built this using:

  • Raspberry Pi 3
  • Arduino
  • 5″ GPIO touchscreen
  • Python

I created my app using libraries including PySerial (for serial connections in Python) and tkinter (for rapid production of basic user interfaces). I decided to use Python as I have previous experience creating connections to Arduino with PySerial.

Building circuits

Firstly, I set up the Raspberry Pi 3 with the necessary drivers and fitted the touchscreen. I created a script on the desktop to open an on-screen keyboard (so I wouldn’t have to use a keyboard for setup later). I then built a basic circuit with an RGB LED and hooked it up to the Raspberry Pi.


My Raspberry Pi with GPIO touchscreen.


I started off by building a basic slider interface using tkinter and Python. I made sure it was appropriately scaled to the screen and then worked out how to get the output of the slider in real time. In this case, I used 1 to 8 to match our data input app.


Basic RGB LED wiring

Once the slider was working correctly, I set up a basic serial connection to the Arduino using PySerial. Since PySerial needs data to be sent as bytes, I had to make sure the characters sent were encoded. I then wrote a basic script on the Arduino side to receive the characters and change the colour of the RGB LED based on how far the slider was moved (in this case blue to yellow, for sad to happy).
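The encoding step looks something like this (a sketch; the serial port name and baud rate are assumptions, so the actual write is left commented out):

```python
def encode_for_serial(value):
    """PySerial's write() takes bytes, so encode the character first."""
    return str(value).encode("utf-8")

# On the Pi, the encoded slider value would be written to the Arduino
# over an open serial connection, e.g.:
#   import serial
#   ser = serial.Serial("/dev/ttyACM0", 9600)  # port/baud are assumptions
#   ser.write(encode_for_serial(slider_value))
```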

Link to my code on GitHub: 


My completed LED slider

My next steps are to further develop the user interface, and to figure out how to use this in conjunction with the other user inputs (for database connection).


Setting up the Database

I created a database in MongoDB and hosted it on mLab (due to a software conflict I couldn’t easily host it on my own server, so this was the next best thing!).

The database will hold all the input data from our app, and will be used in the creation of our visualization.


Document within MongoDB database

The next step is to connect this database up to the input app and visualization once they are both completed.

Related Links





Art & the Internet of Things

By Timo Arnall, Einar Sneve Martinussen & Jack Schulze

Immaterials is a collection of pieces centered around the increasing use of ‘invisible interfaces’ such as WiFi and mobile networks, and the impact they have on us. (Arnall, 2013)

Immaterials: Light Painting & WiFi explores the scale of WiFi networks in urban spaces, and translates signal strength into unique light paintings.

Immaterials: Light painting WiFi  (Arnall, 2011)

Immaterials also utilises a series of satellite-sensitive lamps that change light intensity according to the strength of the GPS signal received. (Arnall, 2013)

The Nemesis Machine
By Stanza


The Nemesis Machine in exhibition (Stanza, n.d.)

The Nemesis Machine is a travelling installation artwork. It uses a combination of digital-cities and IoT technology, visualising life in the city based on real-time data from wireless sensors and representing the complexities of cities and city life. (Stanza, n.d.)

exTouch
By Shunichi Kasahara, Ryuma Niiyama, Valentin Heun & Hiroshi Ishii

Incorporates touchscreen interactions into the real world. Users can touch objects shown in live video; dragging them across the screen and across physical space. (Kasahara et al., 2012)

exTouch in action (exTouch, 2013)

T(ether)
By Dávid Lakatos, Matthew Blackshaw, Alex Olwal, Zachary Barryte, Ken Perlin & Hiroshi Ishii

T(ether) is a platform for gestural interaction with objects in digital 3D space, with a handheld device acting as a window into virtual space. T(ether) has potential as a platform for 3D modelling and animation. (Lakatos et al., 2012)



Arnall, T. (2013). The Immaterials Project. [online] Elastic Space. Available at: [Accessed 1 Nov. 2017].

Arnall, T. (2011). Immaterials: Light Painting WiFi. [Video] Available at: [Accessed 1 Nov. 2017].


Stanza (n.d.). The Nemesis Machine Installation. [image] Available at: [Accessed 1 Nov. 2017]. (n.d.). The Nemesis Machine – From Metropolis to Megalopolis to Ecumenopolis. A real time interpretation of the data of the environment using sensors.. [online] Available at: [Accessed 1 Nov. 2017].


Kasahara, S., Niiyama, R., Heun, V. and Ishii, H. (2012). exTouch. [online] Available at: [Accessed 1 Nov. 2017].

exTouch. (2013). [Video] MIT Media Lab: MIT Media Lab. Available at: [Accessed 1 Nov. 2017].


Lakatos, D., Blackshaw, M., Olwal, A., Barryte, Z., Perlin, K. and Ishii, H. (2012). T(ether). [online] Available at: [Accessed 1 Nov. 2017].

Everyware: The Matter of the Immaterial

The brief for “Everyware” is entitled “The Matter of the Immaterial”, and is focused around ubiquitous computing and making the intangible tangible. I took this idea and used it as a starting point for some research into what is already available.




Ultrahaptics development kit (Ultrahaptics, 2015)

Ultrahaptics is a startup company focused on making the virtual world physical. Using an array of ultrasonic projectors and hand tracking, users can feel & interact with virtual environments, as well as feel real tactile feedback without the need for wearing or holding special equipment. (Ultrahaptics, 2017) Read more on my other blog post.


Ultrahaptics Diagram  (Ultrahaptics, 2015)

Ultrahaptics follows a similar concept to the Geomagic Touch X 3D pen (Previously known as Sensable Phantom Desktop), which I have used!



DaisyPi system (DaisyPi, 2017)

The Daisy Pi is a Raspberry Pi powered home monitoring system. It is fitted with multiple sensors including temperature, light intensity and humidity. It is also capable of capturing audio and video feeds, which can be accessed remotely by devices such as mobile phones or tablets. (Lopez, 2017)



Moon up close (Designboom, 2014)

Moon is an interactive installation piece created by Olafur Eliasson and Ai Weiwei. It invites viewers from around the globe to draw and explore a digital “Moonscape”. (Feinstein, 2014)

Eliasson and Weiwei’s work is focused around community and the link between the online and offline world. (Austen, 2013)

Over the course of its 4 years of existence, Moon grew from simple doodles and drawings, to collaborations & clusters of work, such as the “Moon Elisa”, where multiple users came together to recreate the classic Mona Lisa painting. (Cembalest, 2013)

“The moon is interesting because it’s a not yet habitable space so it’s a fantastic place to put your dreams.” – Olafur Eliasson, on Moon (Feinstein, 2014)

Illuminating Clay

Illuminating Clay is a platform for exploring 3D spatial models. Users can manipulate the clay into different shapes (even adding other objects), and using a laser scanner and projector, a height map is projected back onto the surface. It can also be used to work out data such as travel times and land erosion.  (Piper et al., 2002)

Physical Telepresence


Interaction through Physical Telepresence (Vice, 2015)

Physical Telepresence is a work created by students at MIT, based around shared workspaces and remote manipulation of physical objects. (Leithinger et al., 2014) The work consists of a pin-based surface that can be used to interact with physical objects. (Pick, 2015)

Near Field Creatures

Near Field Creatures is a game made by students as part of the Mubaloo annual appathon at Bristol University. Users scan NFC tags (such as those in certain student cards) and collect different animals of differing values. These collected animals can then be used to compete with other users. (Mubaloo, 2015)

Pico

Pico is an interactive work that explores human-computer interaction, allowing people and computers to collaborate in physical space. Pico is interacted with through pucks, which can be moved by both the computer and the user. (Patten, Alonso and Ishii, 2005)

PICO 2006 from Tangible Media Group on Vimeo. (Pico 2006, 2012)




Ultrahaptics (2015). Ultrahaptics Development Kit. [image] Available at: [Accessed 28 Oct. 2017].

Ultrahaptics. (2017). Ultrahaptics – A remarkable connection with technology. [online] Available at: [Accessed 28 Oct. 2017].

Ultrahaptics (2015). Ultrahaptics diagram. [image] Available at: [Accessed 28 Oct. 2017].


DaisyPi (2017). Daisy Pi Unit. [image] Available at: [Accessed 28 Oct. 2017].

Lopez, A. (2017). Daisy Pi | The home monitoring e-flower. [online] Available at: [Accessed 28 Oct. 2017].


Designboom (2014). Moon close up. [image] Available at: [Accessed 30 Oct. 2017].

Feinstein, L. (2014). Make Your Mark On The Moon With Olafur Eliasson and Ai Weiwei. [online] Creators. Available at: [Accessed 30 Oct. 2017].

Cembalest, R. (2013). How Ai Weiwei and Olafur Eliasson Got 35,000 People to Draw on the Moon | ARTnews. [online] ARTnews. Available at: [Accessed 30 Oct. 2017].

Austen, K. (2013). Drawing on a moon brings out people’s best and worst. [online] New Scientist. Available at: [Accessed 30 Oct. 2017].


Piper, B., Ratti, C., Wang, Y., Zhu, B., Getzoyan, S. and Ishii, H. (2002). Illuminating Clay. [online] Available at: [Accessed 30 Oct. 2017].


Vice (2015). Interaction with Physical Telepresence. [image] Available at: [Accessed 30 Oct. 2017].

Leithinger, D., Follmer, S., Olwal, A. and Ishii, H. (2014). Physical Telepresence. [online] Available at: [Accessed 30 Oct. 2017].

Pick, R. (2015). Watch a Robotic Floor Play with Blocks. [online] Motherboard. Available at: [Accessed 30 Oct. 2017].


Mubaloo. (2015). Mubaloo and Bristol University hold third annual Appathon. [online] Available at: [Accessed 28 Oct. 2017].


Patten, J., Alonso, J. and Ishii, H. (2005). PICO. [online] Available at: [Accessed 30 Oct. 2017].

Pico 2006. (2012). MIT: MIT Tangible Media Group. Available at:


Netscapes: Week 1 – Part 2: Inspirations

AI & Deep Learning



Virtualitics AR office space (Virtualitics, 2017)

Virtualitics is a cross-platform application that merges AI, Big Data & AR/VR. The program features deep learning to transform big data into easily understandable reports and data visualisations within a shared virtual office, helping companies to grow. (WIRE, 2017)

Typically, analysing big data is no easy task. When using large amounts of data, even with visualisation technology, it can be difficult to pick out useful information. The Virtualitics platform uses AI to manage this, by means of algorithms that determine which metrics matter depending on what you are most interested in learning from that data. (Siegel, 2017)

The Virtualitics platform acts as a base for presenting and analyzing big data, and can allow for up to 10 dimensions of data to be shared, giving companies a competitive edge.  (Takahashi, 2017)

The platform could be applied across many different industries, such as universities or hospitals, and has already been successfully applied in finance and scientific research. (Team, 2017)


Pros

  • Highly interactive environment
  • Can be used in multiple business applications and settings
  • Makes big data accessible to everyone – even those who are untrained can easily access data.
  • Simple and easy to use, automatically turns data into useful graphs based on what you want to learn from it.


Cons

  • A 3D VR office space may not be appropriate for all applications.
  • VR headsets can be expensive – If the platform requires multiple headsets (such as the shared office space) this could end up being quite costly for a company.

Augmented Reality



Ultrahaptics development kit (South West Business, 2016)

Ultrahaptics is a startup company based around allowing users to feel virtual objects in a physical sense. By using ultrasonic projections and hand tracking, users can feel & interact with virtual environments, as well as feel real tactile feedback without the need for wearing or holding special equipment. (Ultrahaptics, 2017)

Ultrahaptics diagram (Ultrahaptics, 2015)

The system is built using an array of ultrasound emitters in conjunction with motion sensors. Haptic feedback is created by first defining a space in which to model the acoustic field. Within this field, focus points are created with differing types and intensities of feedback. (Kevan, 2015) This allows users to use both hands simultaneously or to interact with multiple objects. (Kahn, 2016)


Pros

  • Highly Interactive – encourages user engagement
  • Can be used in multiple applications
  • Could make other AR and VR apps more immersive when used together
  • All in one development kit, tools and support.
  • Possibility to create multiple “objects” within 3D space.


Cons

  • In certain applications, physical buttons could be more appropriate
  • Users can still “push through” objects – they can be felt, but are not solid.
  • The platform can (and does!) create noise and vibrations; whilst work is being done to minimise this, it will most likely always be present.

Whilst this sort of technology is still in its infancy, it offers a promising insight into the future of interactive technologies. In future, it could be applied to uses such as 3D sculpt modelling and similar applications, or making much more immersive VR and AR experiences.



Digilens in-car HUD (Digilens, 2017)

Digilens combines AR and holographic technologies. They build AR screens for use in multiple applications, including inside car windshields and in aeroplanes. These screens can display real-time data, enhancing driver awareness and safety. (DigiLens, Inc., 2017)


Pros

  • Fully customisable displays.
  • Wide range of uses, both commercial and private.
  • Can enhance driver awareness & Road safety
  • Less bulky than traditional displays


Cons

  • Could be distracting for drivers by taking their view away from the road
  • Cost of building and adding to cars

Interactive Art

After Dark
The Workers, 2014


Robot from After Dark, Tate Britain, 2014 (The Workers, 2014)

After Dark is an exhibition piece built using Raspberry Pi. It allows viewers to take control of and drive a robot, exploring the exhibitions of Tate Britain via a live video feed after closing time. (, 2014)

It was created as a way to engage new audiences in art; allowing them to explore the exhibitions without even having to set foot inside the building. Whilst viewers were driving the robots, art experts provided live commentary, providing new insights and engagement into the pieces on display. (The Workers, 2014)

The robots were fitted with cameras and lighting, as well as sensors to ensure they could navigate the galleries without complication. (Tate, 2014)


Pros

  • Highly Interactive – encourages user engagement.
  • Acts as a platform for learning and exploration.
  • Live art expert commentary makes the experience more than just “driving a robot”.


Cons

  • Could be costly to build & run
  • Battery powered robots – battery life is always a concern, particularly when these robots are connected to the internet and streaming for multiple hours.
  • Special measures must be taken to ensure damage to museum exhibits doesn’t happen.

Whilst this is an interesting idea, it is important to note that virtual museum tours already exist (such as video tours or even VR tours, which also sometimes provide commentary), and the act of driving the robot could be considered nothing more than a gimmick.

Glaciers
Zach Gage, 2016


Installation View (Gage, 2016)

Glaciers is an installation piece built using 40 Raspberry Pi systems, exploring the interactions between digital platforms (in this case, search engines) and humans. They are programmed to take the top 3 autocomplete suggestions that follow various phrases and display them on a screen, creating odd poetry that reflects the nature of the modern age. (Bate, 2016)

Although the screens appear static, the phrases are updated once a day based on the most popular auto-completes. Due to the nature of this, the poems could change daily, but are unlikely to. (Gage, 2016)


“He Says” by Zach Gage, part of “Glaciers” (Gage, 2017)


Pros

  • Relatively cheap and simple to build – the technology behind it is relatively cheap and easy to come by.
  • Simplistic nature
  • Concept understandable to most viewers


Cons

  • Not interactive
  • Due to the nature of Google autocomplete, poems do not change often (sometimes not at all)

Further images of Glaciers can be seen here.



Virtualitics (2017). Virtualitics office space. [image] Available at: [Accessed 28 Oct. 2017].

WIRE, B. (2017). Virtualitics Launches as First Platform to Merge Artificial Intelligence, Big Data and Virtual/Augmented Reality. [online] Available at: [Accessed 28 Oct. 2017].

Virtualitics (2017). Virtualitics [online] Available at: [Accessed 28 Oct. 2017].

Siegel, J. (2017). How this Pasadena startup is using VR and machine learning to help companies analyze data. [online] Built In Los Angeles. Available at: [Accessed 28 Oct. 2017].

Takahashi, D. (2017). VR analytics startup Virtualitics raises $4.4 million. [online] VentureBeat. Available at: [Accessed 28 Oct. 2017].

Team, E. (2017). Virtualitics: Caltech & NASA Scientists Build VR/AR Analytics Platform using AI & Machine Learning – insideBIGDATA. [online] insideBIGDATA. Available at: [Accessed 28 Oct. 2017].



South West Business (2016). Ultrahaptics development kit. [image] Available at: [Accessed 28 Oct. 2017].

Ultrahaptics. (2017). Ultrahaptics – A remarkable connection with technology. [online] Available at: [Accessed 28 Oct. 2017].

Ultrahaptics (2015). Ultrahaptics diagram. [image] Available at: [Accessed 28 Oct. 2017].

Kevan, T. (2015). Touch Control with Feeling | Electronics360. [online] Available at: [Accessed 28 Oct. 2017].

Kahn, J. (2016). Meet the Man Who Made Virtual Reality ‘Feel’ More Real. [online] Available at: [Accessed 28 Oct. 2017].


Digilens (2017). Digilens car HUD. [image] Available at: [Accessed 29 Oct. 2017].

DigiLens, Inc. (2017). Home – DigiLens, Inc.. [online] Available at: [Accessed 29 Oct. 2017].


The Workers (2014). After Dark Robot. [image] Available at: [Accessed 28 Oct. 2017]. (2014). After Dark. [online] Available at: [Accessed 28 Oct. 2017].

The Workers. (2014). The Workers: After Dark. [online] Available at: [Accessed 28 Oct. 2017].

Tate. (2014). IK Prize 2014: After Dark – Special Event at Tate Britain | Tate. [online] Available at: [Accessed 28 Oct. 2017].


Gage, Z. (2016). Installation View. [image] Available at: [Accessed 28 Oct. 2017].

Bate, A. (2016). Using Raspberry Pi to Create Poetry. [online] Raspberry Pi. Available at: [Accessed 28 Oct. 2017].

Gage, Z. (2016). ZACH GAGE – Glaciers @ Postmasters: March 25 – May 7, 2016. [online] Available at: [Accessed 28 Oct. 2017].

Gage, Z. (2017). He Says. [image] Available at: [Accessed 28 Oct. 2017].

Setting up Eduroam on Raspberry Pi

Anyone who has ever used Eduroam on a Raspberry Pi will know that it’s no easy task to set up. Fortunately, it is possible – it just takes a lot of trial and error.

This has been tested on a Pi 2, a Pi 3, and a Model B+ with a WiFi adapter.

How to set up an Eduroam WiFi connection on a Raspberry Pi:

Firstly, you will need to find your university’s network information – this varies between universities. This guide was made (and tested) for Plymouth University, where the information was readily available on the university’s website; you will need to look up your own university’s details in case there are any differences.

Before you start, you may need to stop network connections:

sudo service networking stop

Warning: this will disable any currently open network connections – if you are using your Pi over SSH, this will disconnect it, so be sure to do this with a mouse, keyboard and screen attached.


If you have used WiFi on a Raspberry Pi before, you may have noticed your password is stored in plain text – this is not okay! We can combat this by hashing it. You can convert your password by opening a command prompt and typing in:

read -s input ; echo -n $input | iconv -t utf16le | openssl md4

then type in your password. It will feed back a hashed version of your password, which needs to be added to the ‘wpa_supplicant.conf’ file as indicated later.


Editing the Config files

The two files we need to edit are ‘/etc/wpa_supplicant/wpa_supplicant.conf’ and ‘/etc/network/interfaces’. What you put into these files depends on your university’s network.

The first can be edited in the terminal by typing:

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

in ‘wpa_supplicant.conf’, add:

    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    network={
        ssid="eduroam"
        scan_ssid=1
        key_mgmt=WPA-EAP
        eap=PEAP
        pairwise=CCMP TKIP
        identity="<eduroam username>"
        password=hash:<eduroam password>
        phase2="auth=MSCHAPV2"
    }

where <eduroam username> is your usual Eduroam login and <eduroam password> is the hashed password generated earlier. (As noted above, the exact contents depend on your university’s network.)

Next, edit ‘interfaces’ by typing into the terminal:

sudo nano /etc/network/interfaces

and adding in:

 auto lo wlan0
     iface lo inet loopback
     iface eth0 inet dhcp
     iface wlan0 inet dhcp
        wpa-driver wext
        wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
     iface default inet dhcp


You may also need your university’s security certificate – this can usually be found with the other details for manually connecting to your university’s WiFi. Once you have found it, add it to the folder ‘/etc/ssl/certs/’ and then link back to it from within your ‘wpa_supplicant.conf’ file by adding:


ca_cert="/etc/ssl/certs/<NameofCert>"

where ‘<NameofCert>’ is the name of the certificate you added to ‘/etc/ssl/certs/’.

Once this is done, you will need to run wpa_supplicant:

sudo wpa_supplicant -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf -B

You may need to reboot to get it to connect.


You may find that your Raspberry Pi resets key_mgmt to “none” on connecting to Eduroam and lists itself as “disassociated from Eduroam” – if this is the case, you may find it easier to work on a copy of ‘wpa_supplicant.conf’ and overwrite the original with the Eduroam version.

Useful links

Eduroam for RasPi at Bristol University

Eduroam for RasPi at Cambridge University

Digital Cities: The future of urban life

In the modern age, everything is becoming smart – phones, televisions and even home appliances. But the ‘smart movement’ is also taking place on a much bigger scale: whole cities are becoming smart.

But what is a smart city? Whilst there is no agreed definition, smart cities are based around using technology to create solutions to the problems of modern life. For example, Barcelona has introduced ‘smart traffic lights’ that provide “green light corridors” for emergency service vehicles, as well as new bus services that use technology to ‘ensure the system is managed effectively’.

I created this short video to give a basic explanation of smart cities and their aims:

Creating my Digital City Visualizations

Because of the lack of local data available, I had to use mock data and data from other cities – in this case Bristol – to test my app.

I made two visualizations using PHP – one takes the percentage of residents happy with their local green areas and represents it as the number of living and dead flowers in a field; the other takes the number of shopping trolleys found in rivers and represents them as dead fish in a river.
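The data step behind the flowers visualisation is a simple proportion; a Python sketch for illustration (the original was written in PHP, and the total flower count here is an assumption):

```python
def flower_counts(percent_happy, total_flowers=20):
    """Split a field of flowers into living and dead according to the
    percentage of residents happy with their local green areas."""
    living = round(total_flowers * percent_happy / 100)
    return living, total_flowers - living
```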

Building the Automated Home


Raspberry Pi checking weather

Next I made a miniature model of a house fitted with a Raspberry Pi and an Arduino. The Arduino was wired up to a selection of sensors and servo motors, with a small screen on top. I programmed the Raspberry Pi to read in live online data, such as weather, sunset time and temperature. If the weather was bad, the servo motors would spin and the windows would shut; if the weather was dry and sufficiently warm, the windows would open.
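The decision logic can be sketched like this (an illustrative Python version; the weather condition names and temperature threshold are assumptions, and the real code also drove the servos):

```python
def window_action(weather, temperature_c, warm_threshold=15):
    """Decide whether the servos should open or shut the model windows."""
    if weather in ("rain", "snow", "storm"):
        return "shut"  # bad weather: spin the servos to shut the windows
    if temperature_c >= warm_threshold:
        return "open"  # dry and sufficiently warm: open the windows
    return "shut"
```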

The Raspberry Pi was connected to a Unicorn HAT, which I set up to scroll text according to the weather data – for example, if it was rainy, it would scroll the word “Rain” in blue.


Rain Sensor allowing for viewer interaction


The house was also wired up with sensors that would override the online data inputs, such as in the case of unexpected rain showers. This also allowed for viewer interaction during exhibition.

Related Links

News Report: Bristol UK’s leading Digital City outside London

Bristol Open Data