Netscapes: Insight – Critical Analysis & Reflection

Whilst some aspects could have gone better, overall I consider the project to be a success.

We had many issues settling on an idea to begin with. Although we knew roughly which technologies we wanted to work with, it took a few weeks of discussion and planning to fully settle on an idea. We went from building small robots with limited user interaction to a fully fledged user-interaction-based installation piece, and moved from small-scale, organically-inspired projection mapping to abstract visualisations within the IVT.

The project was, of course, subject to many changes as time went on. This is a natural part of the process, although it does mean our final piece is quite different from the initial idea.

Below are some of the choices we made and why I feel they were effective:

  • Heavy concept/research basis: Our project had a strong background of research behind it – every choice we made has reasoning to support it.
  • Immersive Vision Theatre (IVT): We chose the IVT because it offers fully surrounding visuals and soundscapes – much as your personality shapes the way you view the world, the dome reflects the feeling of being “inside” the head of the user. We also made use of the surround sound system, adding a further dimension to the experience.
  • All-in-one interface: Instead of using two interfaces (the Pi for inputting user data, and a mobile app to change the head colour) we decided to bundle everything into one input (the Pi). This works much better as it merges both sides of the project, helping to maintain the user’s immersion.
  • Multiple wireless connections: We used both WiFi and Bluetooth to create one seamless connection, which helps keep the piece all-in-one. Whilst this could have been done over a serial connection (see previous post), we already had the Bluetooth framework in place, so we decided to make use of it rather than change the code again.

What could have gone better:

  • ‘Plan B’ for internet connection: Internet access in the dome is unreliable, and setting up Eduroam can be difficult on certain platforms. The difficulty is finding a workaround that still satisfies the requirements of the brief.
  • More user inputs: Make the visualization take in several users’ data inputs and display them at once. This means changing both the way the visualization works and how the database is read, but it would have been implemented had the project carried on longer.
  • Stronger visuals: Create more organic and interesting visuals to watch, incorporating more of the inputs.

Although we had some issues with group dynamics and the overall flow of the process, we were able to work around them and work together effectively to create something we are all proud of!

Netscapes: Insight – IVT Testing

Today we did our final build and first live test in the Immersive Vision Theatre (IVT). We started by fitting the Raspberry Pi and touchscreen inside the plinth, then transporting the equipment to the dome ready for our presentation.

IMG_20180122_160933

Fitting Pi3 + Touchscreen

Chris added wooden beams to support the weight of the Pi, as it will be under a lot of pressure when the touchscreen is in use. This should prevent the touchscreen moving away from the plinth.

IMG_20180122_150137.jpg

Setting up in the IVT – Modifying Code

Whilst in the IVT, Gintare updated her code to work better within the shape of the screen. She moved some of the key elements of the visuals so they were more centered within the dome, bringing them to the viewer’s attention.

vizlaptop.png

Setting up the visualization

We transported the physical part of our project to the IVT and decided where to set it up. We then tested the project within the space to understand how it will look and feel to the viewers and how the colours will display in the dome.

head interface.png

Glass head with touchscreen interface

We took this as an opportunity to double-check that our database connections were working. During this time we ran into issues with page refreshing (which I quickly fixed) and with the internet connection, which we resolved by using a mobile access point.

headdemo.png

Glass head interface in front of the projection.

We even invited Luke to test out our user interface, and have a go at inputting his own data into the visualization!

head interaction.png

Luke testing out the user interface!

dometest

Head test with visualization within the dome.

Netscapes: Building Bluetooth connections

To bring together the visualisation and the physical prototype, I started adding a Bluetooth connection to the MongoDB code I previously built.

IMG_20180113_141644

Physical prototype with HC-05 Bluetooth module

Since we already have the HC-05 Bluetooth module in place and working with the Bluetooth terminal input on mobile, I simply had to look up how to create an output system in our .js code to match the inputs we previously designed for the Arduino.

BSP design.jpg

Initial flow diagram of program

I looked into how this could be done and began researching the Bluetooth-Serial-Port (BSP) module for Node.js.

After getting to grips with how the library works, I experimented with creating a basic framework for opening a Bluetooth connection and sending a basic input. This code checks for a device with the correct name, finds the matching address, opens a connection and, if successful, sends the character ‘a’. When hooked up to the glass head model, this should activate the LED ring, making it light up.
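
In outline, the flow looks something like the sketch below (a minimal example using the bluetooth-serial-port module; the device name ‘HC-05’ and the error handling are assumptions based on our setup, not the final code):

// Minimal sketch: find the HC-05 by name, connect, then send 'a'
// ('HC-05' is assumed to be the module's broadcast name)
var btSerial = new (require('bluetooth-serial-port')).BluetoothSerialPort();

btSerial.on('found', function (address, name) {
  // Workaround: match by name, as listing previously paired devices fails on Linux
  if (name !== 'HC-05') return;
  btSerial.findSerialPortChannel(address, function (channel) {
    btSerial.connect(address, channel, function () {
      // Connection open – send 'a' to trigger the LED ring on the Arduino side
      btSerial.write(Buffer.from('a', 'utf-8'), function (err) {
        if (err) console.error('Write failed:', err);
      });
    }, function () { console.error('Cannot connect'); });
  }, function () { console.error('No serial channel found'); });
});

btSerial.inquire(); // scan for nearby Bluetooth devices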

bluetooth serial build code

My experimentation with BSP within the previously made MongoDB connection code

Issues

  • Certain information is missing from the Bluetooth-Serial-Port npm documentation – I worked around this by searching for other uses of BSP to fill in the gaps
  • The method for calling previously paired Bluetooth devices doesn’t work on Linux systems, so a workaround had to be made (looping through available connections and matching by name)

Next Steps

  • Update the Arduino-side code: modify the existing code to include more interesting light effects, such as those I previously created for my ‘Everyware’ project. These would not be direct copies, but modifications of the pre-existing code, giving a unique lighting effect.
  • Thoroughly test this code to ensure a secure connection is made and maintained for the duration of the installation.

Code Referencing/Libraries Used

Below is a list of the documentation I used as a reference when building my code. Whilst no code was copied directly, it was heavily referenced from the documentation:

bluetooth-serial-port – https://www.npmjs.com/package/bluetooth-serial-port
Express routing – https://expressjs.com/en/guide/routing.html
body-parser-json – https://www.npmjs.com/package/body-parser-json
Node.js path – https://nodejs.org/api/path.html
MongoClient (Node.js MongoDB driver) – https://mongodb.github.io/node-mongodb-native/api-generated/mongoclient.html

Netscapes: Making & MLabs

Today we worked further on bringing the project together, drawing together all our current work and making improvements where necessary.

MLabs/Visualization connection

I worked on building a connection to the mLab database, pulling data and using it to set the parameters of a circle. The code checks the database for a new entry every 15 seconds.

readdb

Reading values from Database

For example, I set up a mapping from sliders to RGB: each slider gives a value from 0 to 8, which is mapped to a number between 0 and 255 for three of the colour values (in this case the variables kind, trust and help). I applied the same approach to the circle’s radius and speed of movement.
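
As a rough illustration, the polling and mapping logic boils down to something like this (a simplified sketch: mapRange and updateCircle are stand-ins for the real drawing code, and the collection handle is assumed to exist already):

// Sketch: poll every 15 seconds and map slider values (0–8) to circle parameters
function mapRange(v, inMin, inMax, outMin, outMax) {
  return (v - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

setInterval(function () {
  // Fetch the most recent entry from the database
  collection.find().sort({ _id: -1 }).limit(1).toArray(function (err, docs) {
    if (err || docs.length === 0) return;
    var entry = docs[0];
    updateCircle({
      r: mapRange(entry.kind, 0, 8, 0, 255),  // slider 0–8 becomes colour 0–255
      g: mapRange(entry.trust, 0, 8, 0, 255),
      b: mapRange(entry.help, 0, 8, 0, 255)
      // radius and movement speed are mapped the same way
    });
  });
}, 15000);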

Next, Gintaré and Chris will build this into their visualisation in its current state.

User Interface Modifications

We then looked at Gintaré’s slider inputs and how they would look in the physical build.

IMG_20180116_114315

First slider test in plinth (without the glass head or diffuser)

After reviewing both its looks and ease of interaction, we decided to make a few changes, such as making the text/scrollbar larger and removing the numbers from the sliders (as they do not display properly on the Raspberry Pi).

Gintaré made modifications based on these observations and we quickly reviewed them. We also decided to colour-code each section of sliders to the corresponding section of the CANOE model. This not only breaks the form up, but makes it more visually appealing in a way that makes sense.

IMG_20180116_120335

Touchscreen with enlarged scroll bar for ease of use.

We decided it would still be best to provide a stylus with the touchscreen for ease of use, as the sliders can still be difficult to use at this size.

IMG_20180116_123645

Touchscreen with colour-coded sections (per the CANOE model)

Since the touchscreen has no right-click function enabled, the app is very difficult to exit once it is full-screen – meaning viewers won’t be able to (intentionally or accidentally!) close it.

We decided to bevel the edges surrounding the screen, as they make it difficult for users to reach the screen easily. This will also make the screen look more inviting by bringing it into the user’s view.

Connecting MongoDB/mLab to front-end

I started working on code to input values into the database using Gintaré’s previously made slider interface. This was built using Express, npm and Node.js. On Chris B’s recommendation, Express was used in place of PHP.

When run, the code hosts the necessary files (such as Gintaré’s sliders) on a local server, which sends the data to the remote server when “Generate” is pressed.
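
In outline, the server looks something like this (a trimmed-down sketch assuming the 2.x Node MongoDB driver; the route, folder and collection names are placeholders, and the standard body-parser call stands in for the JSON body parser we used):

var express = require('express');
var bodyParser = require('body-parser');
var path = require('path');
var MongoClient = require('mongodb').MongoClient;
var creds = require('./credentials'); // login details kept in a separate file (see below)

var app = express();
app.use(bodyParser.json());
app.use(express.static(path.join(__dirname, 'public'))); // serves the sliders page locally

// Fired when "Generate" is pressed on the sliders form
app.post('/generate', function (req, res) {
  MongoClient.connect(creds.uri, function (err, db) {
    if (err) return res.status(500).send(err.message);
    db.collection('entries').insertOne(req.body, function (insertErr) {
      db.close();
      if (insertErr) return res.status(500).send(insertErr.message);
      res.sendStatus(200);
    });
  });
});

app.listen(3000);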

Since Node.js code is modular, we decided to put the login details in a separate .js file that is kept off GitHub (rather than censoring the MongoDB login details in the committed code).
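
For example, something along these lines (the file and field names are illustrative):

// credentials.js – listed in .gitignore so it never reaches GitHub
module.exports = {
  uri: 'mongodb://USERNAME:PASSWORD@ds000000.mlab.com:00000/netscapes' // placeholder mLab URI
};

// elsewhere in the code:
var creds = require('./credentials');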

IMG_20180116_171936

Installing Node.js & npm to Raspberry Pi

Once this was up and running (and confirmed to work on mLab), I moved the files across and installed the necessary npm packages on my Raspberry Pi. I then tested the connection to mLab to ensure the data was being received correctly.

IMG_20180116_185022

Running the local server (Hosting the sliders form) on Raspberry Pi

We then put this server connection together with Gintaré’s updated user interface.

data canoe test

Data inserted into mLab via Raspberry Pi

mlabs multi canoe

Multiple documents in MongoDB database.

Now that we have data both coming into and out of the database, we are ready to move onto the next steps!

Next Steps

  • Finish the visualization
  • Put together the final physical prototype (seat the Raspberry Pi, sort out power supplies, etc.)
  • Preview in the IVT – test the visualisations before the presentation
  • (If time allows) Build a system to set the head colour based on the last data entry.

Netscapes: Development Process

Creating a project is an organic process that contains many twists and turns. Below I will outline some of the changes we had to make during the development of our project.

Before the Christmas break, Chris built a wooden plinth to mount the glass head on and house all the electronics. He also designed and 3D-printed an inner diffuser for our lighting. The piece will be displayed in the IVT as the interactive front end of our project.

IMG_20180109_164348.jpg

Glass head mounted on plinth (without diffuser). The gap at the front will house the Raspberry Pi & GPIO Touchscreen.

Modified Slider/LED control for Arduino

To further improve the LED lighting part of our piece, we decided to replace the serial connection with a Bluetooth connection. Chris purchased a Bluetooth module and began programming it to take inputs from a mobile device.

Chris and I worked together to program the RGB LED code with Bluetooth. We tested the connection using a Bluetooth terminal app on our Android devices, sending simple “a” and “b” messages to turn an LED on and off remotely. We discovered that this only works with one device at a time, so we will need to account for this when the model is on display.

We decided on making a mobile app to control the colour of the LEDs, which I will build in Processing over the next few days.

Resolving LED brightness issues 

We found that, with the 3D-printed inner diffuser in place, the RGB LEDs were not bright enough to light up the glass head.

IMG_20180109_151940

Original setup, with multiple RGB LEDs, Arduino & Bluetooth module.

img_20180121_200116.jpg

Neopixel 24 LED Ring

We tried an LED ring (which I had been using in another project), since it is considerably brighter than the individual LEDs. This worked much better: the colour was visible even in the brightly lit room, and the ring diameter was a perfect fit for the base of the diffuser!

IMG_20180112_132148

Glass head with diffuser and LED ring.

We purchased another LED ring and cut new holes in the mount to accommodate the wiring.

Switching and setting up databases

Due to issues connecting to our MongoDB database, we decided to switch from MongoDB to MySQL.

I set up a new database on my server with access to the necessary tables. I sent Gintaré a link to instructions on how to set it up, along with the necessary details, so she can get to work building the data visualization.

Next Steps
Our next steps are to:

  • Wire up the LED ring and program it to respond to Bluetooth messages (modifying our earlier code)
  • Develop an Android app
  • Connect the visualization and the slider inputs to my server/database.

Netscapes: Building – MongoDB & Python

This week I have focused on the building stage. After helping my team members get started with p5.js, I got to work building my parts of the project: the back-end and LED control.


Emotion/colour Sliders – Python & Arduino LED control

Part of our project includes the representation of emotions in a visual sense. We decided on creating this using a basic slider input, so I got to work developing it.

I built this using:

  • Raspberry Pi 3
  • Arduino
  • 5″ GPIO touchscreen
  • Python

I created my app using libraries including PySerial (for serial connections in Python) and tkinter (for rapid production of basic user interfaces). I decided to use Python as I have previous experience creating connections to Arduino with PySerial.

Building circuits

Firstly, I set up the Raspberry Pi 3 with the necessary drivers and fitted the touchscreen. I created a script on the desktop to open an on-screen keyboard (so I wouldn’t need a physical keyboard for later setup). I then built a basic circuit with an RGB LED and hooked it up to the Raspberry Pi.

IMG_20171208_003440 (1)

My Raspberry Pi with GPIO touchscreen.

Programming

I started off by building a basic slider interface using tkinter and Python. I made sure it was appropriately scaled to the screen and then worked out how to read the slider’s output in real time. In this case, I used a range of 1 to 8 to match our data input app.

IMG_20171210_195700

Basic RGB LED wiring

Once the slider was working correctly, I set up a basic serial connection to the Arduino using PySerial. Since PySerial needs data to be sent as bytes, I had to make sure the characters were encoded before sending. I then built a basic script on the Arduino side to receive the characters and change the colour of the RGB LED based on how far the slider was moved (in this case, blue to yellow for sad to happy).

Link to my code on GitHub: https://github.com/Mustang601/Arduino-Serial-LED-slider 

Sequence 01_1

My completed LED slider

My next steps are to further develop the user interface, and to figure out how to use this in conjunction with the other user inputs (for database connection).


MongoDB

I created a database in MongoDB and hosted it on mLab (due to a software conflict I couldn’t easily host it on my own server, so this was the next best thing!).

The database will hold all the input data from our app and will be used in the creation of our visualization.
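
Each document is a simple set of key/value pairs; a hypothetical entry might look like this (the field names are illustrative, based on our slider inputs):

{
  "name": "Luke",
  "kind": 6,   // one value per personality trait, taken from the sliders
  "trust": 4,
  "help": 7
}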

mongo input document

Document within MongoDB database

The next step is to connect this database up to the input app and visualization once they are both completed.


Related Links

tkinter: https://docs.python.org/3.0/library/tkinter.html 

PySerial: https://pythonhosted.org/pyserial/

mLab: https://mlab.com/

MongoDB: https://www.mongodb.com/

Netscapes: Art from code – Inspirations

Since we are looking into creating web-based visualizations and animations from code, I decided to research some forms of web-based animations.

Modular SVG Animation Experiment – Mandala
By Dylan Cutler

Modular SVG animation pen (Cutler, 2017)

This example is made using HTML, CSS and JavaScript. (Cutler, 2017)

I find this a particularly interesting example because of all the moving parts, how they all move in relation to each other and how they layer up.

Animated Background
By Marco Guglielmelli

Animated background in action (Guglielmelli, 2017)

This piece is created with CSS/JavaScript/HTML. It is interactive: moving the mouse across the screen reveals new areas of animated lines and points, which grow brighter or darker as you move towards or away from them. (Guglielmelli, 2017)

Hexagon Fade
By Tina Anastopoulos

Hexagon fade example on codepen (Anastopoulos, 2017)

Created using HTML/CSS/JavaScript and p5.js, Hexagon Fade is an example of how p5.js can be used to create simple yet effective scaling visualizations. (Anastopoulos, 2017)

Rainbow Pinwheel – p5.js
By Tina Anastopoulos

Rainbow pinwheel interactive example on codepen (Anastopoulos, 2017)

Rainbow Pinwheel is a striking example of how interactive visualizations can be created using HTML/CSS/JS – in this case with p5.js. In this example, you click and drag to create the effect of motion. (Anastopoulos, 2017)


Sources:

Cutler, D. (2017). Modular SVG Animation Experiment. [online] CodePen. Available at: https://codepen.io/DCtheTall/pen/KWpdRV [Accessed 26 Nov. 2017].

Guglielmelli, M. (2017). Animated Background. [online] CodePen. Available at: https://codepen.io/MarcoGuglielmelli/pen/lLCxy?limit=all&page=2&q=animation [Accessed 26 Nov. 2017].

Anastopoulos, T. (2017). Hexagon Fade. [online] CodePen. Available at: https://codepen.io/TWAIN/pen/RVjGYN?depth=everything&order=popularity&page=2&q=p5.js&show_forks=false [Accessed 26 Nov. 2017].

Anastopoulos, T. (2017). Rainbow Pinwheel – p5.js. [online] CodePen. Available at: https://codepen.io/TWAIN/pen/OpGayd?depth=everything&order=popularity&page=6&q=p5.js&show_forks=false [Accessed 26 Nov. 2017].

Netscapes – Planning & Production

Responsibilities breakdown

Here is the breakdown for our group responsibilities. This is subject to change as the project moves along.

My responsibilities are focused on programming and building the back end.

Chris
  • Project Management
  • Tutor Mediations
  • UX Development
  • Research: Personalities
  • Research: Animations
  • 3D model for RasPi case
  • Immersive Dome Projections

Steph
  • Databases
  • Server
  • JavaScript
  • Sockets
  • Animations Code
  • P5.JS
  • Node.JS
  • Immersive Dome Projection

Gintare
  • PhoneGap front end
  • HTML/CSS for website
  • Visual Research
  • JavaScript
  • P5.JS
  • Animations
  • Research: Animations
  • Immersive Dome Projection

Equipment orders and collation, software/hardware (Chris, Steph)
29th Nov 17

  • Source all production materials
  • Raspberry Pi 3s
  • Touchscreens
  • Finalise the form of the interface container
  • Operating systems
  • Server hosting

Back End (Chris, Stephanie)
4th Dec 17

  • Find a server to host MongoDB
  • Develop a MongoDB database to store the parameters from user interface inputs
  • Connect simple HTML/JS to the database to check CRUD works

Client Side and Interfaces (Gintare, Stephanie, Chris)
4th Dec 17

  • Simple PHP developed to ensure the client is posting data externally/correctly
  • Prototype PhoneGap front end for users to input personality data
  • Simple website to host visualisations
  • Start to look at a button interface to select backgrounds for the animations

Animations (Gintare, Chris)
11th December

  • P5.JS animations as prototype forms, with some or all parameters alterable through user interfacing.

Modelling for Interface controller (Chris)
18th December

  • A prototype of the interface container to be roughed out for both the primary and secondary input (buttons for scenery changes)

To be completed as a working structure by 9th Jan 2018


Time planning (group / individual)

Group Time Planning

Here is our group timetable, created by Chris.

netscapes group timeline

Group time schedule by Chris (enoodl.com)

Individual Time planning:

Below is my individual timeline for this project. Most parts of it are subject to change as the project goes along, for example if we run into difficulties, or depending on others’ completion of tasks.

Blog updates marked with * may not be required.

Week-by-week tasks:
(27/11/17)
  • Blog Update
  • Decide on visualizations
  • Set up webpage for visualizations.
  • Set up server for database (Installation)
  • Begin to source equipment
  • Begin visualization development.
(04/12/17)
  • Blog Update
  • Further visualization development
  • Create database
  • Create website inputs
  • Begin building the connection between visualization, app and database
(11/12/17)
  • Blog Update
  • Finish linking Database to app & visualization
  • Finish visualization
(18/12/17 to 01/01/18) – Christmas Break
  • Blog Update*
  • Finalization of project
(08/01/18)
  • Blog Update
  • Presentation of project (IVT)
(15/01/18) – Final 2 days
  • Blog Update*
  • Finalization of documentation
  • Refinement & finalization of submission
DEADLINE: 16/01/18
  • Submit by 23:59

Resources list & Budget

Physical Resources:

  • Raspberry Pi (Supplied by me)
  • Touchscreen for RasPi (Supplied by me)
  • 3D printer & related equipment (Supplied by university)

Budget

Our budget will have to cover building supplies as well as Arduino components. Our maximum budget is around £100, but we would like to stay below this if at all possible.

Netscapes: Art from code – Creation

After deciding on creating visualizations of butterflies as a reflection of personality, I experimented with code to create a simple web-based animated butterfly. The animation has the potential to change colours in response to data from a server (but for now, I’m inputting the colours myself).

code butterfly

Basic butterfly I created in CSS

This animation was originally intended to work in PhoneGap, so it naturally had to be limited to HTML/CSS/JavaScript. The wings and body are made from simple CSS shapes and complete a “flap” cycle every 1.5 seconds.

butterflycode.PNG

Early Butterfly Code – CSS animation

GitHub Link: https://github.com/Mustang601/CSS-butterfly-animation 

butterflygif

My completed CSS animation

Netscapes: Further Idea Development

After our session today on MongoDB databases and Node.JS, we took some time to further develop our idea and plan our next steps.

How the Visualisation will work:

  • Butterflies instead of jellyfish – more original, not making a “copy”
  • Wing & body colour determined by 5 different personality traits
  • A maximum of 10 butterflies on screen at a time
  • Possibility of animation in the background – such as clouds passing by
  • Visualisation accessible on a website and mobile app
  • Real-time projection into the Immersive Vision Theatre

How the Data will be stored and Processed:

  • Names can be stored with butterflies and displayed in the visualisation
  • Data will be stored with a primary key – easier to handle calls
  • 10 butterflies max – data “loops”, replacing the oldest entry as new data is added – keeps the database small and lightweight (see the sketch below)
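
A quick sketch of how this “looping” overwrite could work (the collection name and counter handling are assumptions at this planning stage):

// Sketch: keep at most 10 butterflies by overwriting slot (counter mod 10)
var counter = 0;

function saveButterfly(db, data) {
  var slot = counter % 10; // the oldest slot is replaced first
  counter += 1;
  db.collection('butterflies').updateOne(
    { slot: slot },
    { $set: Object.assign({ slot: slot }, data) },
    { upsert: true } // insert until all 10 slots exist, then overwrite
  );
}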

How we are going to build the app:

  • Create a basic interface in PhoneGap – needs to be short to keep interest (i.e. 3 to 5 questions)
  • Questions will be personality-based and inspired by the CANOE model.
  • The interface will contain sliders instead of radio buttons etc. – this allows for more variety of input (such as the body of the butterfly being more yellow for extrovert and more purple for introvert)
  • PhoneGap app on GitHub – easy collaboration & version control

How we are going to distribute the app:

  • Cards, flyers or posters with a QR code – these should also include details about the display and follow the design/identity of the rest of the project.
  • The QR code links to a webpage with download links for each mobile OS – must look professional and legitimate.