Netscapes: Art from code – Creation

After deciding to create visualizations of butterflies as a reflection of personality, I experimented with code to create a simple web-based animated butterfly. The animation has the potential to change colour in response to data sent from a server (but for now, I’m setting the colours myself).


Basic butterfly I created in CSS

This animation was originally intended to work in Phonegap, so naturally it had to be limited to HTML/CSS/Javascript. The wings and body are made with simple CSS shapes, and complete a “flap” cycle every 1.5 seconds.
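As a rough idea of how the flap works, here is a minimal CSS sketch of a single wing. The class name, colours, sizes and keyframe values are illustrative rather than taken from my code (the real thing is in the GitHub repo below):

```css
/* Minimal sketch: one wing rotating about its inner edge to "flap" */
.wing {
  width: 60px;
  height: 80px;
  background: orchid;
  border-radius: 80% 20% 80% 20%;   /* rough wing shape from a rounded box */
  transform-origin: right center;   /* hinge the wing where it meets the body */
  animation: flap 1.5s ease-in-out infinite;
}

@keyframes flap {
  0%, 100% { transform: rotateY(0deg); }
  50%      { transform: rotateY(60deg); }
}
```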

GitHub Link: https://github.com/Mustang601/CSS-butterfly-animation 


My completed CSS animation


Everyware: Programming AI & IoT

  1.  NodeMCU and MQTT
    First, I wired up a NodeMCU board to a rain sensor and an LED. I connected the board to an MQTT broker, so whenever a certain amount of rain was detected, a message would be published.


    NodeMCU board, with Sensors & LED connected

    Next, I programmed the board to receive messages back from the broker, so that the LED would blink when a message with a specific payload was received.

  2.  Image Recognition AI
    I used IBM Watson to create an image recognition system capable of categorizing different dog breeds. Images of dogs (or non-dogs!) can be fed into the system, and the AI reports back the most likely breeds, each with a confidence percentage.


    IBM Watson recognition

    Next, I connected this up to the MQTT broker, so when a dog breed is detected, it sends a message to the board detailing which breed of dog was seen.

  3.  Chatbots
    I made a basic chatbot capable of taking commands such as “switch on the lights” or “open the window”, and connected it to a personal Slack channel for testing. I then hooked it up to the MQTT broker, and later to the NodeMCU board.


    Asking the bot to turn on the light using Slack.

    When you ask the chatbot to “turn on the lights”, the message is passed through the MQTT broker to the NodeMCU board, which then turns on an LED. This also works for other commands, such as “open the window”, which spins a small servo (a rough sketch of the board-side code is shown after this list).


    LED comes on
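To give a rough idea of the board-side code for steps 1 and 3, below is a hedged sketch using the PubSubClient Arduino library. The broker address, Wi-Fi credentials, topics, payload strings, pins and the rain threshold are all placeholders rather than the exact values from my build.

```cpp
// Sketch only: publish when rain is detected, react to chatbot commands.
#include <ESP8266WiFi.h>
#include <PubSubClient.h>
#include <Servo.h>

const int RAIN_PIN  = A0;   // analogue rain sensor
const int LED_PIN   = D1;   // LED for the "lights" command
const int SERVO_PIN = D2;   // servo for the "window" command

WiFiClient wifi;
PubSubClient mqtt(wifi);
Servo windowServo;

// Called whenever the broker delivers a message to a subscribed topic
void onMessage(char* topic, byte* payload, unsigned int length) {
  String msg;
  for (unsigned int i = 0; i < length; i++) msg += (char)payload[i];

  if (msg == "lights on") {            // payload text is an assumption
    digitalWrite(LED_PIN, HIGH);
  } else if (msg == "open window") {
    windowServo.write(90);             // spin the servo to "open"
  }
}

void setup() {
  pinMode(LED_PIN, OUTPUT);
  windowServo.attach(SERVO_PIN);

  WiFi.begin("my-ssid", "my-password");        // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(500);

  mqtt.setServer("broker.example.com", 1883);  // placeholder broker
  mqtt.setCallback(onMessage);
}

void loop() {
  if (!mqtt.connected() && mqtt.connect("nodemcu-demo")) {
    mqtt.subscribe("home/commands");           // placeholder topic
  }
  mqtt.loop();

  // Publish when the rain reading crosses a threshold
  // (threshold and comparison direction depend on the sensor module)
  if (analogRead(RAIN_PIN) > 500) {
    mqtt.publish("home/rain", "rain detected");
    delay(5000);                               // crude rate limit
  }
}
```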

Web Development: Building a Job website

I was tasked with creating the back end for a typical job recruitment website catering to the needs of both recruiters & job seekers. Since I didn’t have to create the front-end designs, I kept the UI work minimal, only building simple frameworks for tables.

Planning

I started by planning out how my website would work, creating design documents such as flowcharts of system operations and use-case & entity-relationship diagrams. I looked carefully at how different types of users would expect the platform to behave and how best to cater to that, whilst maximising efficiency and security. I also considered the operation of the system, such as not allowing users who are not logged in to post applications.

Building & Operation (in brief) 

I created my job recruitment website using a mix of PHP, MySQL, Javascript, HTML & CSS. Below is a brief look at some of the functionalities of the website:

Firstly, users can create a login with a name, email and password. These details are checked against the database, and if an account doesn’t already exist, a new one is stored.

Users can then log in, opening a session that lasts until either the browser is closed or the logout button is pressed. Once a session is started, a user can upload a job listing or an application (provided there are jobs already posted), and it will be linked to their account using the current session data. Sessions also come into play when applying for a job: once a user has selected a job from the job search, the primary key of that job is temporarily stored, so the application is automatically linked to that listing for easy viewing by a recruiter.
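As a rough sketch of how the session data ties an application to the selected listing (table names, column names and variables here are illustrative, and $pdo stands for a PDO connection like the one shown in the Security section below):

```php
<?php
session_start();

// After a successful login, remember who the user is for the rest of the session
$_SESSION['user_id'] = $user['id'];

// When a job is chosen from the search results, keep its primary key handy
$_SESSION['selected_job_id'] = (int) $_GET['job_id'];

// When the application form is submitted, link it to that listing and user
$stmt = $pdo->prepare(
    'INSERT INTO applications (job_id, user_id, cv_path) VALUES (?, ?, ?)'
);
$stmt->execute([$_SESSION['selected_job_id'], $_SESSION['user_id'], $cvPath]);
```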

 

Files such as CV uploads are typically too large to be stored in a database – instead, I programmed it so that the location of the file is saved in the SQL database, allowing a recruiter to simply hit a download button to view it later.

Some information, such as which users posted which jobs or applications, is cross-referenced between tables. In these cases, primary keys are also used (for example to link applications to the appropriate job listings).

Security

When it comes to website security, there is a lot more to consider than just passwords. When building a website of this nature, there are often extra risks, such as stored user information & file uploads.

I used PDO (PHP Data Objects) instead of MySQLi, using prepared statements to guard against SQL injection attacks. Unlike MySQLi, which only works with MySQL, PDO supports many different databases, meaning that switching to another database later on requires minimal effort.
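A minimal sketch of what that looks like in practice; the credentials, table and column names below are placeholders:

```php
<?php
// Connect with a limited-privilege account rather than root
$pdo = new PDO(
    'mysql:host=localhost;dbname=jobsite;charset=utf8mb4',
    'jobsite_user',
    'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

// User input is bound as a parameter, never concatenated into the query
$stmt = $pdo->prepare('SELECT id, password_hash FROM users WHERE email = ?');
$stmt->execute([$_POST['email']]);
$user = $stmt->fetch(PDO::FETCH_ASSOC);
```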

File uploads, in this case CV uploads, create potential for attacks. This can be combated by running multiple checks before a file is allowed onto the server, including file size limits & file type checks. Since this is still not perfectly secure, extra steps can be taken, such as renaming the file and storing it under a safe extension, ensuring that uploaded files cannot be executed.
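A hedged sketch of the kind of checks involved; the size limit, accepted type, upload folder and naming scheme are assumptions rather than my exact settings (it also assumes the $pdo connection from above and the id of the application being updated):

```php
<?php
$maxBytes = 2 * 1024 * 1024;                  // 2 MB size limit
$allowed  = ['application/pdf'];              // accepted file types

$file = $_FILES['cv'];
$mime = mime_content_type($file['tmp_name']); // check the actual content type

if ($file['error'] === UPLOAD_ERR_OK
        && $file['size'] <= $maxBytes
        && in_array($mime, $allowed, true)) {

    // Rename the file so nothing user-supplied ends up in the filename
    $newName = bin2hex(random_bytes(16)) . '.pdf';
    $path    = __DIR__ . '/uploads/' . $newName;
    move_uploaded_file($file['tmp_name'], $path);

    // Only the file's location is stored in the database, not the file itself
    $stmt = $pdo->prepare('UPDATE applications SET cv_path = ? WHERE id = ?');
    $stmt->execute([$path, $applicationId]);
}
```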

Sensitive user data, such as passwords, is stored hashed, so in the event of a data breach, user information is kept safe. A separate username & password with limited privileges (as opposed to full root access) is used for database operations only, and is hidden from viewing by site visitors.
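A minimal sketch of that hashing flow using PHP’s built-in functions (table and column names are again illustrative):

```php
<?php
// At registration, only the hash is stored, never the plain-text password
$hash = password_hash($_POST['password'], PASSWORD_DEFAULT);
$stmt = $pdo->prepare(
    'INSERT INTO users (name, email, password_hash) VALUES (?, ?, ?)'
);
$stmt->execute([$_POST['name'], $_POST['email'], $hash]);

// At login, the submitted password is checked against the stored hash
if ($user && password_verify($_POST['password'], $user['password_hash'])) {
    $_SESSION['user_id'] = $user['id'];
}
```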

 

Developing for Android: Sol.AR

The solar wind is a stream of charged particles, ejected by our Sun, that travels across the solar system and bombards the Earth. The energy carried by this wind can cause significant interference to electronic communication systems, and is also one of the causes of the Northern Lights.

I created a simple app for Android devices to view this solar wind data in real time. The app features a clean, minimal design with basic menu animations, and scales according to screen size.


I used regex to split up data from the input (JSON) file and sort it into multidimensional arrays to be translated into graph plots. All of the data is checked as it is sorted, so that ‘bad data’ is left out without leaving gaps in the graph.
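The snippet below sketches the idea in Java; the field order, the regex itself and the ‘bad data’ test are assumptions about the feed format rather than my exact parsing code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SolarWindParser {

    // One reading: a time tag followed by density, speed and temperature values
    private static final Pattern READING = Pattern.compile(
            "\\[\"([^\"]+)\",\"([-0-9.]+)\",\"([-0-9.]+)\",\"([-0-9.]+)\\]\"?");

    public static List<float[]> parse(String json) {
        List<float[]> points = new ArrayList<>();
        Matcher m = READING.matcher(json);
        while (m.find()) {
            try {
                float density = Float.parseFloat(m.group(2));
                float speed   = Float.parseFloat(m.group(3));
                float temp    = Float.parseFloat(m.group(4));
                // Skip obviously bad readings instead of plotting them
                if (density < 0 || speed < 0 || temp < 0) continue;
                points.add(new float[] { density, speed, temp });
            } catch (NumberFormatException ignored) {
                // A malformed value is left out rather than breaking the graph
            }
        }
        return points;
    }
}
```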


Using regex to sort data

There are individual graphs for Proton Density, Bulk Speed & Ion Temperature data. The different types of data can be viewed by clicking the buttons below the graph.

Building for VR: Developing for Google Cardboard

For our final Creative Coding project we had to create a program using Processing, targeting any platform we liked. With such an open brief, I decided to take this as an opportunity to try something new. Having recently gotten a Google Cardboard of my own, I was eager to get to grips with how it worked and have a go at building for it myself.

 

I used the Ketai library, as it provides a range of tools for mobile devices – accelerometer, gyroscope, face detection and a bunch of other cool features.

First things first, I experimented with viewports. To do this, I built a basic video viewer. It uses two viewports, one for the left eye and one for the right, to simulate how my app will work. I used it to play the same video in both – one at a slight delay – to simulate a (kind of) 3D effect and to test how the device would handle it!


Double Video Test, with a video of my ducks!

Next I made a basic app to read in data from the phone’s sensors. Using the Ketai library, I wrote a short sketch to read in and display the data on the device’s screen.
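Something along these lines; the layout and what is displayed are simplified compared to my actual test sketch:

```java
// Processing (Android mode) sketch using Ketai to show live accelerometer data
import ketai.sensors.*;

KetaiSensor sensor;
float ax, ay, az;

void setup() {
  fullScreen();
  textSize(48);
  sensor = new KetaiSensor(this);
  sensor.start();
}

void draw() {
  background(0);
  text("accel x: " + nf(ax, 1, 2), 50, 100);
  text("accel y: " + nf(ay, 1, 2), 50, 160);
  text("accel z: " + nf(az, 1, 2), 50, 220);
}

// Ketai calls this whenever a new accelerometer reading arrives
void onAccelerometerEvent(float x, float y, float z) {
  ax = x;
  ay = y;
  az = z;
}
```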

Next I made a very basic shape: a coloured cube created completely in Processing. Nothing too exciting, just a starting point.


Cube test with viewports.

Then I brought it all together. Taking the cube and the Ketai input code I made earlier, I made a tester app.

The next step was to bring in a secondary viewport and set the two cameras apart to give the 3D illusion – not an easy task! Getting the distances and slight differences in angle between the two right was quite difficult, and required a lot of trial and error!
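A rough Processing sketch of the approach: render the same scene into two P3D buffers with the cameras offset horizontally, then draw them side by side. The eye separation and camera distances here are guesses, not the values I eventually settled on:

```java
PGraphics leftEye, rightEye;
float eyeSep = 30;    // horizontal offset between the two "eyes"
float angle  = 0;

void setup() {
  size(800, 400, P3D);
  leftEye  = createGraphics(width / 2, height, P3D);
  rightEye = createGraphics(width / 2, height, P3D);
}

// Draw the cube into one buffer, with the camera shifted left or right
void drawScene(PGraphics pg, float eyeX) {
  pg.beginDraw();
  pg.background(0);
  pg.camera(eyeX, 0, 300,   // eye position
            0, 0, 0,        // looking at the cube
            0, 1, 0);       // up vector
  pg.rotateY(angle);
  pg.fill(0, 150, 255);
  pg.box(100);
  pg.endDraw();
}

void draw() {
  angle += 0.01;
  drawScene(leftEye,  -eyeSep / 2);
  drawScene(rightEye,  eyeSep / 2);
  image(leftEye, 0, 0);
  image(rightEye, width / 2, 0);
}
```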

After that, I replaced the cube with something a little more interesting: a quick low-poly model made in Blender.

Although only a basic app, it does have a lot of potential for expansion, such as going on to create a much more advanced model viewer app or even a full VR game!