Inspirations: The Art of Randomness

Conversations on Chaos
By fito_segrera

Markov Chain poetry from Randomness (Segrera, 2016)

Conversations on Chaos is an artwork based on the representation of randomness. It consists of two main parts: a pendulum suspended over multiple electromagnetic oscillators, and software that uses Markov chains to give the system a human-like ‘voice’, bringing meaning back into chaos. (Segrera, 2015) Together, these form ‘two machines that hold a dynamic conversation about chaos’. (Visnjic, 2018)

Codex Seraphinianus
By Luigi Serafini, 1981

Excerpt from Codex Seraphinianus (Serafini and Notini, 1981)

Codex Seraphinianus is a book written in an invented language with no translation, accompanied by a collection of visuals, some familiar, some not. The format of the book is reminiscent of a guidebook or scientific text. (Jones, 2018) The book could be interpreted as an introduction to an alien or alternate reality with influences from our own.

Neural Network Critters
By Eddie Lee


Video: Neural Network Critters! by Eddie Lee (Lee, 2017)

Neural Network Critters is a visual example of how neural networks can be used to make art. In this free program, a series of ‘critters’ is created. (Visnjic, 2018) The fittest critters (i.e. those that make it furthest through the maze) are asexually reproduced, generation after generation, until one makes it to the end of the maze. (Lee, 2018)
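
A rough sketch of that loop in JavaScript (my own illustration of fitness-based asexual reproduction, not Lee’s actual code): each critter is reduced to a genome of numbers, such as neural network weights, scored by how far it travels through the maze, and the fittest half is copied with small mutations each generation.

```javascript
// Illustration only: fitness-based selection with asexual reproduction.
// A real fitness function would simulate the critter moving through the maze.

function randomGenome(size) {
  return Array.from({ length: size }, () => Math.random() * 2 - 1);
}

function mutate(genome, rate = 0.1) {
  // Asexual reproduction: copy the parent and nudge some of its weights.
  return genome.map((w) => (Math.random() < rate ? w + (Math.random() - 0.5) : w));
}

function evolve(fitness, popSize = 50, genomeSize = 16, generations = 100) {
  let population = Array.from({ length: popSize }, () => randomGenome(genomeSize));
  for (let g = 0; g < generations; g++) {
    // Rank critters by how far they got through the maze.
    population.sort((a, b) => fitness(b) - fitness(a));
    const parents = population.slice(0, popSize / 2);
    // The fittest half refills the population with mutated copies of itself.
    population = parents.concat(parents.map((p) => mutate(p)));
  }
  return population[0];
}

// Stand-in fitness function for demonstration purposes.
const best = evolve((genome) => genome.reduce((sum, w) => sum + w, 0));
console.log(best);
```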

School for Poetic Computation (SFPC)

The School for Poetic Computation is a small school based in New York that aims to bring together art and computing. (Sfpc.io, 2018)


References:

Jones, J. (2018). An Introduction to the Codex Seraphinianus, the Strangest Book Ever Published. [online] Open Culture. Available at: http://www.openculture.com/2017/09/an-introduction-to-the-codex-seraphinianus-the-strangest-book-ever-published.html [Accessed 11 Feb. 2018].

Lee, E. (2018). Neural Network Critters by Eddie Lee. [online] itch.io. Available at: https://eddietree.itch.io/neural-critters [Accessed 11 Feb. 2018].

Lee, E. (2017). Neural Network Critters – Vimeo. Available at: https://vimeo.com/225961685 [Accessed 11 Feb. 2018].

Serafini, L. and Notini, S. (1981). Codex seraphinianus. New York: Abbeville Press, p.98.

Segrera, F. (2015). Conversations on Chaos. [online] Fii.to. Available at: http://fii.to/pages/conversations-on-chaos.html [Accessed 10 Feb. 2018].

Segrera, F. (2016). Conversations on Chaos. [image] Available at: http://www.creativeapplications.net/linux/conversations-on-chaos-by-fito-segrera/ [Accessed 11 Feb. 2018].

Sfpc.io. (2018). SFPC | School for Poetic Computation. [online] Available at: http://sfpc.io [Accessed 11 Feb. 2018].

Visnjic, F. (2018). Neural Network Critters by Eddie Lee. [online] CreativeApplications.Net. Available at: http://www.creativeapplications.net/news/neural-network-critters-by-eddie-lee/ [Accessed 11 Feb. 2018].

Visnjic, F. (2018). Conversations On Chaos by Fito Segrera. [online] CreativeApplications.Net. Available at: http://www.creativeapplications.net/linux/conversations-on-chaos-by-fito-segrera/ [Accessed 11 Feb. 2018].


Interactive Artworks

Pool of Fingerprints
By Euclid Masahiko Sato and Takashi Kiriyama, 2010


Fingerprint Scanner (Clauss, 2010)

In Pool of Fingerprints, visitors are invited to scan their own fingerprint into the piece. The scanned print mixes with the fingerprints of all the other visitors until it eventually returns to its owner. The piece is a reflection on individuality and a person’s sense of presence. (Google Cultural Institute, 2010)

Transmart Miniascape
By Yasuaki Kakehi, 2012


Video: Transmart Miniascape by Yasuaki Kakehi (Kakehi, 2015)

Transmart Miniascape is an interactive, reactive artwork consisting of multiple glass panels containing pixels. These pixels represent the four seasons, and their appearance changes in response to the surrounding environment. (NTT InterCommunication Center [ICC], 2014)

Through the Looking Glass
By Yasuaki Kakehi, 2004


Video: Through the Looking Glass by Yasuaki Kakehi (Kakehi, 2015)

Through the Looking Glass invites visitors to play a game of tabletop hockey against their own reflection. The piece defies the logic of mirrors: the screens on either side of the mirror display different images. (NTT InterCommunication Center [ICC], 2004)

Tablescape Plus
By Yasuaki Kakehi, Takeshi Naemura and Mitsunori Matsushita, 2006


Video: Tablescape Plus, 2006 (Kakehi, 2016)

Tablescape Plus is a playful interface that lets visitors create their own stories with characters on a tabletop screen. It blends physical objects with digital images: visitors can manipulate the physical objects, moving characters together to form interactions or trigger movements. (Kakehi, 2016)


References:

Clauss, N. (2010). Pool of Fingerprints – Fingerprint Scanner. [image] Available at: https://www.google.com/culturalinstitute/beta/asset/pool-of-fingerprints-details/KwFE71waZ4_t1g [Accessed 7 Feb. 2018].

Google Cultural Institute. (2010). Pool of Fingerprints/details – Euclid Masahiko Sato (b.1954, Japan) Takashi Kiriyama (b.1964, Japan) (Photo : Nils Clauss) – Google Arts & Culture. [online] Available at: https://www.google.com/culturalinstitute/beta/asset/pool-of-fingerprints-details/KwFE71waZ4_t1g [Accessed 7 Feb. 2018].

Kakehi, Y. (2016). Tablescape Plus. Available at: https://vimeo.com/124536961 [Accessed 8 Feb. 2018].

Kakehi, Y. (2015). Transmart Miniascape. Available at: https://vimeo.com/124540477 [Accessed 7 Feb. 2018].

Kakehi, Y. (2015). Through the Looking Glass. Available at: https://vimeo.com/81712999 [Accessed 7 Feb. 2018].

NTT InterCommunication Center [ICC]. (2014). ICC | “Transmart miniascape” – KAKEHI Yasuaki (2014). [online] Available at: http://www.ntticc.or.jp/en/archive/works/transmart-miniascape/ [Accessed 7 Feb. 2018].

NTT InterCommunication Center [ICC]. (2004). ICC | “through the looking glass” (2004). [online] Available at: http://www.ntticc.or.jp/en/archive/works/through-the-looking-glass/ [Accessed 7 Feb. 2018].

Artworks from Code

Moon
By Olafur Eliasson and Ai Weiwei, 2013


Moon up close (Designboom, 2014)

Moon is an interactive installation piece created by Olafur Eliasson and Ai Weiwei. It invites viewers from around the globe to draw and explore a digital “Moonscape”. (Feinstein, 2014)

Eliasson and Ai Weiwei’s work focuses on community and the link between the online and offline worlds. (Austen, 2013)

Over the course of its four years of existence, Moon grew from simple doodles and drawings to collaborations and clusters of work, such as the “Moon Elisa”, where multiple users came together to recreate the classic Mona Lisa painting. (Cembalest, 2013)

“The moon is interesting because it’s a not yet habitable space so it’s a fantastic place to put your dreams.” – Olafur Eliasson, on Moon (Feinstein, 2014)

Library of Babel
By Jonathan Basile

The Library of Babel is a website based on Borges’ short story “The Library of Babel” (Borges, 2018), a theoretical piece about a library containing every possible string of letters. In theory, the books contain every word that has ever been or will ever be said, translations of every book ever written, and the true story of everyone’s death. (Basile, 2018)


A section of the Library of Babel (Basile, 2018)

The actual workings of the Library of Babel are quite complex. Rather than storing text, the site uses a pseudo-random algorithm that generates the same block of text in the same place in the library every time it is viewed. When a search is made for a specific string, the program works backwards, calculating the position from the random seed that would produce that output. (Basile, 2018)
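
The core idea is an invertible mapping between a location in the library and the text stored there. A toy illustration of that invertibility in JavaScript (my own sketch, not Basile’s actual algorithm): treat each page as a number written in the library’s 29-character alphabet, so that ‘searching’ for a string is just running the encoding in reverse.

```javascript
// Toy location <-> text mapping: a page of text is read as a base-29 number.
// Because the mapping is a bijection, every page exists at exactly one
// location, and finding a string means computing the location that produces it.

const ALPHABET = "abcdefghijklmnopqrstuvwxyz ,.";
const BASE = BigInt(ALPHABET.length); // 29

// location (BigInt) -> the fixed-length page of text stored there
function textAt(location, pageLength = 32) {
  let n = location;
  let page = "";
  for (let i = 0; i < pageLength; i++) {
    page = ALPHABET[Number(n % BASE)] + page;
    n /= BASE;
  }
  return page;
}

// page of text -> the location that generates it (the "search")
function locate(text) {
  let n = 0n;
  for (const ch of text) {
    n = n * BASE + BigInt(ALPHABET.indexOf(ch));
  }
  return n;
}

const loc = locate("hello world".padStart(32, " "));
console.log(textAt(loc).trim()); // "hello world"
```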

Code Poetry
By Daniel Holden & Chris Kerr

Code Poetry is a collection of code-based pieces, each written in a different programming language with a different concept behind it. The collection was published as a book in 2016. (Holden and Kerr, 2018)

Some examples of the content of this book are as follows:

IRC (Markov Chain Poetry)
Markov chains generate sequences based on probability: each element is chosen according to how often it follows the previous one in some source text. In this piece, poetry is generated from strings drawn from IRC chat logs. (Theorangeduck.com, 2018)
Similar: creating lyrics using Markov chains
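
The underlying technique is simple enough to sketch in a few lines of JavaScript (a minimal word-level version of my own, not the code behind the piece): build a table of which words follow which, then walk it, picking successors at random.

```javascript
// Word-level Markov chain: map each word to the words that follow it,
// then generate text by repeatedly sampling a random successor.

function buildChain(text) {
  const chain = {};
  const words = text.split(/\s+/);
  for (let i = 0; i < words.length - 1; i++) {
    (chain[words[i]] = chain[words[i]] || []).push(words[i + 1]);
  }
  return chain;
}

function generate(chain, start, length = 20) {
  const out = [start];
  let current = start;
  for (let i = 0; i < length; i++) {
    const next = chain[current];
    if (!next || next.length === 0) break;
    current = next[Math.floor(Math.random() * next.length)];
    out.push(current);
  }
  return out.join(" ");
}

// Stand-in corpus; the piece uses IRC chat logs instead.
const corpus = "the rain falls and the rain sings and the night listens";
console.log(generate(buildChain(corpus), "the"));
```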

Water
Water is a piece written in C++, styled so that the source code itself resembles rain clouds. When run, the code generates raindrops. (Holden and Kerr, 2018) Water is an interesting piece as it challenges the way we traditionally view and approach code.

The code behind Water (Holden and Kerr, 2018)

Machine Learning Art
By William Anderson

Using Markov Chains and a collection of training images from the Bauhaus art movement, an artist was able to create new artworks in this iconic style. (Anderson, 2017)


Bauhaus art generated by AI (Anderson, 2017)


References:

Anderson, W. (2017). Using Machine Learning to Make Art – Magenta. [online] Magenta. Available at: https://magenta.as/using-machine-learning-to-make-art-84df7d3bb911 [Accessed 7 Feb. 2018].

Anderson, W. (2017). Bauhaus Markov chain art. [image] Available at: https://magenta.as/using-machine-learning-to-make-art-84df7d3bb911 [Accessed 7 Feb. 2018].

Austen, K. (2013). Drawing on a moon brings out people’s best and worst. [online] New Scientist. Available at: https://www.newscientist.com/article/dn24702-drawing-on-a-moon-brings-out-peoples-best-and-worst/ [Accessed 30 Oct. 2017].

Basile, J. (2018). About. [online] Library of Babel. Available at: https://libraryofbabel.info/About.html [Accessed 6 Feb. 2018].

Basile, J. (2018). Library of Babel. [image] Available at: http://libraryofbabel.info/browse.cgi [Accessed 6 Feb. 2018].

Basile, J. (2018). Theory – Grains of Sand. [online] Library of Babel. Available at: https://libraryofbabel.info/theory4.html [Accessed 6 Feb. 2018].

Borges, J. (2018). The Library of Babel. [ebook] Available at: https://libraryofbabel.info/libraryofbabel.html [Accessed 6 Feb. 2018].

Designboom (2014). Moon close up. [image] Available at: https://www.designboom.com/art/ai-weiwei-olafur-eliasson-give-rise-to-moon-interactive-artwork-11-26-2013/ [Accessed 30 Oct. 2017].

Cembalest, R. (2013). How Ai Weiwei and Olafur Eliasson Got 35,000 People to Draw on the Moon | ARTnews. [online] ARTnews. Available at: http://www.artnews.com/2013/12/19/how-ai-weiwei-and-olafur-eliasson-got-35000-people-to-draw-on-the-moon/ [Accessed 30 Oct. 2017].

Feinstein, L. (2014). Make Your Mark On The Moon With Olafur Eliasson and Ai Weiwei. [online] Creators. Available at: https://creators.vice.com/en_uk/article/yp5zkj/make-your-mark-on-the-moon-with-olafur-eliasson-and-ai-weiwei [Accessed 30 Oct. 2017].

 

Holden, D. and Kerr, C. (2018). ./code –poetry. [online] Code-poetry.com. Available at: http://code-poetry.com/home [Accessed 6 Feb. 2018].

Holden, D. and Kerr, C. (2018). water.c. [online] Code-poetry.com. Available at: http://code-poetry.com/water [Accessed 6 Feb. 2018].

Holden, D. and Kerr, C. (2018). The code behind Water. [image] Available at: http://code-poetry.com/water [Accessed 6 Feb. 2018].

Theorangeduck.com. (2018). 17 Line Markov Chain. [online] Available at: http://theorangeduck.com/page/17-line-markov-chain [Accessed 6 Feb. 2018].

Inspirational Art 2 – Projection Mapping

Projection Mapping – Catan/D&D
By Silverlight/Roll20


Projection mapping – D&D (Projection Mapping Central, 2018)

This piece brings projection mapping to tabletop gaming. It not only creates a more immersive environment for players, it also provides tools for gamers, such as real-time tracking to calculate a character’s line of sight. (Sodhi, 2018)

Crystalline Chlorophyll
By Joseph Gray, 2009

 

Video: Crystalline Chlorophyll (Gray, 2009)

Crystalline Chlorophyll is an interactive sculpture that reacts to people in the space around it. During the course of an exhibition, the sculpture tracks motion in the room and transforms from an icy blue to a natural green.

The sculpture is built from card stock but was originally designed in Blender. The colour-changing effects are achieved with two ceiling-mounted video projectors. (Gray, 2014)


 

Sources:

Gray, J. (2009). Crystalline Chlorophyll. Available at: https://vimeo.com/6886025 [Accessed 31 Jan. 2018].

Gray, J. (2014). Crystalline Chlorophyll. [online] Grauwald Creative. Available at: http://grauwald.com/art/crystallinechlorophyll/ [Accessed 31 Jan. 2018].

Projection Mapping Central (2018). D&D Projection mapping. [image] Available at: http://projection-mapping.org/dungeons-dragons-projection-mapping/ [Accessed 31 Jan. 2018].

Sodhi, R. (2018). Dungeons & Dragons and Settlers of Catan with Projection Mapping -…. [online] Projection Mapping Central. Available at: http://projection-mapping.org/dungeons-dragons-projection-mapping/ [Accessed 31 Jan. 2018].

Inspirational Artworks

 

EELS 3D Projection-Mapping game
By Leo Seeley, 2011

Video: EELS projection mapping multiplayer game (Seeley, 2011)

EELS is an interactive multiplayer game bringing together three-dimensional projection mapping and mobile application design. Users control the movement of an eel as it travels across 3D space. (Casperson, 2018)

Ohne Titel (Hello World.) / Untitled (Hello World.)
By Valentin Ruhry, 2011 

Ohne Titel (Hello World) – Installation (Ruhry, 2018)

Reciprocal Space
By Ruari Glynn, 2005

Reciprocal Space challenges the perception of buildings as solid and unchanging spaces. (We Make Money Not Art, 2005)


Video: Reciprocal Space in action. (Glynn, 2011)

The Agency at the End of Civilization.
By Stanza, 2014


Video: Agency at the End of Civilization (Stanza, 2014)

This installation uses real-time data from UK car number plate recognition systems across the South of England.

The piece includes 24 screens, multiple speakers and CCTV cameras, drawing the audience into the role of the observer. (Stanza.co.uk, 2014)


Sources

Seeley, L. (2011). EELS projection mapping multiplayer game. [Video] Available at: https://vimeo.com/32161590 [Accessed 31 Jan. 2018].

Casperson, M. (2018). Projection Mapping Multiplayer Game – Projection Mapping Central. [online] Projection Mapping Central. Available at: http://projection-mapping.org/projection-mapping-multiplayer-game/ [Accessed 31 Jan. 2018].

Ruhry, V. (2018). Ohne Titel (Hello world) – Installation. [image] Available at: http://ruhry.at/en/work/items/untitled-hello-world.html [Accessed 31 Jan. 2018].

We Make Money Not Art. (2005). Reciprocal Space. [online] Available at: http://we-make-money-not-art.com/reciprocal_spac/ [Accessed 31 Jan. 2018].

Glynn, R. (2011). Reciprocal Space. Available at: https://vimeo.com/27775272 [Accessed 31 Jan. 2018].

Stanza (2014). The Agency at the End of Civilization. Available at: https://vimeo.com/97613466 [Accessed 31 Jan. 2018].

Stanza.co.uk. (2014). The Agency At The End Of Civilisation. By Stanza. [online] Available at: http://www.stanza.co.uk/agency/index.html [Accessed 31 Jan. 2018].

 

Netscapes: Insight – Final Presentation

Insight: The Big Five is an interactive artwork inspired by psychology and personality. Users are invited to take a short personality test on the built-in touchscreen and see their results displayed in real time within the Immersive Vision Theatre (IVT). The glass head also reacts to the inputted data with a variety of lighting effects.

Here are some photos of our final presentation within the IVT.


User inputs


User input with changing visuals


User inputs

 

 

Netscapes: Insight – Critical Analysis & Reflection

Whilst some aspects could have gone better, overall I consider the project to be a success.

We had trouble settling on an idea to begin with; although we knew roughly which technologies we wanted to work with, it took a few weeks of discussion and planning to commit to a final concept. We went from building small robots with limited user interaction to a fully fledged user-interaction-based installation piece, and from small-scale, organically inspired projection mapping to abstract visualisations within the IVT.

The project was subject to many changes as time went on. This is a natural part of the process, although it does mean the finished piece is quite different from the initial idea.

Below are some of the choices we made and why I feel they were effective:

  • Heavy concept/research basis: our project had a strong background of research, so every choice has reasoning behind it.
  • Immersive Vision Theatre (IVT): we chose the IVT because it surrounds the viewer with visuals and soundscapes, much as your personality shapes the way you view the world; it reflects the feeling of being “inside” the head of the user. We also made use of the surround sound system, adding a further dimension to the experience.
  • All-in-one interface: instead of using two interfaces (the Pi for inputting user data, and a mobile app to change the head colour), we bundled everything into one input (the Pi). This works much better as it merges both sides of the project, helping to maintain the user’s immersion.
  • Multiple wireless connections: we used both WiFi and Bluetooth to create one seamless connection, which helps keep the piece all-in-one. Whilst this could have been done over a serial connection (see previous post), we already had the Bluetooth framework in place, so we decided to make use of it rather than change the code again.

What could have gone better:

  • ‘Plan B’ for internet connection: internet access in the dome is unreliable and setting up Eduroam can be difficult on certain platforms. The only difficulty with this is finding a workaround that still satisfies the requirements of the brief.
  • More user inputs: make the visualization take in data from more users and display them all at once. This means changing both the way the visualization works and how the database is read, but it would have been implemented if the project had carried on longer.
  • Stronger visuals: much more organic and interesting visuals to watch, incorporating more of the inputs.

Although we had some issues with group dynamics and the overall flow of the process, we were able to work around this and effectively work together to create something we are all proud of!

Netscapes: Insight – IVT Testing

Today we did our final build and first live test in the Immersive Vision Theatre (IVT). We started by fitting the Raspberry Pi and touchscreen inside the plinth, then transporting the equipment to the dome ready for our presentation.


Fitting Pi3 + Touchscreen

Chris added wooden beams to support the weight of the Pi, as it will be under a lot of pressure when the touchscreen is in use. This should prevent the touchscreen from moving away from the plinth.


Setting up in the IVT – Modifying Code

Whilst in the IVT, Gintare updated her code to work better within the shape of the screen. She moved some of the key elements of the visuals so they were more centered within the dome, bringing them to the viewer’s attention.

 


Setting up the visualization

We transported the physical part of our project to the IVT and decided where to set it up. We then tested the project within the space to understand how it would look and feel to viewers and how the colours would display in the dome.


Glass head with touchscreen interface

We took this as an opportunity to double-check that our database connections were working. During this time we ran into issues with page refreshing (which I quickly resolved) and with the internet connection, which we worked around by using a mobile access point.


Glass head interface in front of the projection.

We even invited Luke to test out our user interface, and have a go at inputting his own data into the visualization!


Luke testing out the user interface!


Head test with visualization within the dome.

Netscapes: Building Bluetooth Connections – Part 2

Today we had access to the physical side of the project, so I tested my Bluetooth code (see my previous post) with the Arduino side. Luckily, after pairing with the HC-05 Bluetooth component, the code worked first time without any need for debugging!


The Arduino side, with HC-05 Bluetooth component & Neopixel ring

Chris and I modified the Arduino code to output different lighting effects based on the character sent across Bluetooth. We settled on a default of red with a breathing effect (which I created for a previous project), plus a rainbow spin effect.


Bluetooth message sent on tapping “Generate”

How it works

  • When the local server is started, it searches through paired devices to find the HC-05 module.
  • When it is found, it opens a connection and sends it the instruction to turn on.
  • When the generate button is pressed, a new message is sent across the connection instructing it to run the rainbow effect.

Critical analysis/Reflection

To begin with, we were going to use a separate mobile app to send data across Bluetooth to the Arduino. Switching instead to the same interface used for entering the user data adds a level of interactivity that a separate phone app would not have given us: users can instantly see the effect their inputs have, even before the visualization updates.

This also ties the piece together better, making it an all-in-one system rather than being split up.

Future Improvements

If we had more time, I would modify the code to react differently depending on some of the user-inputted data, for example changing colours or effects based on the values entered.

 

 

Netscapes: Building Bluetooth connections

To bring together the visualisation and physical prototype, I started working on a Bluetooth connection to the MongoDB connection code I previously built.


Physical prototype with HC-05 Bluetooth module

Since we already have the HC-05 Bluetooth module in place and working with the Bluetooth terminal input on mobile, I simply had to look up how to create an output system in our .js code to match the inputs we previously designed for the Arduino.


Initial flow diagram of program

I looked into how this could be done and began researching the Bluetooth-Serial-Port (BSP) module for Node.js.

After getting to grips with how the library works, I experimented with creating a basic framework for opening a Bluetooth connection and sending a basic input. This code checks for a device with the correct name, finds the matching address, opens a connection and, if successful, sends the character ‘a’. When hooked up to the glass head model, this should activate the LED ring, making it light up.


My experimentation with BSP within the previously made MongoDB connection code
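
A stripped-down sketch of that framework, using the bluetooth-serial-port module (an illustration of the approach rather than my exact code; the device name and the ‘a’ trigger follow the description above):

```javascript
// Find the HC-05 by name, open a serial connection over Bluetooth,
// and send the character 'a' to switch the LED ring on.
const BTSP = require("bluetooth-serial-port");
const btSerial = new BTSP.BluetoothSerialPort();

const TARGET_NAME = "HC-05"; // name of the Arduino's Bluetooth module

// Linux workaround: rather than listing paired devices, inquire for nearby
// devices and match the one with the expected name.
btSerial.on("found", (address, name) => {
  if (name !== TARGET_NAME) return;

  btSerial.findSerialPortChannel(
    address,
    (channel) => {
      btSerial.connect(
        address,
        channel,
        () => {
          console.log("Connected to " + name);
          // Tell the Arduino to light the LED ring.
          btSerial.write(Buffer.from("a", "utf-8"), (err) => {
            if (err) console.error("Write failed:", err);
          });
        },
        () => console.error("Could not connect to " + name)
      );
    },
    () => console.error("No serial channel found on " + name)
  );
});

btSerial.inquire(); // start searching for devices
```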

 


Issues

  • Certain information is missing from the Bluetooth-Serial-Port NPM documentation; I had to work around this by searching for other uses of BSP to fill in the gaps.
  • The method for listing previously paired Bluetooth devices doesn’t work on Linux systems, so a workaround had to be made (looping through available connections and matching by name).

Next Steps

  • Update Arduino-side code: Modify existing code to include more interesting light effects, such as those I previously created for my ‘Everyware’ project. These would not be direct copies, but modifications of this pre-existing code, for a unique lighting effect.
  • Thoroughly test this code to ensure a secure connection is made and maintained for the duration of the installation.

Code Referencing/Libraries Used

Below is a list of the documentation I used as reference when building my code. Whilst code was not copied directly, it draws heavily on these sources:

bluetooth-serial-port: https://www.npmjs.com/package/bluetooth-serial-port
JS Express routing: https://expressjs.com/en/guide/routing.html
JS JSON body parser: https://www.npmjs.com/package/body-parser-json
JS path: https://nodejs.org/api/path.html
JS Mongo Client: https://mongodb.github.io/node-mongodb-native/api-generated/mongoclient.html