Inspirations: AI and Machine Creativity


AARON is a painting robot made by Harold Cohen, capable of using and mixing real paints to create works on canvas. AARON displays a level of unpredictability, with even its creator not knowing what it will make. AARON is, however, not technically artificial intelligence, lying closer to a form of autonomous code. (Cohen, 2018)

Microsoft’s Drawing AI

Microsoft has designed a creative machine capable of making images of whatever it is told. The machine takes text as input, which it uses to determine what to create. The result is pixel-by-pixel generated images, sitting somewhere between photograph and painting. (Roach, 2018)


Cohen, H. (2018). Harold Cohen Online Publication. [online] Available at: [Accessed 2 Feb. 2018].

Roach, J. (2018). Microsoft researchers build a bot that draws what you tell it to – The AI Blog. [online] The AI Blog. Available at: [Accessed 2 Feb. 2018].


Inspirations: The Art of Randomness

Conversations on Chaos
By fito_segrera

Markov Chain poetry from Randomness (Segrera, 2016)

Conversations on Chaos is an artwork based on the representation of randomness. It consists of two main parts: a pendulum suspended over multiple electromagnetic oscillators. The software also implements Markov chains, enabling the system to create a human-like ‘voice’ and bring meaning back into chaos. (Segrera, 2015) Together, this creates a system of ‘two machines that hold a dynamic conversation about chaos’. (Visnjic, 2018)
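The Markov chain idea is simple enough to sketch. Below is a minimal, hypothetical example in plain JavaScript (not Segrera's actual code): each word in a small training text maps to the words observed to follow it, and a random number, standing in here for the pendulum's chaotic input, picks the next word at every step.

```javascript
// A toy corpus; the words that follow each word define the chain.
const corpus = "chaos is the order we have not yet understood and order is the chaos we have named";

// Build a map from each word to the list of words that follow it.
const words = corpus.split(" ");
const chain = {};
for (let i = 0; i < words.length - 1; i++) {
  (chain[words[i]] = chain[words[i]] || []).push(words[i + 1]);
}

// Walk the chain: a random number (Math.random here, standing in for
// the pendulum's sensor readings) picks the next word at each step.
function generate(start, length) {
  let word = start;
  const out = [word];
  for (let i = 0; i < length; i++) {
    const next = chain[word];
    if (!next) break; // dead end: no observed successor
    word = next[Math.floor(Math.random() * next.length)];
    out.push(word);
  }
  return out.join(" ");
}

console.log(generate("chaos", 8));
```

Run repeatedly, the same chain yields different 'utterances', which is how a human-like structure emerges from a random source.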

Codex Seraphinianus
By Luigi Serafini, 1981

Excerpt from Codex Seraphinianus (Serafini and Notini, 1981)

Codex Seraphinianus is a book written in an invented language with no translation. It also contains a collection of visuals, some familiar, some not. The format of the book is reminiscent of a guidebook or scientific text. (Jones, 2018) The book could be interpreted as an introduction to an alien or alternate reality with influences from our own.

Neural Network Critters
By Eddie Lee

Video: Neural Network Critters! by Eddie Lee (Lee, 2017)

Neural Network Critters is a visual example of how neural networks can be used to make art. In this free program, a series of ‘critters’ is created. (Visnjic, 2018) The fittest ones (i.e. those that make it furthest through the maze) are asexually reproduced, generation after generation, until one makes it to the end of the maze. (Lee, 2018)
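The selection loop described above can be sketched in a few lines. This is a generic illustration of fitness-based asexual reproduction, not Lee's actual implementation: the maze is reduced to a numeric fitness score and each critter's 'genome' to an array of numbers.

```javascript
// Distance to the end of the (abstracted) maze.
const GOAL = 100;

function makeCritter(genome) {
  return { genome, fitness: 0 };
}

// Stand-in fitness: distance travelled is just the sum of gene values,
// capped at the maze's end.
function evaluate(critter) {
  critter.fitness = Math.min(GOAL, critter.genome.reduce((a, b) => a + b, 0));
}

// Asexual reproduction: copy the parent's genome and nudge each gene.
function mutate(genome) {
  return genome.map(g => g + (Math.random() - 0.5));
}

function evolve(popSize, genomeLength, maxGenerations) {
  let pop = Array.from({ length: popSize }, () =>
    makeCritter(Array.from({ length: genomeLength }, () => Math.random())));
  for (let gen = 0; gen < maxGenerations; gen++) {
    pop.forEach(evaluate);
    pop.sort((a, b) => b.fitness - a.fitness); // fittest first
    if (pop[0].fitness >= GOAL) return gen;    // a critter reached the end
    // Keep the top half; refill the rest with mutated clones of them.
    const parents = pop.slice(0, popSize / 2);
    pop = parents.concat(parents.map(p => makeCritter(mutate(p.genome))));
  }
  return maxGenerations;
}

console.log("generations needed:", evolve(20, 10, 500));
```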

School for Poetic Computation (SFPC)

The School for Poetic Computation is a small school based in New York that aims to bring together art and computing. (SFPC, 2018)


Jones, J. (2018). An Introduction to the Codex Seraphinianus, the Strangest Book Ever Published. [online] Open Culture. Available at: [Accessed 11 Feb. 2018].

Lee, E. (2018). Neural Network Critters by Eddie Lee. [online] Available at: [Accessed 11 Feb. 2018].

Lee, E. (2017). Neural Network Critters – Vimeo. Available at: [Accessed 11 Feb. 2018].

Serafini, L. and Notini, S. (1981). Codex seraphinianus. New York: Abbeville Press, p.98.

Segrera, F. (2015). Conversations on Chaos. [online] Available at: [Accessed 10 Feb. 2018].

Segrera, F. (2016). Conversations on Chaos. [image] Available at: [Accessed 11 Feb. 2018].

SFPC (2018). SFPC | School for Poetic Computation. [online] Available at: [Accessed 11 Feb. 2018].

Visnjic, F. (2018). Neural Network Critters by Eddie Lee. [online] CreativeApplications.Net. Available at: [Accessed 11 Feb. 2018].

Visnjic, F. (2018). Conversations On Chaos by Fito Segrera. [online] CreativeApplications.Net. Available at: [Accessed 11 Feb. 2018].

Inspirational Art 2 – Projection Mapping

Projection Mapping – Catan/D&D
By Silverlight/Roll20


Projection mapping – D&D (Projection Mapping Central, 2018)

This projection mapping piece brings together tabletop gaming and projection mapping. This not only creates a more immersive environment for players, it also provides tools for gamers, such as using real-time tracking to calculate a character’s line of sight. (Sodhi, 2018)

Crystalline Chlorophyll
By Joseph Gray, 2009


Video: Crystalline Chlorophyll (Gray, 2009)

Crystalline Chlorophyll is an interactive sculpture that reacts to people in the space around it. During the course of an exhibition, the sculpture tracks motion in the room and transforms from an icy blue to a natural green.

The sculpture is built from card stock, but was originally designed in Blender. The colour-changing effects are achieved by two ceiling-mounted video projectors. (Gray, 2014)



Gray, J. (2009). Crystalline Chlorophyll. Available at: [Accessed 31 Jan. 2018].

Gray, J. (2014). Crystalline Chlorophyll. [online] Grauwald Creative. Available at: [Accessed 31 Jan. 2018].

Projection Mapping Central (2018). D&D Projection mapping. [image] Available at: [Accessed 31 Jan. 2018].

Sodhi, R. (2018). Dungeons & Dragons and Settlers of Catan with Projection Mapping -…. [online] Projection Mapping Central. Available at: [Accessed 31 Jan. 2018].

Netscapes: Code Inspirations

Open Processing

Open Processing is a library of p5.js examples created by users. (OpenProcessing, 2018) I looked here for inspiration for our final visualization, as well as for insight into how such pieces are created.

I mainly focused on code inspired by organic forms and movement, as we are aiming to create an abstract visualization.

Snakes by skizzm

Snakes is an example of animated pixel art created purely with code. It consists of a grid of squares, each with its own animation. (skizzm, 2018)

A similar effect could be applied to our visualisation: the individual squares, with differing colours and motions, could be linked to different sections of the CANOE personality model.
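As a sketch of that idea, the hypothetical mapping below (plain JavaScript, our own assumption rather than skizzm's code) turns a set of CANOE trait scores into per-square drawing parameters that a p5.js grid could animate.

```javascript
// Hypothetical CANOE scores (0-100), e.g. from our input sliders.
const canoe = {
  conscientiousness: 70,
  agreeableness: 55,
  neuroticism: 30,
  openness: 85,
  extraversion: 40,
};

// Map each trait to drawing parameters for its own square in the grid.
function gridParams(scores) {
  return Object.entries(scores).map(([trait, score], i) => ({
    trait,
    column: i,                            // position in the grid
    hue: Math.round((score / 100) * 360), // colour driven by the score
    speed: 0.5 + score / 100,             // animation speed driven by the score
  }));
}

console.log(gridParams(canoe));
```

A draw loop would then use each square's `hue` and `speed` to render its individual animation.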


TREE by Ryan Chao

Tree is a simplistic animation of a tree swaying gently in the breeze. The shape changes on click, randomly generating a new tree. The ‘blossoms’ on the branches are generated circles. (Chao, 2018)

This is similar to our original idea of generating organic shapes or creatures from user-inputted data.


Wobbly Swarm by Konstantin Makhmutov

Wobbly Swarm is an interactive animation piece. Clicking and dragging generates new circles, which swarm together and interact with each other, slowly grouping to form a ball. (Makhmutov, 2018)


Easing Test by aadebdeb

Easing Test is a circle-based animation that changes every time you click. This is similar to our idea of creating circles based on user inputs. The placement, colour and size could relate to the inputs from our CANOE model sliders. (aadebdeb, 2018)
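Easing of this kind usually comes down to one line: move a fraction of the remaining distance each frame. The sketch below is a generic illustration (not aadebdeb's code) of how a circle property could glide toward a hypothetical CANOE slider value.

```javascript
// One easing step: move a fraction of the remaining distance toward
// the target. factor is in (0, 1]; smaller means slower, smoother motion.
function easeStep(current, target, factor) {
  return current + (target - current) * factor;
}

// Simulate a circle radius easing toward a hypothetical slider value
// over a number of animation frames.
function easeOver(start, target, factor, frames) {
  let value = start;
  for (let i = 0; i < frames; i++) {
    value = easeStep(value, target, factor);
  }
  return value;
}

console.log(easeOver(0, 80, 0.1, 60).toFixed(2)); // close to 80 after 60 frames
```

In a p5.js draw loop, calling `easeStep` once per frame on position, size or colour gives exactly this smooth approach.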



Chao, R. (2018). tree150209 – OpenProcessing. [online] Available at: [Accessed 8 Jan. 2018].

Makhmutov, K. (2018). Wobbly Swarm – OpenProcessing. [online] Available at: [Accessed 10 Jan. 2018].

aadebdeb (2018). easing test – OpenProcessing. [online] Available at: [Accessed 10 Jan. 2018].

OpenProcessing (2018). OpenProcessing – Algorithmic Designs Created with Processing. [online] Available at: [Accessed 15 Jan. 2018].

skizzm (2018). snakes – OpenProcessing. [online] Available at: [Accessed 10 Jan. 2018].


Netscapes: Art from code – Inspirations

Since we are looking into creating web-based visualizations and animations from code, I decided to research some forms of web-based animations.

Modular SVG Animation Experiment – Mandala
By Dylan Cutler



Modular SVG animation pen (Cutler, 2017)

This example is made using HTML, CSS and JavaScript. (Cutler, 2017)

I find this a particularly interesting example because of all the moving parts: how they move in relation to each other and how they layer up.

Animated Background
By Marco Guglielmelli





Animated background in action (Guglielmelli, 2017)

This piece is created with CSS/JavaScript/HTML. It is interactive: moving the mouse across the screen reveals new areas of animated lines and points, which grow brighter or darker as you move towards or away from them. (Guglielmelli, 2017)

Hexagon Fade
By Tina Anastopoulos


Hexagon fade example on codepen (Anastopoulos, 2017)

Created using HTML/CSS/JavaScript and p5.js, Hexagon Fade is an example of how p5.js can be used to create simple yet effective scaling visualizations. (Anastopoulos, 2017)

Rainbow Pinwheel – p5.js
By Tina Anastopoulos




Rainbow pinwheel interactive example on codepen (Anastopoulos, 2017)

Rainbow Pinwheel is a striking example of how interactive visualizations can be created using HTML/CSS/JavaScript, in this case with p5.js. Clicking and dragging creates the effect of motion. (Anastopoulos, 2017)


Cutler, D. (2017). Modular SVG Animation Experiment. [online] CodePen. Available at: [Accessed 26 Nov. 2017].

Guglielmelli, M. (2017). Animated Background. [online] CodePen. Available at: [Accessed 26 Nov. 2017].

Anastopoulos, T. (2017). Hexagon Fade. [online] CodePen. Available at: [Accessed 26 Nov. 2017].

Anastopoulos, T. (2017). Rainbow Pinwheel – p5.js. [online] CodePen. Available at: [Accessed 26 Nov. 2017].

Everyware: The Matter of the Immaterial

The brief for “Everyware” is entitled “The Matter of the Immaterial”, and focuses on ubiquitous computing and making the intangible tangible. I took this idea as a starting point for some research into what is already available.




Ultrahaptics development kit (Ultrahaptics, 2015)

Ultrahaptics is a startup company focused on making the virtual world physical. Using an array of ultrasonic projectors and hand tracking, it lets users feel and interact with virtual environments, experiencing real tactile feedback without needing to wear or hold special equipment. (Ultrahaptics, 2017) Read more on my other blog post.


Ultrahaptics Diagram  (Ultrahaptics, 2015)

Ultrahaptics follows a similar concept to the Geomagic Touch X 3D pen (Previously known as Sensable Phantom Desktop), which I have used!



DaisyPi system (DaisyPi, 2017)

The Daisy Pi is a Raspberry Pi-powered home monitoring system. It is fitted with multiple sensors, measuring temperature, light intensity and humidity. It is also capable of capturing audio and video feeds, which can be accessed remotely from devices such as mobile phones or tablets. (Lopez, 2017)



Moon up close (Designboom, 2014)

Moon is an interactive installation piece created by Olafur Eliasson and Ai Weiwei. It invites viewers from around the globe to draw and explore a digital “Moonscape”. (Feinstein, 2014)

Eliasson and Weiwei’s work is focused around community and the link between the online and offline world. (Austen, 2013)

Over the course of its four years of existence, Moon grew from simple doodles and drawings to collaborations and clusters of work, such as the “Moon Elisa”, where multiple users came together to recreate the classic Mona Lisa painting. (Cembalest, 2013)

“The moon is interesting because it’s a not yet habitable space so it’s a fantastic place to put your dreams.” – Olafur Eliasson, on Moon (Feinstein, 2014)

Illuminating Clay

Illuminating Clay is a platform for exploring 3D spatial models. Users can manipulate the clay into different shapes (even adding other objects), and using a laser scanner and projector, a height map is projected back onto the surface. It can also be used to derive data such as travel times and land erosion. (Piper et al., 2002)

Physical Telepresence


Interaction through Physical Telepresence (Vice, 2015)

Physical Telepresence is a work created by students at MIT, based around shared workspaces and remote manipulation of physical objects. (Leithinger et al., 2014) The work consists of a pin-based surface that can be used to interact with physical objects. (Pick, 2015)

Near Field Creatures

Near Field Creatures is a game made by students as part of the Mubaloo annual appathon at Bristol University. Users scan NFC tags (such as those in certain student cards) to collect animals of differing values. These collected animals can then be used to compete with other users. (Mubaloo, 2015)

Pico

Pico is an interactive work that explores human-computer interaction, allowing people and computers to collaborate in physical space. Pico is interacted with through pucks, which can be used by both the computer and the user. (Patten, Alonso and Ishii, 2005)

PICO 2006 from Tangible Media Group on Vimeo. (Pico 2006, 2012)




Ultrahaptics (2015). Ultrahaptics Development Kit. [image] Available at: [Accessed 28 Oct. 2017].

Ultrahaptics. (2017). Ultrahaptics – A remarkable connection with technology. [online] Available at: [Accessed 28 Oct. 2017].

Ultrahaptics (2015). Ultrahaptics diagram. [image] Available at: [Accessed 28 Oct. 2017].


DaisyPi (2017). Daisy Pi Unit. [image] Available at: [Accessed 28 Oct. 2017].

Lopez, A. (2017). Daisy Pi | The home monitoring e-flower. [online] Available at: [Accessed 28 Oct. 2017].


Designboom (2014). Moon close up. [image] Available at: [Accessed 30 Oct. 2017].

Feinstein, L. (2014). Make Your Mark On The Moon With Olafur Eliasson and Ai Weiwei. [online] Creators. Available at: [Accessed 30 Oct. 2017].

Cembalest, R. (2013). How Ai Weiwei and Olafur Eliasson Got 35,000 People to Draw on the Moon | ARTnews. [online] ARTnews. Available at: [Accessed 30 Oct. 2017].

Austen, K. (2013). Drawing on a moon brings out people’s best and worst. [online] New Scientist. Available at: [Accessed 30 Oct. 2017].


Piper, B., Ratti, C., Wang, Y., Zhu, B., Getzoyan, S. and Ishii, H. (2002). Illuminating Clay. [online] Available at: [Accessed 30 Oct. 2017].


Vice (2015). Interaction with Physical Telepresence. [image] Available at: [Accessed 30 Oct. 2017].

Leithinger, D., Follmer, S., Olwal, A. and Ishii, H. (2014). Physical Telepresence. [online] Available at: [Accessed 30 Oct. 2017].

Pick, R. (2015). Watch a Robotic Floor Play with Blocks. [online] Motherboard. Available at: [Accessed 30 Oct. 2017].


Mubaloo. (2015). Mubaloo and Bristol University hold third annual Appathon. [online] Available at: [Accessed 28 Oct. 2017].


Patten, J., Alonso, J. and Ishii, H. (2005). PICO. [online] Available at: [Accessed 30 Oct. 2017].

Pico 2006. (2012). MIT: MIT Tangible Media Group. Available at:


Netscapes: Week 1 – Part 2: Inspirations

AI & Deep Learning



Virtualitics AR office space (Virtualitics, 2017)

Virtualitics is a cross-platform application that merges AI, Big Data & AR/VR. The program features deep learning to transform big data into easily understandable reports and data visualisations within a shared virtual office, helping companies to grow. (WIRE, 2017)

Typically, analysing big data is no easy task. When using large amounts of data, even with visualisation technology, it can be difficult to pick out useful information. The Virtualitics platform uses AI to manage this, by means of algorithms that determine which metrics matter depending on what you are most interested in learning from that data. (Siegel, 2017)

The Virtualitics platform acts as a base for presenting and analyzing big data, and allows up to 10 dimensions of data to be shared, giving companies a competitive edge. (Takahashi, 2017)

The platform could serve many different applications, ranging across industries such as universities and hospitals, and has already been applied successfully in finance and scientific research. (Team, 2017)


Pros:
  • Highly interactive environment
  • Can be used in multiple business applications and settings
  • Makes big data accessible to everyone – even those who are untrained can easily access data.
  • Simple and easy to use, automatically turns data into useful graphs based on what you want to learn from it.


Cons:
  • 3D VR office space may not be appropriate for all applications.
  • VR headsets can be expensive – If the platform requires multiple headsets (such as the shared office space) this could end up being quite costly for a company.

Augmented Reality



Ultrahaptics development kit (South West Business, 2016)

Ultrahaptics is a startup company built around letting users feel virtual objects physically. Using ultrasonic projections and hand tracking, users can feel and interact with virtual environments, as well as receive real tactile feedback without needing to wear or hold special equipment. (Ultrahaptics, 2017)

Ultrahaptics diagram (Ultrahaptics, 2015)

The system is built using an array of ultrasound emitters in conjunction with motion sensors. Haptic feedback is created by first defining a space in which to model the acoustic field. Within this field, focus points are created that have differing types and intensities of feedback. (Kevan, 2015) This allows users to use both hands simultaneously or to interact with multiple objects. (Kahn, 2016)


Pros:
  • Highly interactive – encourages user engagement
  • Can be used in multiple applications
  • Could make other AR and VR apps more immersive when used together
  • All in one development kit, tools and support.
  • Possibility to create multiple “objects” within 3D space.


Cons:
  • In certain applications, physical buttons could be more appropriate
  • Users can still “push through” objects – they can be felt, but are not solid.
  • The platform can (and does!) create noise and vibrations; whilst work is being done to minimize this, it will most likely always be present.

Whilst this sort of technology is still in its infancy, it offers a promising insight into the future of interactive technologies. In future, it could be applied to uses such as 3D sculpt modelling and similar applications, or making much more immersive VR and AR experiences.



Digilens in-car HUD (Digilens, 2017)

Digilens combines AR and holographic technologies. They build AR screens for use in multiple applications, including inside car windshields and in aeroplanes. These screens can display real-time data, enhancing driver awareness and safety. (DigiLens, Inc., 2017)


Pros:
  • Fully customisable displays.
  • Wide range of uses, both commercial and private.
  • Can enhance driver awareness & Road safety
  • Less bulky than traditional displays


Cons:
  • Could be distracting for drivers by taking their view away from the road
  • Cost of building and adding to cars

Interactive Art

After Dark
The Workers, 2014


Robot from After Dark, Tate Britain, 2014 (The Workers, 2014)

After Dark is an exhibition piece built using Raspberry Pi. It allows viewers to take control of and drive a robot, exploring the exhibitions of Tate Britain via live video feed after closing time. (After Dark, 2014)

It was created as a way to engage new audiences with art, allowing them to explore the exhibitions without setting foot inside the building. Whilst viewers were driving the robots, art experts provided live commentary, offering new insights into the pieces on display. (The Workers, 2014)

The robots were fitted with cameras and lighting, as well as sensors to ensure they could navigate the galleries without complication. (Tate, 2014)


Pros:
  • Highly interactive – encourages user engagement.
  • Acts as a platform for learning and exploration.
  • Live art expert commentary makes the experience more than just “driving a robot”.


Cons:
  • Could be costly to build & run
  • Battery-powered robots – battery life is always a concern, particularly when the robots are connected to the internet and streaming for multiple hours.
  • Special measures must be taken to ensure the robots don’t damage museum exhibits.

Whilst this is an interesting idea, it is important to note that virtual museum tours already exist (such as video tours or even VR tours, which also sometimes provide commentary), and the act of driving the robot could be considered nothing more than a gimmick.

Glaciers
By Zach Gage, 2016


Installation View (Gage, 2016)

Glaciers is an installation piece built using 40 Raspberry Pi systems, exploring the interactions between digital platforms (in this case, search engines) and humans. The systems are programmed to take the top three autocomplete suggestions that follow various phrases and display them on screens, creating odd poetry that reflects the nature of the modern age. (Bate, 2016)

Although the screens appear static, the phrases are updated once a day based on the most popular auto-completes. Due to the nature of this, the poems could change daily, but are unlikely to. (Gage, 2016)
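The mechanism is easy to illustrate. The sketch below is our own stubbed approximation, not Gage's code: the real installation queries a search engine each day, whereas here the ranked suggestions are hard-coded.

```javascript
// Stubbed autocomplete results, ranked by popularity (most popular first).
// In the installation these would come from a live search engine query.
const suggestions = {
  "he says": ["nothing at all", "goodbye", "he loves me", "too much"],
  "why is": ["the sky blue", "it so cold", "everything so expensive", "life hard"],
};

// Keep the top three suggestions and lay them out as a short poem.
function poemFor(phrase, ranked) {
  const topThree = ranked.slice(0, 3);
  return [phrase, ...topThree.map(s => "  " + s)].join("\n");
}

console.log(poemFor("he says", suggestions["he says"]));
```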


“He Says” by Zach Gage, part of “Glaciers” (Gage, 2017)


Pros:
  • Relatively cheap and simple to build – the technology behind it is easy to come by
  • Simplistic, minimal nature
  • Concept understandable to most viewers


Cons:
  • Not interactive
  • Due to the nature of Google autocomplete, poems do not change often (sometimes not at all)

Further images of Glaciers can be seen here.



Virtualitics (2017). Virtualitics office space. [image] Available at: [Accessed 28 Oct. 2017].

WIRE, B. (2017). Virtualitics Launches as First Platform to Merge Artificial Intelligence, Big Data and Virtual/Augmented Reality. [online] Available at: [Accessed 28 Oct. 2017].

Virtualitics (2017). Virtualitics [online] Available at: [Accessed 28 Oct. 2017].

Siegel, J. (2017). How this Pasadena startup is using VR and machine learning to help companies analyze data. [online] Built In Los Angeles. Available at: [Accessed 28 Oct. 2017].

Takahashi, D. (2017). VR analytics startup Virtualitics raises $4.4 million. [online] VentureBeat. Available at: [Accessed 28 Oct. 2017].

Team, E. (2017). Virtualitics: Caltech & NASA Scientists Build VR/AR Analytics Platform using AI & Machine Learning – insideBIGDATA. [online] insideBIGDATA. Available at: [Accessed 28 Oct. 2017].



South West Business (2016). Ultrahaptics development kit. [image] Available at: [Accessed 28 Oct. 2017].

Ultrahaptics. (2017). Ultrahaptics – A remarkable connection with technology. [online] Available at: [Accessed 28 Oct. 2017].

Ultrahaptics (2015). Ultrahaptics diagram. [image] Available at: [Accessed 28 Oct. 2017].

Kevan, T. (2015). Touch Control with Feeling | Electronics360. [online] Available at: [Accessed 28 Oct. 2017].

Kahn, J. (2016). Meet the Man Who Made Virtual Reality ‘Feel’ More Real. [online] Available at: [Accessed 28 Oct. 2017].


Digilens (2017). Digilens car HUD. [image] Available at: [Accessed 29 Oct. 2017].

DigiLens, Inc. (2017). Home – DigiLens, Inc.. [online] Available at: [Accessed 29 Oct. 2017].


The Workers (2014). After Dark Robot. [image] Available at: [Accessed 28 Oct. 2017].

After Dark (2014). After Dark. [online] Available at: [Accessed 28 Oct. 2017].

The Workers. (2014). The Workers: After Dark. [online] Available at: [Accessed 28 Oct. 2017].

Tate. (2014). IK Prize 2014: After Dark – Special Event at Tate Britain | Tate. [online] Available at: [Accessed 28 Oct. 2017].


Gage, Z. (2016). Installation View. [image] Available at: [Accessed 28 Oct. 2017].

Bate, A. (2016). Using Raspberry Pi to Create Poetry. [online] Raspberry Pi. Available at: [Accessed 28 Oct. 2017].

Gage, Z. (2016). ZACH GAGE – Glaciers @ Postmasters: March 25 – May 7, 2016. [online] Available at: [Accessed 28 Oct. 2017].

Gage, Z. (2017). He Says. [image] Available at: [Accessed 28 Oct. 2017].

Venture Culture – Week 1: Inspirations

Inspirational company: Bot & Dolly – Art and technology intertwined

When looking at the film industry, no company has worked outside the box quite like Bot & Dolly. Bot & Dolly was created by Jeff Linnell and Randy Stowell in 2009 as a spin-off from their production company, Autofuss. B&D utilise industrial robots as film-making tools from the future. (Shea, 2017)


IRIS & Scout, two of the Robots at Bot & Dolly, pictured in the piece “Box” (2013) – (Yellowtrace, 2013)

The KUKA robotic arm is an instrument designed for the production line – mainly car production – and is capable of a wide range of movement and of carrying heavy loads. B&D recycled these retired robots and put them to new purposes, such as film production and installation art. (Shea, 2017)


Kuka robotic arms on the production line – before re-purposing –(Robotiq, 2016)

B&D created a platform to integrate their robotic systems into the film-making world. Known as BDMove, the software allows the camera rigs to be controlled using Autodesk Maya, a package widely used within the industry, translating animations into real-life camera movements. (Shea, 2017) This is particularly exciting, as designers no longer need to learn a whole new skill set in order to use the robots. (Staff, 2013)
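BDMove itself is proprietary, but the core translation step can be illustrated generically: sample an animation curve, defined by keyframes, into timed targets that a motion-control rig could play back. Everything below (the keyframe format, function names, sample rate) is our own assumption, not B&D's system.

```javascript
// A one-axis animation curve as keyframes: time t (seconds) and position x.
const keyframes = [
  { t: 0, x: 0 },
  { t: 1, x: 10 },
  { t: 3, x: 10 },
  { t: 4, x: 0 },
];

// Linearly interpolate the curve at time t.
function sample(frames, t) {
  if (t <= frames[0].t) return frames[0].x;
  for (let i = 1; i < frames.length; i++) {
    if (t <= frames[i].t) {
      const a = frames[i - 1], b = frames[i];
      const u = (t - a.t) / (b.t - a.t);
      return a.x + (b.x - a.x) * u;
    }
  }
  return frames[frames.length - 1].x;
}

// Sample at a fixed rate to build the rig's list of timed targets.
function toTargets(frames, fps, duration) {
  const targets = [];
  for (let i = 0; i <= duration * fps; i++) {
    targets.push(sample(frames, i / fps));
  }
  return targets;
}

console.log(toTargets(keyframes, 2, 4)); // 9 targets over 4 seconds
```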

Inside the Bot & Dolly Studio – (Bloomberg, 2014)

Breaking into the film industry: Gravity


IRIS camera rig & light box, during filming for Gravity (2013 film)  – (dedeceblog, 2014)

B&D’s most notable work was on the 2013 film Gravity, where their robotic rigs were responsible for creating the groundbreaking dynamic lighting and camera effects. (Engelen, 2014)

“We built a system that could shoot a feature film, and actually shoot the majority of that film, so it had to be very malleable, very quick, and get into places you wouldn’t expect” – Jeff Linnell, creator of Bot & Dolly (YouTube, 2017)

Traditionally, space scenes would be created by suspending actors on wires to make them appear weightless, supported by post-production methods. (Shea, 2017) Instead, B&D created a light box with environment projections. Both this and the camera rig could be moved, creating realistic lighting and motion. The IRIS robotic camera rig could, for example, be made to move rapidly towards the actor, producing the effect of falling. (Engelen, 2014)

(A.UD IDEAS Lecture Series 2013-2014: Bot and Dolly – Movement and Precision, 2014)


“Box” by Bot & Dolly, 2013. Video source: (Box, 2013)

“Box” is a film created by B&D in 2013, combining their robotic systems with projection mapping. (Engelen, 2014) The piece was inspired by the “principles of stage magic”. (Munkowitz, 2013) To make the camerawork feel more natural, they motion-captured someone watching the performance and translated it into a camera path for a bot to follow. (Creators, 2013)

Making of “Box”:

“The process for making the piece was quite involved, combining conventional graphic design and animation tools with robotics animation, projection mapping, automated cinematography and a grip of other technologies unique to the studio.” – Bradley G Munkowitz, lead graphic designer for “Box” (Munkowitz, 2013)


Behind the scenes: creation of “Box” – (Reyneri, 2013)


B&D have not just created these robots for film, however. In 2012, they created an interactive installation for Google, named “Kinetisphere”, built using their Scout robot. Kinetisphere was a model of Google’s Nexus Q streaming device which viewers could control with Nexus gadgets. (Shea, 2017)

Video of Kinetisphere in action (Rodholm, 2012)

B&D was purchased by Google in 2013, along with several other robotics companies. (Google Acquisitions, n.d.) Some members have gone on to further projects, such as the Lightform projection system. (Lightform, 2017)

How they are Inspirational

B&D is inspirational to me because they look at how we can re-purpose objects that wouldn’t usually get a second thought outside their everyday uses, and they are a prime example of how taking a risk can lead to new heights.

Their film “Box” inspired my interest in new forms of film-making, even leading me to try projection mapping for myself.

They are successful because they arose as a spin-off from a small film company and gained popularity, climbing the ladder of company growth. They expanded from a small, humble production team to the masterminds behind a major Hollywood blockbuster, proving their worth not only to the robotics industry, but to the film-making industry.


Shea, C. (2017). The Robot Afterlife: An Exciting Story About the Post Factory Years. [online] Available at: [Accessed 9 Oct. 2017].

Staff, R. (2013). Bot & Dolly Fuses 3D Animation and Industrial Automation – Robotics Business Review. [online] Robotics Business Review. Available at: [Accessed 10 Oct. 2017].

YouTube. (2017). Bot & Dolly’s Iris, World’s most advanced Robotic motion control camera system. [online] Available at: [Accessed 10 Oct. 2017].

Pescovitz, D. (2014). Bot & Dolly and the Rise of Creative Robots. [online] Available at: [Accessed 11 Oct. 2017].

Engelen, J. (2014). Bot & Dolly – a small company with BIG Robots – Dedece Blog. [online] Dedece Blog. Available at: [Accessed 10 Oct. 2017].

A.UD IDEAS Lecture Series 2013-2014: Bot and Dolly – Movement and Precision. (2014). UCLA: UCLAArchitecture. Available at:

Munkowitz, B. (2013). Box Demo. [online] Available at: [Accessed 9 Oct. 2017].

Box. (2013). Directed by T. Abdel-Gawad. San Francisco: Bot & Dolly.  [online] Available at:

Yellowtrace. (2014). ‘Box’ | Projection Mapping on Moving Surfaces by Bot & Dolly.. [online] Available at: [Accessed 11 Oct. 2017].

Reyneri, P. (2013). Box. [online] Phil Reyneri. Available at: [Accessed 10 Oct. 2017].

Creators. (2013). Behind The Scenes Of Box By Bot & Dolly. [online] Available at: [Accessed 10 Oct. 2017].

Kinetisphere: An Interactive Installation for Google IO 2012. (2012). Directed by A. Rodholm. San Francisco: Bot & Dolly. Available at:

Google Acquisitions. (n.d.). Bot & Dolly. [online] Available at: [Accessed 11 Oct. 2017].

Lightform. (2017). Lightform: Projection Mapping Evolved. [online] Available at: [Accessed 11 Oct. 2017].

dedeceblog (2014). Gravity IRIS camera rig. [image] Available at: [Accessed 11 Oct. 2017].

Robotiq (2016). KUKA robotic arms. [image] Available at: [Accessed 19 Oct. 2017].

Yellowtrace (2013). IRIS & Scout in “Box”. [image] Available at: [Accessed 24 Oct. 2017].

Rodholm, A. (2012). Kinetisphere: An Interactive Installation for Google IO 2012. Available at: [Accessed 27 Oct. 2017].

Reyneri, P. (2013). Making of Box. [image] Available at: [Accessed 28 Oct. 2017].