I spent Monday and Tuesday of this week at EVA, the Electronic Visualisation and the Arts conference, which I was lucky enough to attend on a bursary. It’s a multidisciplinary event, which describes itself as being about “electronic visualisation technologies in art, music, dance, theatre, the sciences and more…” It was certainly diverse. Some presentations were technical or academic, others were more practical, and they covered a range of subjects from a wide variety of institutions and individuals. “Electronic visualisation” covers a lot. I left quite inspired, if wondering what on earth to do with all the ideas now whirling around my brain. I highly recommend it.
Here are some of my highlights from the two days I spent there. For the full proceedings go to http://ewic.bcs.org/category/17656#.
Creating Magic on Mobile
My paper kicked off the day, so let’s get that out of the way. I presented (with Alex Butterworth of Amblr) on “Creating Magic On Mobile”. You can read the full paper here; it discusses our experiences creating the Magic in Modern London app for Wellcome Collection (where I worked at the time as Multimedia Producer). It’s a geolocated treasure hunt set in Edwardian London, so the paper covers the use of GPS, audio-visual content and a map-based interface to create a historical, semi-fictionalised narrative.
It also contains a call for more imaginative use of mobile in cultural organisations, shows how we attempted this with Magic in Modern London, and describes some of the challenges we faced along the way (such as trying to overlay a 1908 map onto Google Maps). It was a pretty experimental project, but also very ambitious, and we learnt a huge amount. It played a large part in my drive to help set up ME:CA (Mobile Experiences: Cultural Audiences).
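The paper doesn’t go into code, but the map-overlay problem is a nice concrete one. As a rough sketch (the control points and numbers below are entirely hypothetical, and this is not necessarily how our app did it), georeferencing a scanned historical map usually means fitting a transform from scan pixels to geographic coordinates using a few landmarks visible on both the old map and the modern one:

```python
# Sketch of georeferencing a scanned historical map so it can be
# overlaid on a modern web map. Control points are hypothetical;
# a real project would pick landmarks visible on both maps.

import numpy as np

def fit_affine(pixel_pts, lonlat_pts):
    """Least-squares affine transform from scan pixels to (lon, lat)."""
    px = np.asarray(pixel_pts, dtype=float)
    ll = np.asarray(lonlat_pts, dtype=float)
    # Design matrix with rows [x, y, 1]
    A = np.hstack([px, np.ones((len(px), 1))])
    # Solve A @ coeffs ≈ ll for a 3x2 coefficient matrix
    coeffs, *_ = np.linalg.lstsq(A, ll, rcond=None)
    return coeffs

def pixel_to_lonlat(coeffs, x, y):
    """Map one pixel position through the fitted transform."""
    return tuple(np.array([x, y, 1.0]) @ coeffs)

# Three (or more) matched control points: (pixel x, pixel y) -> (lon, lat)
pixels = [(0, 0), (4000, 0), (0, 3000)]
lonlats = [(-0.20, 51.54), (-0.05, 51.54), (-0.20, 51.48)]

coeffs = fit_affine(pixels, lonlats)
print(pixel_to_lonlat(coeffs, 2000, 1500))  # centre of the scan
```

An affine fit assumes the old map is only rotated, scaled and sheared relative to modern coordinates; surveying errors in a 1908 map mean it rarely lines up that cleanly, which is part of why the overlay was such a challenge.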
Keynote: Steve DiPaola “Future trends: Adding Computational Intelligence, Knowledge and Creativity to Interactive Exhibits and Visualization Systems”
Steve DiPaola (a “computer based cognitive scientist, artist and researcher”) gave a fascinating keynote that covered several of his projects. The notes from his talk are at http://dipaola.org/eva13/. These included:
- A beluga whale pod interactive simulation at the Vancouver Aquarium, used both in the gallery and as part of scientific research. The installation allows visitors to observe particular behaviours and test out scenarios.
- Teaching evolution through art: using “evolved” artworks to help people really get what evolution means – particularly challenging in America, where 60% of people don’t believe in it (apparently). This was done as part of Darwin’s 200th birthday celebrations.
- Analysis of Rembrandt’s works to understand his compositions and techniques and their effect on the viewer’s eye path. This work concluded that Rembrandt had an impressive understanding of vision science 200 years before anyone else.
- A project to create life-like (ish) 3D avatars that people can speak through to address people remotely, and perhaps anonymously (e.g. scientists having a digital conversation with the public from another country).
- An analysis of Picasso’s “Guernica” and its 40-day period of creativity. This mapped all the works created during this prolific period onto a timeline to try to understand the nature of his creative process, how ideas evolved, and how he was influenced by other works.
Timelines
Timelines were also a feature of other talks, such as one about “representing uncertainty in historical time”. I didn’t see that presentation, but did see the demo later, which turned the works of a composer into a timeline mapping them against their first performances. I can’t find a link for this, but I was impressed, as with the Guernica project above, at the way putting data onto a timeline can reveal new insight.
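Neither timeline project has data I can link to, but the basic move – bucketing dated works onto a timeline to expose bursts of activity – is simple to sketch. The works and dates below are invented purely for illustration:

```python
# Illustrative only: bucket a list of dated works by ISO week to
# see where the bursts of activity fall on a timeline.

from collections import Counter
from datetime import date

works = [
    ("Study I",   date(1937, 5, 1)),
    ("Study II",  date(1937, 5, 2)),
    ("Study III", date(1937, 5, 2)),
    ("Sketch A",  date(1937, 5, 20)),
    ("Final",     date(1937, 6, 4)),
]

# Count works per (ISO year, ISO week)
per_week = Counter(d.isocalendar()[:2] for _, d in works)
for (year, week), n in sorted(per_week.items()):
    print(f"{year} week {week}: {'#' * n}")
```

Even a crude histogram like this makes clustering visible at a glance, which is the kind of insight both the Guernica analysis and the composer timeline seemed to be after.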
More museums and mobile (with added AR)
There were a couple of other museum mobile projects, both of which used augmented reality (AR) in different ways. Timeline Newcastle looked interesting, and quite similar to Streetmuseum in overlaying archive photos onto the modern city (if I understood it correctly). I was particularly interested to see that the Smithsonian Natural History Museum has a very ambitious app in the works called Skin and Bones. It will focus on 14 objects, giving different levels of engagement with each, based on their framework for visitor preferences called IPOP (ideas, people, objects and physical). So for visitors who want to hear about people, there is a meet-the-scientist option; for physical, an activity; for ideas, an exploration of a scientific concept; and for objects, a detailed study of the skeleton, some of which uses AR to overlay the skeleton with an animation.
I was interested in their concept of using it to raise “visual literacy”, i.e. helping visitors to understand and interpret what they are seeing, and in doing so increase dwell time in the galleries. They had recognised that visitors were spending little time in this gallery, except at a few “hero” skeletons, and surmised that people weren’t finding it interesting because they didn’t know how to make sense of what they were seeing. So in the app, for example, it will show you how a particular venomous snake’s jaws hinge back into its mouth, and how you can see that in the skeleton.
They have an impressive (and expensive-sounding?!) user testing plan. They are creating two apps, one with AR and one without, to see how that affects visitor interaction, and will be using a beta app path analysis tool called Look Back to help with this (although this feature is also part of Google Analytics’ mobile analytics).
Keynote: Linda Candy “Creativity and Evaluation: Supporting Practice and Research in the Interactive Arts”
This was an impassioned call for making evaluation a key part of the creative process, in a keynote from writer and researcher Linda Candy. Evaluation is not just about measuring impact, she said, but also about creating new knowledge and new works (though impact is still important, I would add!), and it should be fully embedded in practice. For artists creating digital interactive works there are also usability issues they should be testing (do people understand how to interact with the work, for example? Is it within reach of children and wheelchair users?).
Obviously, as a huge fan of evaluation, I am very much down with this. Evaluation isn’t just about fulfilling the tedious criteria of funding bodies; it’s about understanding and improving your own work. The evaluation work I’ve done on games has been some of the most interesting and thought-provoking work I’ve done.
Other assorted interesting presentations: critical robots, doomsday clocks and more
There were several other interesting demos and presentations.
- GPS visualisations: “The poetry of data in collaborative GPS visualisations”, about the comob net app (for co-mobility), which shows people’s positions in relation to each other and creates visualisations based on that.
- The Neurotic Armageddon Indicator, which introduced me to the concept of “data sculpture” and is a representation of the doomsday clock, giving the impression that it is based on hard data rather than the deliberation of a board of scientists (paper here).
- Kulturbot 1.0, a robot that roams galleries and critiques the works on the wall via tweets (and projection, I think).
- The Olympic Audience Pixel project was described by Ed Cookson of Crystal CG, who took us through the mind-blowing scale of the work to create the panels in the audience for the various Olympic ceremonies, which together became the world’s largest video screen. The idea was to make sure that the audience were fully immersed in the ceremonies. It was a monumental undertaking (part of an even more monumental undertaking, of course) and done in just a few months. As I said on Twitter at the time, if it had been my responsibility to pull this off, I’d wake up screaming every night.
- Converting CNC (robotic) routers for painting. I hadn’t come across CNC routers before – look how adorable/creepy the little ones are!
- Visualising Museum Stories was about the Decipher project, which has led to the production of the open source Storyscope curation tool, currently being trialled. It seemed to be trying to create a novel way of putting together online exhibitions.
- Irida Ntalla presented her research at the Museum of London about touch screen interfaces and haptic interactions; in particular I noted the observation that the screens encouraged interaction between the people using them. As an aside, if you haven’t checked out Irida’s brilliant Participations journal on audience research and reception, I recommend it.
- There were several papers on more performance-related subjects – sound-generating imagery (including cymatics, which I had to look up) being incorporated into dance, dance being turned into digital visualisations, the use of holographics and so on. If you are interested in these, try http://ewic.bcs.org/content/ConWebDoc/50975 (about Kima, http://analemagroup.wordpress.com/) or http://ewic.bcs.org/content/ConWebDoc/50978 (about EMViz, which visualises movement according to the Laban movement types).
- There were also several papers about the analysis of music and its use for various purposes, including as a visualisation forming part of a performance (http://ewic.bcs.org/content/ConWebDoc/50980). See also http://ewic.bcs.org/content/ConWebDoc/51005 on multimodal music composition and http://ewic.bcs.org/content/ConWebDoc/51004 on a new piece of software for music analysis.
Stop, my brain is full
There was a whole other day of this that I missed – probably for the best, as by the end of these two my head was spinning.