There are various attempts by science museums to bring to life some of the hidden ways in which the Internet works. When I visited the Science Museum in Chicago about ten years ago, there was an exhibit where I took a photograph of myself that was then transmitted to the other end of the gallery and displayed on a screen; the transmission of the picture was visualised on a wall, along which small packets moved towards the screen.

A few months ago, in late March, the National Media Museum’s Internet Galleries in Bradford opened together with Life Online, pursuing a similar goal: making the development and functioning of the Internet intelligible.

Now, in late July 2012, the Science Museum in London together with Google launched Chrome Weblab, “a series of interactive Chrome Experiments made by Google that bring the extraordinary workings of the internet to life”. The exhibition is in the basement of the Wellcome Wing. When I visited, the gallery had just opened to the public and was already heaving with people.

Weblab comprises five ‘experiments’ people can engage with by using a Lab Tag and the various interfaces and systems displayed in the space. On entering the gallery, each visitor can draw a Lab Tag from a computer system; it serves as an identifier through which visitors’ engagement with the individual experiments is recorded and made retrievable from home. From here the route took me into the gallery and to the first large screen, the Data Tracer.

On entering the gallery I heard musical sounds that apparently came from the centre of the space, but I had no idea who or what produced them, or why. On closer inspection I saw a number of machines that looked like musical instruments, making sounds without anybody in particular playing them. I was intrigued, but before I could move to one of those instruments a person at the exhibit in front of me left the computer system, and I engaged with the Data Tracer.

  

Data Tracer comprises three or four small screens connected to a large display showing a map of the world. On arrival I waited for a few minutes until a small screen became available and then fed my Lab Tag into an interface. I was then presented with a number of thumbnail images showing objects and photographs of faces; on selecting one of the thumbnails, a large copy of the image appeared on the large screen opposite, which located the physical place where the image is stored and then drew lines from there back to the Science Museum. The exhibit thus visualises the breaking up of the image into data packets and their ‘journey’ to the Science Museum. Like the old exhibit at the Chicago Science Museum, this Weblab experiment makes visible the process behind using the Google search engine.
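The packet ‘journey’ that Data Tracer animates can be sketched in a few lines of code. This is a toy illustration, not the exhibit’s actual software: an image, represented here simply as a byte string, is split into numbered packets that may travel independently and arrive out of order, and is then reassembled at the destination.

```python
# A toy sketch (hypothetical, not the exhibit's actual software) of the idea
# the Data Tracer visualises: data is split into numbered packets that can
# travel independently and be put back together at the destination.

def to_packets(data: bytes, size: int = 4):
    """Split data into (sequence_number, chunk) packets of at most `size` bytes."""
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets):
    """Restore the original data, even if packets arrived out of order."""
    return b"".join(chunk for _, chunk in sorted(packets))

image = b"pretend this is image data"
packets = to_packets(image)
packets.reverse()                     # simulate out-of-order arrival
assert reassemble(packets) == image   # the 'journey' ends with the original data
```

The sequence numbers are what make the journey robust: however the packets are scrambled en route, sorting by sequence number restores the picture.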

Having experimented with the exhibit for a while by tapping on two or three of the thumbnails, I noticed other visitors waiting behind me and moved on to the next experiment, the Sketchbots, where robots draw in sand the faces, captured by webcam, of physical visitors in the gallery and of online visitors.

Only a few people stopped for longer than a minute or so at the robots, often moving on when they noticed that at the next set of robots they could have their own faces, or those of their children, drawn.

http://www.youtube.com/watch?v=CkzXSZnDs1E&feature=player_embedded

The process fascinates people. Having taken a picture, they observe the robot at work and their image appearing. They take pictures on their cameras or film the process with their mobile phones, commenting on the delicate strokes the machine makes in the sand. People also use the possibility of taking pictures of others as a means to engage their (small) children with the exhibit, who otherwise might not stay with the experiment for long. They lift children up in front of the camera, take a picture of their face and then show them that the robot is drawing that picture in the sand; the activity keeps the children engaged with the exhibit for a considerable time.

From the robots my visitor journey took me to the Teleporter, an exhibit that uses periscopes connected to the web to look at locations around the world pre-determined by the designers. For somebody on their own, using the periscope can feel a bit strange, as you pull the system in front of your eyes and lose awareness of what is happening around you.

Looking through the periscope I saw the inside of an aquarium located in Cape Town and could turn around to get a 360-degree view of the space. On occasion I pressed a button at the top of the periscope to take a photograph that, with the help of the Lab Tag, was saved to my account. As I discovered when leaving the periscope, my picture was displayed on the wall behind the exhibit on small digital photo frames, together with those taken by others. The picture bears a time-stamp and can be discussed with others who had no access to what I was looking at while using the system.

One of the potentially most exciting exhibits is the Universal Orchestra, a robotic orchestra made up of eight instruments simultaneously operated by people in the gallery and on the Internet. The instruments are located in the centre of the gallery, each equipped with a computer system that people can use to create sounds. You touch different notes on the screen, and the information is fed to the robot, which then creates the sound.

Arriving here helped explain the soundscape I had been hearing on entering the gallery. As with some of the other exhibits, I was at first a secondary user, watching how others used the systems and what they did before I gained access to one of the instruments. The interaction with the system kept me busy for a while, as I tried to figure out how my actions on the computer screen related to the sounds made by the instruments. The exhibit is described as a “real-time collaboration with people across the world”, but because it is difficult to make out who creates which sound, the use of the notion of “collaboration” to describe the events is problematic.

http://www.youtube.com/watch?v=jCXX02dFbIM&feature=player_embedded

Finally, I went to a workstation where the Lab Tag is used to retrieve information about the activities a visitor has engaged in during their visit to the Weblab. The Lab Tag is slotted into the system, and the computer screen shows which exhibits the visitor has been at and what they have accomplished there; for example, the photograph taken with the periscope or the sounds produced as part of the Universal Orchestra can be revisited. Seeing on the screen what I had done, and what I had missed doing, encouraged me to return to the gallery and conduct some further experiments with the Universal Orchestra before leaving the exhibition.

Having arrived back home, I booted my computer to visit the Online Chrome Weblab. I typed in the web address given on the back of the Lab Tag, scanned in the tag and immediately arrived at my Lab Report. The site shows my activities in the galleries at the Science Museum and allowed me to conduct the same experiments online. When opening, for example, the Online Sketchbot, a page shows live footage from the gallery before opening a screen that looks very similar to the one in the gallery. I took a picture of myself, which was then processed, ready for the robot to draw in the sand.

I then typed in my email address through which the system later notified me that the robot had completed its job.

The other exhibits work in a similar way. The Online Data Tracer invites visitors to ask the system for the physical location of an image file. I typed in my Twitter handle, and the system located the associated picture in Isenburg, a small city in the German federal state of Hesse. The Online Teleporter allows the user to click on an image and obtain a live view into the bakery in North Carolina, the miniature exhibition in Hamburg and the aquarium in Cape Town. And the Online Universal Orchestra provides access to the eight instruments; one can view events in the gallery and play the instruments there from a remote location, audible both to visitors in the museum and remotely. The music played can be recorded and, like the activities at the other exhibits, is then retrievable via the Online Lab Tag Explorer.

Chrome Weblab is a fascinating experiment of an exhibition. It tries to make intelligible that the Internet connects remote locations across the planet, and that this connectedness involves much more than access to information through search engines and web browsers: it also makes it possible to act and interact with machines and people across the world in real time.

The exhibition invites visitors to engage and participate with exhibits, in the gallery and remotely, and to discover for themselves the relationship between the Internet and the social world. It succeeds in engaging people with the topic of the Internet for a considerable time and creates an awareness of the connected world we now live in: robots can be operated remotely, people in distant locations can “collaboratively” make music, and we can peek into the worlds of others from afar.

Over the past ten years or so I have had the opportunity to study visitors engaging with technology in museums, including the Science Museum and the Wellcome Wing. For me, therefore, visiting Chrome Weblab was also interesting as a chance to see how features of exhibits in Who am I? and Digitopolis have been further developed by the design team of Chrome Weblab. For example, the replacement of the flaky fingerprinting mechanism, previously used to save visitors’ activities with exhibits to a server, by the physical Lab Tag is a huge improvement. The tag works well and without problems with webcams at home (and at work), and it also makes a nice memento of the visit. However, I could imagine the Lab Tag being transferred to a mobile phone in the future, as people tend to lose or forget items they take away from museum visits. The taking of photographs of people’s faces, which was a critical feature of exhibits in Who am I?, has also been improved: the interface is much more flexible and adaptable in its use of the pictures visitors take.

There are three aspects of the exhibition that I believe might be worthwhile for the design team and Google to explore further when revising the galleries. First, I think the key message of Weblab, i.e. interconnectedness, is not coming through clearly enough. The relationship between people’s actions in the gallery and remotely needs to be made more intelligible and obvious. For example, at the moment it is unclear who plays which note on the instruments of the Universal Orchestra; at the Data Tracer, the actions on the small screens could be made visible; and at the Sketchbots, more needs to be done to make the activities of remote participants visible, to give this part of the exhibit more prominence in the gallery.

Second, as the gallery is described as a laboratory, the design team and their research staff might use it not only to experiment with technology but also as a space in which to experiment with human behaviour in technology-rich environments. For example, it has been a common problem for museums displaying a large number of computer-based exhibits that the number of interfaces is often much lower than the number of visitors who wish to use them at any one time. This leads to long waiting times and queues at exhibits, to people becoming secondary users rather than experiencing exhibits first-hand, and, unfortunately, to people leaving disappointed because they did not get a chance to use an exhibit first-hand. Set up as an experimental space, the gallery would allow the design team to try out different ways to manage the flow through the galleries and access to the exhibits.

And third, and maybe most importantly, considering that many visitors come with friends and family, the design team could use the space to experiment with the provision of resources that facilitate and encourage collaboration at computer-based exhibits. The observations at the Sketchbots, where parents provide their children with access to the exhibit, illustrate that visitors are interested in experiencing the exhibits together, yet the interfaces often prioritise individual users over collaboration. It would be fascinating to see experiments with novel interfaces that encourage visitors to collaborate with others in the gallery, as well as with people in remote locations.

References

Heath, C., & vom Lehn, D. (2008). Configuring Interactivity: Enhancing Engagement in Science Centres and Museums. Social Studies of Science, 38(1), 63–91.

Heath, C., & vom Lehn, D. (2004). Configuring Reception: (Dis-)Regarding the “Spectator” in Museums and Galleries. Theory, Culture & Society, 21(6), 43–65.

Heath, C., Luff, P., vom Lehn, D., Hindmarsh, J., & Cleverly, J. (2002). Crafting participation: designing ecologies, configuring experience. Visual Communication, 1(1), 9–33.

Hindmarsh, J., Heath, C., vom Lehn, D., & Cleverly, J. (2005). Creating Assemblies in Public Environments: Social interaction, interactive exhibits and CSCW. Journal of Computer Supported Cooperative Work (JCSCW), 14(1), 1–41.

vom Lehn, D., Hindmarsh, J., Luff, P., & Heath, C. (2007). Engaging constable: revealing art with new technology. Proceedings of the SIGCHI Conference on Human-Computer Interaction (pp. 1485–1494). San Jose, CA: ACM Press.

vom Lehn, D. (2010). Generating experience from ordinary activity: new technology and the museum experience. In D. O’Reilly & F. Kerrigan (Eds.), Marketing the Arts: A fresh approach (pp. 104–120). Abingdon: Routledge.

vom Lehn, D., & Heath, C. (2005). Accounting for new technology in museum exhibitions. International Journal of Arts Management, 7(6), 11–21.

vom Lehn, D., Heath, C., & Hindmarsh, J. (2001). Exhibiting interaction: Conduct and collaboration in museums and galleries. Symbolic Interaction, 24(2), 189–216.

@dirkvl

http://www.vom-lehn.net

 


Google and Academic Research


 

The other day I was reading an academic paper on an iPad; the paper had a number of references to Jack Kerouac’s ‘On the Road’. Halfway through the paper, my desktop signalled the arrival of an email. On opening it, I had to look twice – the publisher Penguin had sent me a message saying that Jack Kerouac’s ‘On the Road’ was now available as an ebook. Coincidence? Most probably. But the crawling and monitoring of our online activities makes such events increasingly possible and likely to occur.


One company that engages in such monitoring of people’s online activities is Google. Recent publications have critically discussed these activities and pointed to the pitfalls for users and customers. Eli Pariser (2011) explicates how Google, Facebook and other companies use the tracking of online behaviour to reduce the amount of information made available to us, presenting us with personalised information instead. Results on Google Search are tailored to our search behaviour, and our interaction on Facebook tailors the News Feed to show information posted by those we interact with, whilst others of our ‘friends’ no longer appear in the feed. The result is what Pariser calls the ‘Filter Bubble’, which makes us read, watch and listen to more of the same.


Pariser’s book has had considerable coverage in the media’s review sections and on blogs. Whilst its principal argument is appreciated, it has been criticised for not taking into account the complexity of recommendation engines and the practices of people’s search behaviour. If an initial search result is dissatisfying, we continue our search without taking Google for granted as an authority that cannot be questioned. We might even try Yahoo or Bing to see what results they produce. Yet, at first glance, the way in which Google presents its results suggests that there is an authority at work providing us with comprehensive, objective and unbiased search results.

For academics, therefore, Google Scholar often seems to be the first and best port of call when searching for academic articles. Google Scholar has made access to scholarly research easy and convenient. You type keywords into the search engine and it returns a list of results ordered by relevance. The results link to academic journals that, with the appropriate access, can be downloaded immediately. Again, the impression given is that the results are comprehensive and unbiased. No indication is given that other (perhaps more relevant) research might be out there beyond what is presented on the screen.

Siva Vaidhyanathan’s ‘The Googlization of Everything’ powerfully dismantles the view of Google search as providing unbiased results. Without discounting the benefits Google offers us all, Vaidhyanathan explicates the logics that drive Google Search and the implications they have for how we see the world. Like Pariser, he explains how Google Search tailors its results to our online activities. In producing search results, Google not only looks at our past searches but also takes into account what we are currently doing in any of the Google Apps, including Google Docs or Gmail. Moreover, Google Search and Google Scholar can only find information from sources that make it available to them.

In terms of Google Scholar, this means that the search engine only finds articles from publishers who have a contract with Google to make information from their publications available. For example, when I recently looked for literature on German sociology via Google.de, I was struck by the fact that I was provided with information from amazon.de and from self-publishing sites that hold student coursework, but not from the major German publishers disseminating the key German texts in the subject.

All this considered, it would seem that whilst Google Scholar and Google Search might be a good place to start research, it is then advisable to move on to more reliable sources such as ISI’s Web of Knowledge and other scientific citation indexes. Otherwise, it would seem, scientific and social-scientific research will also be caught in the filter bubble, referring and cross-referring only to those publications that Google provides.

Some References

Eli Pariser (2011). The Filter Bubble. Viking.
http://www.thefilterbubble.com/

Siva Vaidhyanathan (2011). The Googlization of Everything. University of California Press.
http://www.googlizationofeverything.com/

Neal Lathia (2011). Blog post: Blowing Filter Bubbles.
http://urbanmining.wordpress.com/2011/06/20/blowing-filter-bubbles/#entry