Of Grasshoppers and other Types

exhibitions, interaction, Marketing, museums, public places, visitors

The design of systems to support people's navigation of exhibitions often draws on concepts and theories about visitors' movement through exhibitions. Drawing on relevant literature, it makes inferences about people's interests in exhibits from the ways in which they navigate galleries, which exhibits they stop at, and for how long. In this context, designers and museum managers often talk about "visiting styles" and refer to a French paper by Véron and Levasseur (1991). There, the authors apparently (I have not read the paper myself) use an analogy from the animal world to describe four types of visiting style: ants, fish, butterflies and grasshoppers. These types are seen as ideal types, and it is argued that mixed styles of navigation are common. In fact, as Oppermann and Specht (2000) suggest in reference to Bianchi and Zancanaro's (1999) conference paper, "the classification of a visitor is no longer made stereotypically by describing a visitor uniquely as one of the four animals, but as an estimation of the 'degree of compatibility between the user's movement pattern and the four stereotypes' at a given point in time" (Bianchi and Zancanaro 1999, in Oppermann and Specht 2000: 132). From this typology, probabilities are derived regarding people's navigation patterns. This allows for the fact that visitors might change their visiting style 'mid-fly', i.e. as they navigate an exhibition. For example, a fish who has spent relatively little or no time with exhibits in one gallery may encounter a gallery with objects s/he is more interested in, and therefore spends more time with them, thus turning into an ant.
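To make the idea of a 'degree of compatibility' between a visitor's movement pattern and the four stereotypes more concrete, here is a purely hypothetical sketch. The feature set (fraction of exhibits stopped at, mean dwell time) and the similarity measure are my own invention for illustration; Bianchi and Zancanaro's actual model may work quite differently.

```python
# Hypothetical sketch: scoring a visitor's "degree of compatibility"
# with the four visiting-style stereotypes. Features and similarity
# measure are invented for illustration, not the authors' model.
import math

# Each stereotype described by two toy features:
# (fraction of exhibits stopped at, mean dwell time per stop, minutes)
STEREOTYPES = {
    "ant": (0.9, 2.0),          # stops at almost everything, dwells long
    "fish": (0.2, 0.5),         # glides past, few brief stops
    "butterfly": (0.6, 1.0),    # frequent but unguided, shorter stops
    "grasshopper": (0.3, 2.5),  # few pre-selected exhibits, long dwell
}

def compatibility(observed, stereotypes=STEREOTYPES):
    """Return normalised similarity scores (softmax of negative
    Euclidean distance) between an observed movement pattern and
    each stereotype; scores sum to 1 and can shift 'mid-fly' as
    the observed pattern changes."""
    weights = {
        name: math.exp(-math.dist(observed, feats))
        for name, feats in stereotypes.items()
    }
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# A visitor who stops often and dwells long scores highest as an ant.
scores = compatibility((0.85, 1.8))
print(max(scores, key=scores.get))  # -> ant
```

Recomputing the scores as new observations arrive is what allows the classification to change while the visitor is still in the exhibition, rather than pinning them to one animal for the whole visit.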

This concept of visiting style links the way and speed in which people navigate exhibitions to their level of engagement with exhibits. Underlying this concept of museum visiting are conventional measures of visitor research, i.e. the stopping and holding power of exhibits, coupled with theories of learning, such as the late Chan Screven's (1976) goal-referenced approach, which links assumptions about 'learning from exhibits' to the time people spend with them. Using this approach, it is possible to argue for technologies that promise to extend the time of people's engagement with exhibits because, according to the theory, this leads to cognitive development.

A different but related kind of typology has been developed by John Falk (2009) in his book "Identity and the Museum Visitor Experience". Here, Falk proposes to link visitor behaviour to people's motivations grounded in their identity. His argument is more complex than the typology discussed above. It can be seen as an expansion of earlier work in which the same author, together with colleagues, investigated visitors' agendas for museum visiting.

Like Véron and Levasseur's (1991) typology, Falk's differentiation of visitors into types represents a classification scheme that cannot be found in this form in reality. It is an attempt to bring order to a messy social world, and it seems very useful to museum managers and marketing managers precisely because of this lack of messiness. They can use such typologies to make decisions about exhibition programmes or about technologies to be deployed in their galleries.

Such theories about museum visiting, however, largely ignore the reality of visitors' experience of museums. They neglect what people actually do in museums: how they approach, examine and depart from exhibits, and how they produce experiences of exhibits and generate experiences for others. This neglect is grounded in related research that is primarily interested in the individual visitor, or in groups and families that are considered as social entities rather than as dynamic social processes. Researchers see the origin of actions, such as the approach to an exhibit or the departure from it, either in the visitor's motivation or in the design of the exhibit. Yet, save for a very few exceptions, these researchers rarely look at how people draw each other to examine exhibits, how they encourage each other to inspect objects in particular ways, how they generate experiences for each other, and how they occasion each other to move on.

By investigating the details of people's actions at the "point of experience", where the action is and where it can be observed, researchers see how people produce experiences of exhibits in interaction with others. Whilst on the surface these details appear 'messy', a closer look reveals that they are systematically produced and intelligibly orderly. Visitors in galleries behave in intelligible ways, and their action becomes observable and reportable as museum visiting without their requiring theoretical typologies to make sense of each other's action.

It would seem that basing decisions on detailed knowledge about what people actually do in museums would provide decision makers with a safer footing than theories about visitors' actions. Are there any museum managers or designers out there who use detailed observational or video-based research to inform their decision making?

 

For related research go here

 

References

Bianchi, A., & Zancanaro, M. (1999). Tracking users' movements in an artistic physical space. In M. Caenepeel, D. Benyon, & D. Smith (Eds.), Proceedings of the i3 Annual Conference: Community of the Future, October 20–22, 1999, Siena (pp. 103–106). Edinburgh: The Human Communication Research Centre, The University of Edinburgh.

Falk, J. H. (2009). Identity and the Museum Visitor Experience. Walnut Creek, CA: Left Coast Press.

Heath, C., & Vom Lehn, D. (2004). Configuring Reception: (Dis-)Regarding the “Spectator” in Museums and Galleries. Theory, Culture & Society, 21(6), 43–65. doi:10.1177/0263276404047415

Oppermann, R., & Specht, M. (2000). A Context-Sensitive Nomadic Exhibition Guide, 127–142.

Screven, C. G. (1976). Exhibit evaluation: A goal-referenced approach. Curator, 19(4), 271–290.

Véron, E., & Levasseur, M. (1991). Ethnographie de l'exposition: L'espace, le corps et le sens. Paris: Centre Georges Pompidou, Bibliothèque Publique d'Information.

vom Lehn, D. (2006). Embodying experience: A video-based examination of visitors’ conduct and interaction in museums. European Journal of Marketing, 40(11/12), 1340–1359. doi:10.1108/03090560610702849

vom Lehn, D. (2012). Configuring standpoints: Aligning perspectives in art exhibitions. Bulletin suisse de linguistique appliquée, 96, 69–90.

vom Lehn, D. (2013). Withdrawing from exhibits: The interactional organisation of museum visits. In P. Haddington, L. Mondada, & M. Nevile (Eds.), Interaction and Mobility: Language and the Body in Motion. Berlin: de Gruyter.

Apple Maps – as conversation starter?

analysis, interaction, Social Media, Twitter

A lot has been written about Apple's problems with their Maps application. Apparently, motorists stranded in a national park in Australia after relying on the app had to be rescued, and many people complain or joke about its problems.

This morning, I received a tweet from @CityJohn, who used the app after arriving at Clapham South Tube station (South London). He opened the app and triggered the locate function, only to be shown this map.

[image]

In his tweet @CityJohn writes: [image]

I don't know what possessed me, but I opened up my Apple Maps app and searched for Clapham Common and was shown this map.

[image]

As far as I can tell, the map accurately locates Clapham Common, and I decided to pass a picture of the map on to @CityJohn. I have no idea about, or interest in, the technical workings of Apple Maps but found it interesting how Apple Maps, not only in this case, has become a conversation starter on Twitter. We all know by now that the app is anything but perfect, and there is no need to post more examples of its shortcomings. But by posting curious examples one is almost certain to receive a response from others.

So, not surprisingly, when checking @CityJohn's Twitter stream there now is at least one other short 'Twitter conversation', just like the one I had with him. Maybe it's worthwhile creating a collection of such instances. Maybe this is not everybody's cup of tea though…

There are various attempts by science museums to bring to life some of the hidden ways in which the Internet works. When I visited the Science Museum in Chicago about 10 years ago, there was an exhibit where I took a photograph of myself that was then transmitted to the other end of the gallery and displayed on a screen; the transmission of the picture was visualised on a wall, where small packages moved along towards the screen.

A few months ago, in late March, the National Media Museum's Internet Galleries in Bradford opened together with Life Online, which pursue a similar goal: making the development and functioning of the Internet intelligible.

Now, in late July 2012, the Science Museum in London together with Google launched Chrome Weblab, "a series of interactive Chrome Experiments made by Google that bring the extraordinary workings of the internet to life". The exhibition is in the basement of the Wellcome Wing. When I visited, the gallery had just opened to the public and was already heaving with people.

Weblab comprises five 'experiments' that people can engage with by using a Lab Tag and the various interfaces and systems displayed in the space. On entering the gallery, each visitor can draw a Lab Tag from a computer system; the tag is used as an identifier through which visitors' engagement with the individual experiments is recorded and made retrievable from home. From here, the route took me into the gallery and to the first large screen, the Data Tracer.

On entering the gallery I heard musical sounds that apparently came from the centre of the space, but I had no idea who or what produced them, or why. On closer inspection, I saw a number of machines that looked like musical instruments and made sounds without anybody in particular playing them. I was intrigued, but before I got to move to one of those instruments, a person at the exhibit in front of me left the computer system, and I engaged with the Data Tracer.

  

Data Tracer comprises three or four small screens connected to a large display showing a map of the world. On arrival, I waited for a few minutes until a small screen became available and then fed my Lab Tag into an interface. I was then confronted with a number of thumbnail images showing objects and photographs of faces; on selecting one of the thumbnails, a large copy of the image appeared on the large screen opposite, locating the physical place where the image is stored and then drawing lines from there back to the Science Museum. Thus, the exhibit visualises the transformation of the image into data packages and their 'journey' to the Science Museum. Like the old exhibit at the Chicago Science Museum, this Weblab experiment makes visible the process of using the Google search engine.
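The idea the exhibit visualises, a file being broken into numbered packages that travel separately and are reassembled at the destination, can be sketched in a few lines. This is a toy illustration only, not the Weblab implementation; the chunk size of 1500 bytes is simply borrowed from a typical Ethernet payload limit.

```python
# Toy illustration (not the Weblab implementation): splitting an
# image file into fixed-size, numbered "packages", as the Data
# Tracer visualises on its map.
def to_packages(data: bytes, size: int = 1500):
    """Split a byte string into (sequence_number, chunk) pairs,
    roughly mimicking how a file is broken into packets before
    transmission; the sequence numbers allow reassembly in order."""
    return [
        (seq, data[i:i + size])
        for seq, i in enumerate(range(0, len(data), size))
    ]

# A ~4 KB mock image splits into three packages; joining the chunks
# back together recovers the original data exactly.
image = b"\x89PNG" + bytes(4000)
packages = to_packages(image, size=1500)
print(len(packages))  # -> 3
```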

Having experimented with the exhibit for a while by tapping on two or three of the thumbnails, I noticed other visitors waiting behind me and moved on to the next experiment, the Sketchbots, where robots draw, in sand, the faces of physical visitors in the gallery and of online visitors, captured by a webcam.

Only a few people stopped for longer than a minute or so at the robots, and they often moved on when noticing that at the next lot of robots they could have their own faces, or those of their children, drawn.

http://www.youtube.com/watch?v=CkzXSZnDs1E&feature=player_embedded

The process fascinates people. Having taken a picture, they observe the robot at work and their image appearing. They take pictures on their cameras or film the process with their mobile phones, commenting on the delicate strokes the machine makes in the sand. People also exploit the possibility of taking pictures of others as a means to engage their (small) children with the exhibit, who otherwise might not stay with the experiment for long. They lift children up in front of the camera, take a picture of their face and then show them that the robot is drawing that picture of their face in the sand; the activity keeps the children engaged with the exhibit for a considerable time.

From the robots, my visitor journey took me to the Teleporter, an exhibit that uses periscopes connected to the web to look at locations around the world pre-determined by the designers. For somebody on their own, the use of the periscope can feel a bit strange, as you pull the system in front of your eyes and lose awareness of what is happening around you.

Looking through the periscope, I saw the inside of an aquarium located in Cape Town and could turn around to get a 360-degree view of the space. On occasion, I pressed a button at the top of the periscope to take a photograph that, with the help of the Lab Tag, was saved to my account. As I discovered when leaving the periscope, my picture was displayed on small digital photo frames on the wall behind the exhibit, together with those taken by others. The picture bears a time-stamp and can be discussed with others who had no access to what I was looking at while using the system.

One of the potentially most exciting exhibits is the Universal Orchestra, a robotic orchestra made up of eight instruments simultaneously operated by people in the gallery and on the Internet. The instruments are located in the centre of the gallery, each equipped with a computer system that people can use to create sounds. You touch different notes on the screen, and the information is fed to the robot, which then creates the sound.

Arriving here helped explain the soundscape I had been hearing on entering the gallery. As with some of the other exhibits, I was at first a secondary user, experiencing how to use the systems and what they do before I gained access to one of the instruments. The interaction with the system kept me busy for a while, as I tried to figure out how my actions on the computer screen related to the sounds made by the instruments. Also, the exhibit is described as a "real-time collaboration with people across the world", but because it is difficult to make out who creates which sound, the use of the notion of "collaboration" to describe the events is problematic.

http://www.youtube.com/watch?v=jCXX02dFbIM&feature=player_embedded

Finally, I went to a workstation where the Lab Tag is used to retrieve information about the activities a visitor has engaged with during their visit to the Weblab. The Lab Tag is slotted into the system, and the computer screen shows which exhibits the visitor has been to and what they have accomplished there; for example, the photograph taken with the periscope or the sounds produced as part of the Universal Orchestra can be revisited. Seeing on the screen what I had done, and what I had missed doing, encouraged me to return to the gallery and conduct some further experiments with the Universal Orchestra before leaving the exhibition.

Having arrived back home, I booted up my computer to visit the online Chrome Weblab. I typed in the web address given on the back of the Lab Tag, scanned in the tag and immediately arrived at my Lab Report. The site shows my activities in the gallery at the Science Museum and allowed me to conduct the same experiments online. When opening, for example, the Online Sketchbot, a page opens that shows live footage from the gallery before opening a screen that looks very similar to the one in the gallery. I took a picture of myself, which was then processed, ready for the robot to draw in the sand.

I then typed in my email address through which the system later notified me that the robot had completed its job.

The other exhibits work in a similar way. The Online Data Tracer invites visitors to ask the system to search for the physical location of an image file. I typed in my Twitter handle, and the system located the associated picture in Isenburg, a small city in the German federal state of Hesse. The Online Teleporter allows the user to click on an image and obtain a live view into the bakery in North Carolina, the miniature exhibition in Hamburg and the aquarium in Cape Town. And the Online Universal Orchestra facilitates access to the eight instruments; one can view events in the gallery and play the instruments there from a remote location, audible to visitors in the museum and remotely. The music played can be recorded and is then, like the activities at the other exhibits, retrievable from the Online Lab Tag Explorer.

Chrome Weblab is a fascinating experiment of an exhibition. It tries to make intelligible that the Internet connects remote locations on the planet, and that this connectedness involves much more than access to information through search engines and web browsers: it also allows people to act and interact with machines and other people across the world in real time.

The exhibition invites visitors to engage and participate with exhibits, in the gallery and remotely, and to discover for themselves the relationship between the Internet and the social world. It is successful in engaging people with the topic of the Internet for a considerable time and creates an awareness of the connected world we are now living in: robots can be operated remotely, people in remote locations can "collaboratively" make music, and we can have a peek into the world of others from remote locations.

Over the past 10 years or so, I have had the opportunity to study visitors participating with technology in museums, including the Science Museum and the Wellcome Wing. For me, therefore, visiting Chrome Weblab was interesting also to see how features of exhibits in Who am I? and Digitopolis have been further developed by the design team of Chrome Weblab. For example, the replacement of the flaky fingerprinting mechanism for saving visitors' activities with exhibits on a server by the physical Lab Tag is a huge improvement. The tag works well and without problems with webcams at home (and at work), and it is also a nice memento of the visit. However, I could imagine that in the future the Lab Tag will be transferred to a mobile phone, as people tend to lose or forget about items they take away from visits to museums. Also, the taking of photographs of people's faces, which has been a critical feature of exhibits in Who am I?, has been improved. The interface is much more flexible and adaptable in using the pictures visitors take.

There are three aspects of the exhibition that I believe might be worthwhile for the design team and Google to explore further when revising the galleries. First, I think the key message of Weblab, i.e. the interconnectedness, is not coming through clearly enough. The relationship between people's actions in the gallery and remotely needs to be made more intelligible and obvious. For example, at the moment it is unclear who plays which note at the instruments of the Universal Orchestra; at the Data Tracer, the actions on the small screens could be made visible; and at the Sketchbots, more needs to be done to make the activities of the remote participants visible, to give this part of the exhibit more prominence in the gallery.

Second, as the gallery is described as a laboratory, the design team and their research staff might use it not only as a laboratory to experiment with technology but also as a space where they can experiment with human behaviour in technology-rich spaces. For example, it has been a common problem for museums that display a large number of computer-based exhibits that the number of interfaces is often much lower than the number of visitors who wish to engage with the exhibits at any one time. This leads to long waiting times and queues at exhibits, to people being secondary users rather than experiencing exhibits first-hand, and unfortunately also to people leaving disappointed because they did not get a chance to use an exhibit first-hand. Being set up as an experimental space, the gallery would allow the design team to experiment with different ways to manage the flow in the galleries and to manage access to exhibits.

And third, and maybe most importantly, considering that many visitors come with friends and family, the design team could use the space to experiment with the provision of resources that facilitate and encourage collaboration at computer-based exhibits. The observations at the Sketchbots, where parents provide their children with access to the exhibit, illustrate that visitors are interested in experiencing the exhibits together, yet the interfaces often prioritise individual users over collaboration. It would be fascinating to see experiments with novel interfaces that encourage visitors to collaborate with others in the gallery, and also with people in remote locations.

References

Heath, C., & vom Lehn, D. (2008). Configuring interactivity: Enhancing engagement in science centres and museums. Social Studies of Science, 38(1), 63–91.

Heath, C., & vom Lehn, D. (2004). Configuring reception: (Dis-)Regarding the "spectator" in museums and galleries. Theory, Culture & Society, 21(6), 43–65.

Heath, C., Luff, P., vom Lehn, D., Hindmarsh, J., & Cleverly, J. (2002). Crafting participation: Designing ecologies, configuring experience. Visual Communication, 1(1), 9–33.

Hindmarsh, J., Heath, C., vom Lehn, D., & Cleverly, J. (2005). Creating assemblies in public environments: Social interaction, interactive exhibits and CSCW. Journal of Computer Supported Cooperative Work (JCSCW), 14(1), 1–41.

vom Lehn, D., Hindmarsh, J., Luff, P., & Heath, C. (2007). Engaging Constable: Revealing art with new technology. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1485–1494). San Jose, CA: ACM Press.

vom Lehn, D. (2010). Generating experience from ordinary activity: New technology and the museum experience. In D. O'Reilly & F. Kerrigan (Eds.), Marketing the Arts: A Fresh Approach (pp. 104–120). Abingdon: Routledge.

vom Lehn, D., & Heath, C. (2005). Accounting for new technology in museum exhibitions. International Journal of Arts Management, 7(6), 11–21.

vom Lehn, D., Heath, C., & Hindmarsh, J. (2001). Exhibiting interaction: Conduct and collaboration in museums and galleries. Symbolic Interaction, 24(2), 189–216.

@dirkvl

http://www.vom-lehn.net

 

interaction, interactivity, museums

Human-robot interaction at the Computer Lab in Cambridge – visiting Laurel Riek

interaction, Robots, Technology

In Star Trek: The Next Generation, the android Data is on a constant search for techniques that would make him more human. His creator, Dr Soong, has made him look human, if a little pale, but which particular techniques and which particular rationales of action would make him human he has to explore and find out by living with human beings.

Yesterday, I spent some time at the Computer Laboratory in Cambridge, where a group of scientists conducts research with human-looking robots. I was invited by Dr Laurel Riek – congratulations, Laurel, on passing the viva earlier in the week! – to give a short talk and then have a look at the humanoid robots she has been working with over the past few years.

The robots are realistic-looking busts equipped with a complex system of motors underneath their skulls. They have been created by a US company called Hanson Robotics.

Laurel used Charles and other robots of a similar kind for her research on natural human-robot interaction. Drawing on the growing body of studies concerned with social interaction, including gesture studies, the study of emotion and the like, she strives to improve the communication techniques of robots in order to enable their use in interaction with humans, in particular people in need of help, such as elderly and disabled people.

Whilst in Star Trek Data discovers the human world by interacting within it, I found in my short encounter with Charles that human-robot interaction may provide us with resources to learn about ourselves and our actions. I think this is something Laurel is working towards when confronting people in healthcare settings with humanoid robots. In doing so, Laurel addresses current debates about how to improve the lives of those living alone or in care homes by deploying robots as companions, or at least as other beings they can talk to and interact with.

Publications by Laurel Riek can be found here:

http://www.laurelriek.org/

I found her paper "Cooperative Gestures: Effective Signaling for Humanoid Robots" very interesting, but the papers on emotional displays in human-android interaction are, I suppose, where Laurel's interest lies these days.