At the entrance to the exhibit is a turtle from the Galapagos Islands, a seminal object in the development of evolutionary theory. The turtle rests in its cage, utterly still. "They could have used a robot," comments my daughter. It was a shame to bring the turtle all this way and put it in a cage for a performance that draws so little on the turtle's "aliveness." I am startled by her comments, both solicitous of the imprisoned turtle because it is alive and unconcerned by its authenticity. The museum has been advertising these turtles as wonders, curiosities, marvels — among the plastic models of life at the museum, here is the life that Darwin saw. I begin to talk with others at the exhibit, parents and children. It is Thanksgiving weekend. The line is long, the crowd frozen in place. My question, "Do you care that the turtle is alive?" is a welcome diversion. A ten-year-old girl would prefer a robot turtle because aliveness comes with aesthetic inconvenience: "Its water looks dirty. Gross." More usually, the votes for the robots echo my daughter's sentiment that in this setting, aliveness doesn't seem worth the trouble. A twelve-year-old girl opines: "For what the turtles do, you didn't have to have the live ones." Her father looks at her, uncomprehending: "But the point is that they are real, that's the whole point." … "If you put in a robot instead of the live turtle, do you think people should be told that the turtle is not alive?" I ask. Not really, say several of the children. Data on "aliveness" can be shared on a "need to know" basis, for a purpose. But what are the purposes of living things? When do we need to know if something is alive?
Sherry Turkle — Edge, 2006: What is Your Dangerous Idea?
Recent years have seen the development of computational entities (I call them relational artifacts; some of them are software agents and some of them are robots) that present themselves as having states of mind that are affected by their interactions with human beings. These are objects designed to impress not so much through their 'smarts' as through their sociability, their capacity to draw people into cyberintimacy. This presentation comments on their emerging role in our psychological, spiritual and moral lives. They are poised to be the new 'uncanny' in the culture of computing — something known of old and long familiar, yet become strangely unfamiliar. As uncanny objects, they are evocative. They compel us to ask such questions as, 'What kinds of relationships are appropriate to have with machines?' And more generally, 'What is a relationship?'
This was a sceptical talk in the best sense, questioning the cyberpresent and the imminent cyberfuture ('this is very difficult for me — I'm not a Luddite'). The broad thrust of the talk was born of a desire to 'put robots in their place': the debate about machines and AI was once a debate about the machines; now, Professor Turkle believes, the debate is increasingly about our vulnerabilities. Something new is happening in human culture, for robots are not (simply) a kind of doll onto which we project feelings but are produced with "embedded psychology": they appear to be attentive, they look us in the eye, they gesture at us. Human beings are very cheap dates: we ascribe intentionality very quickly. Consequently, we are engaging with these robots, not (just) projecting feelings onto them.
She calls this change in culture the 'robotic moment'. Our encounter with robots crystallises how the larger world of digital technology is affecting our sense of self, our habits of mind. (In turn, software, virtual worlds and devices are preparing us, at times through nothing more than superficiality, for a life with robots.) The earlier, romantic ('essentialist') reaction to the coming of robots ("Why should I talk to a computer about my problems? How can I talk about sibling rivalry to a machine that doesn't have a mother? How could a machine possibly understand?" — 1999 interview) no longer holds sway. Now, she says, 'I hear that humans are faking it and robots are more honest.'
When we're thinking about robots, then, we're thinking about how we conceptualise the self. Narcissism and pragmatism combine: self-objects, in perfect tune with our fragile selves, confirm our sense of who we are. If you have trouble with intimacies, cyberintimacies are useful because they are at the same time cybersolitudes.
Consider the elderly — this is Sherry Turkle writing in Forbes earlier this year:
Twenty-five years ago the Japanese realized that demography was working against them and there would never be enough young people to take care of their aging population. Instead of having foreigners take care of their elderly, they decided to build robots and put them in nursing homes. Doctors and nurses like them; so do family members of the elderly, because it is easier to leave your mom playing with a robot than to leave her staring at a wall or a TV. Very often the elderly like them, I think, mostly because they sense there are no other options. Said one woman about Aibo, Sony's household-entertainment robot, "It is better than a real dog. … It won't do dangerous things, and it won't betray you. … Also, it won't die suddenly and make you feel very sad."
Consider, alternatively, the paralysed man who said that robots can be kinder than nurses but went on to say that even an unpleasant nurse has a story — and 'I can find out about that story'.
For me, the best part of her OII/Saïd talk was her listing of the five points she considers key (also in the Forbes article, 'five troubles that try my tethered soul'). From my notes:
- Is anybody listening? What people mostly want from their public space is to be alone with their personal networks, to stay tethered to the objects that confirm their sense of self.
- We are losing the time to take our time. We're learning to see ourselves as cyborgs, at one with our devices.
- Does speed-dialing bring new dependencies? Children are given mobiles by their parents but the deal is that they then must answer their parents' calls. Tethered children feel different about themselves.
- The political consequences of online/virtual life — an acceptance of surveillance, loss of privacy, etc. People become socialised into accepting surveillance as affirmation rather than intrusion.
- Do we know the purpose of living things? Authenticity is to us what sex was to the Victorians, threat and obsession, taboo and fascination. "Data on aliveness can be shared on a need-to-know basis."
(On tethering, this from a piece in the New Scientist, 20 September 2006, is helpful: 'Our new intimacies with our machines create a world where it makes sense to speak of a new state of the self. When someone says "I am on my cell", "online", "on instant messaging" or "on the web", these phrases suggest a new placement of the subject, a subject wired into social existence through technology, a tethered self. I think of tethering as the way we connect to always-on communication devices and to the people and things we reach through them.')
There were good questions from (in particular) Steve Woolgar: just how new is this robotic "threat" (think of the eighteenth-century panic about mechanical puppets), and what of our ability to adapt successfully to "new" challenges ('we learn new repertoires and relate differently to different kinds of "robots"')? I was also glad that someone mentioned E M Forster's 'The Machine Stops' (1909). Other than that, there was insufficient time for discussion. This was disappointing, and so, too, was the caricature of hackers (a 'group of people for whom the computer is the best they can do') from one section of the audience (complete with careless remarks about autism).
Much food for thought, but I came away wishing we could have talked for much longer. I note amongst the students I teach the emergence of good questions about digital technology and a well-established desire to do more with the tools it gives them than sustain a narrow, narcissistic self. Many of them are, of course, using the web in inspiring ways, and the ingenuity of the young in escaping from being tethered (to parents, to authority) is not in doubt.
I want to give the floor to Sherry Turkle and link to other material of hers that I've found useful in thinking about this talk. In the course of a review of her Life on the Screen: Identity in the Age of the Internet, Howard Rheingold fired three questions at Sherry Turkle (I think this is all from 1997). Here are excerpts from her replies:
As human beings become increasingly intertwined with the technology and with each other via the technology, old distinctions about what is specifically human and specifically technological become more complex. Are we living life on the screen or in the screen? Our new technologically enmeshed relationships oblige us to ask to what extent we ourselves have become cyborgs, transgressive mixtures of biology, technology, and code. The traditional distance between people and machines has become harder to maintain. … The computer is an evocative object that causes old boundaries to be renegotiated. (Mind to Mind)
We have grown accustomed to thinking of our minds in unitary images. Even those psychodynamic theories that stress that within us there are unconscious as well as conscious aspects, have tended to develop ways of describing the final, functioning "self" in which it acts "as if" it were one. I believe that the experience of cyberspace, the experience of playing selves in various cyber-contexts, perhaps even at the same time, on multiple windows, is a concretization of another way of thinking about the self, not as unitary but as multiple. In this view, we move among various self states, various aspects of self. Our sense of one self is a kind of illusion . . . one that we are able to sustain because we have learned to move fluidly among the self states. What good parenting provides is a relational field in which we become increasingly expert at transitions between self states. Psychological health is not tantamount to achieving a state of oneness, but the ability to make transitions among the many and to reflect on our-selves by standing in a space between states. Life on the screen provides a new context for this psychological practice. One has a new context for negotiating the transitions. One has a new space to stand on for commenting on the complexities and contradictions among the selves. So, experiences in cyberspace encourage us to discover and find a new way to talk about the self as multiple and about psychological health not in terms of constructing a one but of negotiating a many. (Mind to Mind)
At one level, the computer is a tool. It helps us write, keep track of our accounts, and communicate with others. Beyond this, the computer offers us both new models of mind and a new medium on which to project our ideas and fantasies. Most recently, the computer has become even more than tool and mirror: We are able to step through the looking glass. We are learning to live in virtual worlds. We may find ourselves alone as we navigate virtual oceans, unravel virtual mysteries, and engineer virtual skyscrapers. But increasingly, when we step through the looking glass, other people are there as well. In the story of constructing identity in the culture of simulation, experiences on the Internet figure prominently, but these experiences can only be understood as part of a larger cultural context. That context is the story of the eroding boundaries between the real and the virtual, the animate and the inanimate, the unitary and the multiple self which is occurring in both advanced scientific fields of research and the patterns of everyday life. From scientists trying to create artificial life to children "morphing" through a series of virtual personae, we shall see evidence of fundamental shifts in the way we create and experience human identity. But it is on the Internet that our confrontations with technology as it collides with our sense of human identity are fresh, even raw. In the real-time communities of cyberspace, we are dwellers on the threshold between the real and virtual, unsure of our footing, inventing ourselves as we go along. As players participate, they become authors not only of text but of themselves, constructing new selves through social interaction. (Mind to Mind)
And this is from the Edge piece (2006) quoted at the start:
Do plans to provide relational robots to attend to children and the elderly make us less likely to look for other solutions for their care? People come to feel love for their robots, but if our experience with relational artifacts is based on a fundamentally deceitful interchange, can it be good for us? Or might it be good for us in the "feel good" sense, but bad for us in our lives as moral beings? Relationships with robots bring us back to Darwin and his dangerous idea: the challenge to human uniqueness. When we see children and the elderly exchanging tendernesses with robotic pets the most important question is not whether children will love their robotic pets more than their real life pets or even their parents, but rather, what will loving come to mean?
Also worth looking up: the MIT Initiative on Technology and Self and Evocative Objects, a new book edited by Sherry Turkle. The talk was filmed, so I assume there'll be a webcast and that it will appear here.