History of Ideas

The enchanted loom

Symphony of Science has recently posted ‘Ode to the Brain!’:

 

‘Ode to the Brain’ is the ninth episode in the Symphony of Science music video series. Through the powerful words of scientists Carl Sagan, Robert Winston, Vilayanur Ramachandran, Jill Bolte Taylor, Bill Nye, and Oliver Sacks, it covers different aspects [of] the brain including its evolution, neuron networks, folding, and more. The material sampled for this video comes from Carl Sagan’s Cosmos, Jill Bolte Taylor’s TED Talk, Vilayanur Ramachandran’s TED Talk, Bill Nye’s Brain episode, BBC’s ‘The Human Body’, Oliver Sacks’ TED Talk, Discovery Channel’s ‘Human Body: Pushing the Limits’, and more.

Carl Sagan:

What we know is encoded in cells called neurons
And there are something like a hundred trillion neural connections
This intricate and marvelous network of neurons has been called
An enchanted loom

Wikipedia — Enchanted Loom:

The enchanted loom is a famous metaphor for the brain invented by the pioneering neuroscientist Charles S. Sherrington in a passage from his 1942 book Man on his nature, in which he poetically describes his conception of what happens in the cerebral cortex during arousal from sleep:

The great topmost sheet of the mass, that where hardly a light had twinkled or moved, becomes now a sparkling field of rhythmic flashing points with trains of traveling sparks hurrying hither and thither. The brain is waking and with it the mind is returning. It is as if the Milky Way entered upon some cosmic dance. Swiftly the head mass becomes an enchanted loom where millions of flashing shuttles weave a dissolving pattern, always a meaningful pattern though never an abiding one; a shifting harmony of subpatterns.

The “loom” he refers to was undoubtedly meant to be a Jacquard loom, used for weaving fabric into complex patterns. The Jacquard loom, invented in 1801, was the most complex mechanical device of the 19th century. It was controlled by a punch card system that was a forerunner of the system used in computers until the 1970s. With as many as thousands of independently movable shuttles, a Jacquard loom in operation must have appeared very impressive. If Sherrington had written a decade later, however, he might perhaps have chosen the flashing lights on the front panel of a computer as his metaphor instead.

According to the neuroscience historian Stanley Finger, Sherrington probably borrowed the loom metaphor from an earlier writer, the psychologist Frederic Myers, who asked his readers to “picture the human brain as a vast manufactory, in which thousands of looms, of complex and differing patterns, are habitually at work”. Perhaps in part because of its slightly cryptic nature, the “enchanted loom” has been an attractive metaphor for many writers about the brain …

Oliver Sacks:

We see with the eyes
But we see with the brain as well
And seeing with the brain
Is often called imagination

‘Whole orchestras play inside our heads’ (Sagan).


Auden: aspects of our present Weltanschauung

Looking for something in Auden, I hit another passage, about human nature, art, tradition and originality (below), that I couldn’t put my finger on when I last needed it a few months ago. We’re edging towards the World Brain, but it can’t come fast enough:

It seems possible that in the near future, we shall have microscopic libraries of record, in which a photograph of every important book and document in the world will be stowed away and made easily available for the inspection of the student…. The general public has still to realize how much has been done in this field and how many competent and disinterested men and women are giving themselves to this task. The time is close at hand when any student, in any part of the world, will be able to sit with his projector in his own study at his or her convenience to examine any book, any document, in an exact replica. — H G Wells, ‘The Brain Organization of the Modern World’ (1937)

Auden. I’ve often referred to this passage and am very happy to make it ready to hand by pinning it here:

3) The loss of belief in a norm of human nature which will always require the same kind of man-fabricated world to be at home in. … until recently, men knew and cared little about cultures far removed from their own in time or space; by human nature, they meant the kind of behaviour exhibited in their own culture. Anthropology and archaeology have destroyed this provincial notion: we know that human nature is so plastic that it can exhibit varieties of behaviour which, in the animal kingdom, could only be exhibited by different species.

The artist, therefore, no longer has any assurance, when he makes something, that even the next generation will find it enjoyable or comprehensible.

He cannot help desiring an immediate success, with all the danger to his integrity which that implies.

Further, the fact that we now have at our disposal the arts of all ages and cultures, has completely changed the meaning of the word tradition. It no longer means a way of working handed down from one generation to the next; a sense of tradition now means a consciousness of the whole of the past as present, yet at the same time as a structured whole the parts of which are related in terms of before and after. Originality no longer means a slight modification in the style of one’s immediate predecessors; it means a capacity to find in any work of any date or place a clue to finding one’s authentic voice. The burden of choice and selection is put squarely upon the shoulders of each individual poet and it is a heavy one.

It’s from ‘The Poet and The City’, which I think appeared first in the Massachusetts Review in 1962 and was then included in The Dyer’s Hand (1963). Lots in this essay. ‘There are four aspects of our present Weltanschauung which have made an artistic vocation more difficult than it used to be.’ The others:

1) The loss of belief in the eternity of the physical universe. … Physics, geology and biology have now replaced this everlasting universe with a picture of nature as a process in which nothing is now what it was or what it will be.

We live now among ‘sketches and improvisations’.

2) The loss of belief in the significance and reality of sensory phenomena. … science has destroyed our faith in the naive observation of our senses: we cannot … ever know what the physical universe is really like; we can only hold whatever subjective notion is appropriate to the particular purpose we have in view. This destroys the traditional conception of art as mimesis …

4) The disappearance of the Public Realm as the sphere of revelatory personal deeds. To the Greeks the Private Realm was the sphere of life ruled by the necessity of sustaining life, and the Public Realm the sphere of freedom where a man could disclose himself to others. Today, the significance of the terms private and public has been reversed; public life is the necessary impersonal life, the place where a man fulfils his social function, and it is in his private life that he is free to be his personal self.


Alchemical futures

Sometimes, I’m lucky enough to have the chance to teach a short course about science fiction to a group of 17-year-olds. I’m always intrigued to find out what ‘science fiction’ means to them. This week, kicking off, one lad went straight for super-powers. As it happens, I’ve never had this answer before, but what made me take note was how well he explained what he meant, quickly but thoughtfully: science fiction giving us access to other possible worlds, possible futures … what if … maybe … perhaps … one day … then I could … dream that … build that … I should add, he was the same student who homed in on science fiction and dystopian futures, so he wasn’t sitting there being idly optimistic.

I went through a phase in my teens of reading lots of Jung and, a little later, Freud, considering medicine, and psychiatry or psychoanalysis, as a possible future. I still have many of the books I bought then. Jung led me off on curious paths. Alchemy was in there, of course, and has endured as an interest — morphing along the way. I went off certain Jungians at some deep level after a conference (held in Windsor Great Park!), which struck my 18-year-old self as pretty bonkers and anti-science, and I used to get my Jungian books from a very odd bookshop in the middle of nowhere (deep, rural Gloucestershire) which the friends I persuaded to come along (or give me a lift there) ended up calling ‘the magic bookshop’. New Age, though we didn’t know it.

But alchemy’s never gone away. It couldn’t, could it? I loved that Royal Institution talk I went to back in 2006, ‘Alchemy, the occult beginnings of science: Paracelsus, John Dee and Isaac Newton’. The dream of a very special super-power, transforming both matter (world) and self.

Alchemy, originally derived from the Ancient Greek word khemia (Χημεία in Modern Greek) meaning "art of transmuting metals", later arabicized as the Arabic word al-kimia (الكيمياء, ALA-LC: al-kīmiyā’), is both a philosophy and an ancient practice focused on the attempt to change base metals into gold, investigating the preparation of the “elixir of longevity”, and achieving ultimate wisdom, involving the improvement of the alchemist as well as the making of several substances described as possessing unusual properties. The practical aspect of alchemy can be viewed as a protoscience, having generated the basics of modern inorganic chemistry, namely concerning procedures, equipment and the identification and use of many current substances.

Alchemy has been practiced in ancient Egypt, Mesopotamia (modern Iraq), India, Persia (modern Iran), China, Japan, Korea, the classical Greco-Roman world, the medieval Islamic world, and then medieval Europe up to the 20th and 21st centuries, in a complex network of schools and philosophical systems spanning at least 2,500 years. — Wikipedia

And given a background in zoology and theology, I’ve not been able to get this out of my head since stumbling across it the other week:

Once, he called himself a “biologian”, merging the subject matter of life with the method of a theologian. More recently, he told me that he is an alchemist. In Defense of the Memory Theater

Isn’t that great? What a way to think of what you’re engaged on. The work.

It is, by the way, well worth reading all of Nathan Schneider’s post about his uncle, the “alchemist”:

The most remarkable memory theater I’ve ever known is on a computer. It is the work of my uncle, once a biologist at the National Institutes of Health, a designer of fish farms, a nonprofit idealist, and a carpenter. Now he has devoted himself full-time to his theater … [a] single, searchable, integrated organism. When he tells me about it, he uses evolutionary metaphors cribbed from his years researching genetics. The creature mutates and adapts. It learns and grows.


We are all Bayesians now

Intent on not being late for an evening session at Tinker.it! last week, I dropped by Bunhill Fields for too short a time, the light beginning to fail and a hurriedly printed-off, crumpled map for a guide.


Easy to find the memorials to Blake and his wife and Defoe. But I was on a quest for Thomas Bayes:

Bayes, Thomas (b. 1702, London - d. 1761, Tunbridge Wells, Kent), mathematician who first used probability inductively and established a mathematical basis for probability inference (a means of calculating, from the number of times an event has not occurred, the probability that it will occur in future trials). He set down his findings on probability in "Essay Towards Solving a Problem in the Doctrine of Chances" (1763), published posthumously in the Philosophical Transactions of the Royal Society of London.

It took me too long to find his resting place, railed off and not in a great state of repair, and my rushed photos weren’t worth posting, but here’s one from the ISBA site (taken by Professor Tony O'Hagan of Sheffield University and seemingly not under copyright):


The famous essay is online (PDF).

I need to spend more time in and around Bunhill Fields, but what prompted me to try to take it in as I sped across London was reading in Chris Frith’s book, Making up the Mind, how important Bayes is to neuroscience:

… is it possible to measure prior beliefs and changes in beliefs? … The importance of Bayes’ theorem is that it provides a very precise measure of how much a new piece of evidence should make us change our ideas about the world. Bayes’ theorem provides a yardstick by which we can judge whether we are using new evidence appropriately. This leads to the concept of the ideal Bayesian observer: a mythical being who always uses evidence in the best possible way. … Our brains are ideal observers when making use of the evidence from our senses. For example, one problem our brain has to solve is how to combine evidence from our different senses. … When combining this evidence, our brain behaves just like an ideal Bayesian observer. Weak evidence is ignored; strong evidence is emphasised. … But there is another aspect of Bayes’ theorem that is even more important for our understanding of how the brain works. … on the basis of its belief about the world, my brain can predict the pattern of activity that should be detected by my eyes, ears and other senses … So what happens if there is an error in this prediction? These errors are very important because my brain can use them to update its belief about the world and create a better belief … Once this update has occurred, my brain has a new belief about the world and it can repeat the process. It makes another prediction about the patterns of activity that should be detected by my senses. Each time my brain goes round this loop the prediction error will get smaller. Once the error is sufficiently small, my brain “knows” what is out there. And this all happens so rapidly that I have no awareness of this complex process. … my brain never rests from this endless round of prediction and updating.

… our brain is a Bayesian machine that discovers what is in the world by making predictions and searching for the causes of sensations.
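
Frith's 'ideal Bayesian observer' is, at bottom, Bayes' theorem applied over and over: a prior belief is combined with the likelihood of each new piece of sensory evidence to give a posterior, which then serves as the prior for the next observation. Here is a minimal Python sketch of that loop; the hypothesis, likelihood values and observations are invented purely for illustration, not taken from Frith.

```python
# Minimal sketch of iterative Bayesian updating, in the spirit of Frith's
# "ideal observer". The hypothesis, likelihoods and observations below are
# invented purely for illustration.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior P(hypothesis | evidence) via Bayes' theorem."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Hypothesis: "the object I'm looking at is a cat".
belief = 0.5  # prior: no idea either way

# Each observation: (P(observation | cat), P(observation | not cat)).
observations = [
    (0.8, 0.3),    # strong evidence -> big update
    (0.55, 0.45),  # weak evidence -> barely moves the belief
    (0.9, 0.1),    # very strong evidence -> belief jumps again
]

for p_true, p_false in observations:
    belief = update(belief, p_true, p_false)
    print(f"belief is now {belief:.3f}")
```

Run it and the numbers behave as Frith describes: the weak observation barely shifts the belief, the strong ones shift it a lot, and the brain's prediction-and-update loop can be thought of as this calculation run continuously.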


Ted Nelson @ St Paul's II

The Bush years have not been kind to those Americans living abroad and dependent on the dollar exchange rate. Out of necessity, then, Ted and Marlene, who first came to St Paul's in July 2007 (see here), are soon to return to the States — but, before leaving, they revisited St Paul's. Today, Ted spoke about his work, his current book-in-progress (Geeks Bearing Gifts) and Xanadu.

Farhan's blogged Ted's talk so well that there's little left for me to add. Thanks!

There were some lovely glimpses into Ted's childhood — a boy who loved reading and words and knew, by ten, who had coined tintinnabulate, chortle and serendipity; growing up in Greenwich Village without realising it, then reading about it and longing to see this Bohemian paradise; experiencing Mrs Roosevelt as a near neighbour. He was (as expected) both amusing and savage about the black hole which is the clipboard. His father had taught him that writing is mostly re-writing, and re-writing is mostly rearrangement — so why devise writing tools that are so bad they hide the very material you're cutting? (CTRL+C, CTRL+V: cram and vomit.) By the time he went to college, he'd written a lot by linking cut and pasted pieces of writing.

Graduate school in 1960 and a computing course saw him suddenly quite sure that personal computers would come and that his job was to design the documents of the future: make it possible to see the parts and compare the versions, to visualise the origins of quotations, to expose deep rearrangements. The word 'hypertext' was first used by Ted in 1963, but it was 1986 before it was used outside his immediate circle.

It was great to have Ted and Marlene here again. I was particularly pleased that a number of our 13/14-year-old students came along: Ted has been a name to them in their ICT course — and here he was.


Re-echoing that Mac/DOS piece

When I read Stephen Fry's first Saturday Guardian column (previous post), I took in the cross-reference to Umberto Eco's piece about the Mac/DOS:Catholic/Protestant parallelism but didn't follow it as I recalled having read it before. Then I saw friends bookmarking it and something made me check it out. What I recall reading (in October, 2005, it turns out — see Labyrinths and Internet) was something fuller — short of a full-length newspaper column but more than a clip.

I found it on the web in The Modern World, and I see from the same site's page of Eco's writings that it says of the Mac/DOS piece: 'This ubiquitous work has, by now, found its way all across the Internet'. So there we are. And here it is, again.

The Holy War: Mac vs. DOS
by Umberto Eco

The following excerpts are from an English translation of Umberto Eco's back-page column, La bustina di Minerva, in the Italian news weekly Espresso, September 30, 1994.

A French translation may be seen here.


Friends, Italians, countrymen, I ask that a Committee for Public Health be set up, whose task would be to censor (by violent means, if necessary) discussion of the following topics in the Italian press. Each censored topic is followed by an alternative in brackets which is just as futile, but rich with the potential for polemic. Whether Joyce is boring (whether reading Thomas Mann gives one erections). Whether Heidegger is responsible for the crisis of the Left (whether Ariosto provoked the revocation of the Edict of Nantes). Whether semiotics has blurred the difference between Walt Disney and Dante (whether De Agostini does the right thing in putting Vimercate and the Sahara in the same atlas). Whether Italy boycotted quantum physics (whether France plots against the subjunctive). Whether new technologies kill books and cinemas (whether zeppelins made bicycles redundant). Whether computers kill inspiration (whether fountain pens are Protestant).

One can continue with: whether Moses was anti-semitic; whether Leon Bloy liked Calasso; whether Rousseau was responsible for the atomic bomb; whether Homer approved of investments in Treasury stocks; whether the Sacred Heart is monarchist or republican.

I asked above whether fountain pens were Protestant. Insufficient consideration has been given to the new underground religious war which is modifying the modern world. It's an old idea of mine, but I find that whenever I tell people about it they immediately agree with me.

The fact is that the world is divided between users of the Macintosh computer and users of MS-DOS compatible computers. I am firmly of the opinion that the Macintosh is Catholic and that DOS is Protestant. Indeed, the Macintosh is counter-reformist and has been influenced by the ratio studiorum of the Jesuits. It is cheerful, friendly, conciliatory; it tells the faithful how they must proceed step by step to reach -- if not the kingdom of Heaven -- the moment in which their document is printed. It is catechistic: The essence of revelation is dealt with via simple formulae and sumptuous icons. Everyone has a right to salvation.

DOS is Protestant, or even Calvinistic. It allows free interpretation of scripture, demands difficult personal decisions, imposes a subtle hermeneutics upon the user, and takes for granted the idea that not all can achieve salvation. To make the system work you need to interpret the program yourself: Far away from the baroque community of revelers, the user is closed within the loneliness of his own inner torment.

You may object that, with the passage to Windows, the DOS universe has come to resemble more closely the counter-reformist tolerance of the Macintosh. It's true: Windows represents an Anglican-style schism, big ceremonies in the cathedral, but there is always the possibility of a return to DOS to change things in accordance with bizarre decisions: When it comes down to it, you can decide to ordain women and gays if you want to.

Naturally, the Catholicism and Protestantism of the two systems have nothing to do with the cultural and religious positions of their users. One may wonder whether, as time goes by, the use of one system rather than another leads to profound inner changes. Can you use DOS and be a Vande supporter? And more: Would Celine have written using Word, WordPerfect, or Wordstar? Would Descartes have programmed in Pascal?

And machine code, which lies beneath and decides the destiny of both systems (or environments, if you prefer)? Ah, that belongs to the Old Testament, and is talmudic and cabalistic. The Jewish lobby, as always. ...


Sherry Turkle: 'what will loving come to mean?'

At the entrance to the exhibit is a turtle from the Galapagos Islands, a seminal object in the development of evolutionary theory. The turtle rests in its cage, utterly still. "They could have used a robot," comments my daughter. It was a shame to bring the turtle all this way and put it in a cage for a performance that draws so little on the turtle's "aliveness." I am startled by her comments, both solicitous of the imprisoned turtle because it is alive and unconcerned by its authenticity. The museum has been advertising these turtles as wonders, curiosities, marvels — among the plastic models of life at the museum, here is the life that Darwin saw. I begin to talk with others at the exhibit, parents and children. It is Thanksgiving weekend. The line is long, the crowd frozen in place. My question, "Do you care that the turtle is alive?" is welcome diversion. A ten year old girl would prefer a robot turtle because aliveness comes with aesthetic inconvenience: "Its water looks dirty. Gross." More usually, the votes for the robots echo my daughter's sentiment that in this setting, aliveness doesn't seem worth the trouble. A twelve-year-old girl opines: "For what the turtles do, you didn't have to have the live ones." Her father looks at her, uncomprehending: "But the point is that they are real, that's the whole point." … "If you put in a robot instead of the live turtle, do you think people should be told that the turtle is not alive?" I ask. Not really, say several of the children. Data on "aliveness" can be shared on a "need to know" basis, for a purpose. But what are the purposes of living things? When do we need to know if something is alive? Sherry Turkle — Edge, 2006: What is Your Dangerous Idea?

Last Thursday evening I was at the Saïd Business School for an OII event, Sherry Turkle talking about Cyberintimacies/Cybersolitudes:

Recent years have seen the development of computational entities - I call them relational artifacts - some of them are software agents and some of them are robots - that present themselves as having states of mind that are affected by their interactions with human beings. These are objects designed to impress not so much through their 'smarts' as through their sociability, their capacity to draw people into cyberintimacy. This presentation comments on their emerging role in our psychological, spiritual and moral lives. They are poised to be the new 'uncanny' in the culture of computing - something known of old and long familiar - yet become strangely unfamiliar. As uncanny objects, they are evocative. They compel us to ask such questions as, 'What kinds of relationships are appropriate to have with machines?' And more generally, 'What is a relationship?'

This was a sceptical talk in the best sense, questioning the cyberpresent and the imminent cyberfuture ('this is very difficult for me — I'm not a Luddite'). The broad thrust of the talk was born of a desire to 'put robots in their place': the debate about machines and AI was once a debate about the machines; now, Professor Turkle believes, the debate is increasingly about our vulnerabilities. Something new is happening in human culture, for robots are not (simply) a kind of doll on to which we project feelings but are produced with "embedded psychology": they appear to be attentive, they look us in the eye, they gesture at us. Human beings are very cheap dates: we ascribe intentionality very quickly. Consequently, we are engaging with these robots, not (just) projecting feelings on to them.

She calls this change in culture the 'robotic moment'. Our encounter with robots crystallises how the larger world of digital technology is affecting our sense of self, our habits of mind. (In turn, software, virtual worlds and devices are preparing us, at times through nothing more than superficiality, for a life with robots.) The earlier, romantic ('essentialist') reaction to the coming of robots ("Why should I talk to a computer about my problems? How can I talk about sibling rivalry to a machine that doesn't have a mother? How could a machine possibly understand?" — 1999 interview) no longer holds sway. Now, she says, 'I hear that humans are faking it and robots are more honest.'

When we're thinking about robots we're thinking, then, about how we conceptualise the self. Narcissism and pragmatism combine, and self-objects perfectly in tune with our fragile selves confirm our sense of who we are. If you have trouble with intimacies, cyberintimacies are useful because they are at the same time cybersolitudes.

Consider the elderly — this is Sherry Turkle writing in Forbes earlier this year:

Twenty-five years ago the Japanese realized that demography was working against them and there would never be enough young people to take care of their aging population. Instead of having foreigners take care of their elderly, they decided to build robots and put them in nursing homes. Doctors and nurses like them; so do family members of the elderly, because it is easier to leave your mom playing with a robot than to leave her staring at a wall or a TV. Very often the elderly like them, I think, mostly because they sense there are no other options. Said one woman about Aibo, Sony's household-entertainment robot, "It is better than a real dog. … It won't do dangerous things, and it won't betray you. … Also, it won't die suddenly and make you feel very sad."

Consider, alternatively, the paralysed man who said that robots can be kinder than nurses but went on to say that even an unpleasant nurse has a story — and 'I can find out about that story'.

For me, the best part of her OII/Saïd talk was her listing of the five points she considers key (also in the Forbes article, 'five troubles that try my tethered soul'). From my notes:

  1. Is anybody listening? What people mostly want from their public space is to be alone with their personal networks, to stay tethered to the objects that confirm their sense of self.
  2. We are losing the time to take our time. We're learning to see ourselves as cyborgs, at one with our devices.
  3. Does speed-dialing bring new dependencies? Children are given mobiles by their parents but the deal is that they then must answer their parents' calls. Tethered children feel different about themselves.
  4. The political consequences of online/virtual life — an acceptance of surveillance, loss of privacy, etc. People learn to become socialised, accepting surveillance as affirmation rather than intrusion.
  5. Do we know the purpose of living things? Authenticity is to us what sex was to the Victorians, threat and obsession, taboo and fascination. "Data on aliveness can be shared on a need-to-know basis."

(On tethering, this from a piece in the New Scientist, 20 September, 2006, is helpful: 'Our new intimacies with our machines create a world where it makes sense to speak of a new state of the self. When someone says "I am on my cell", "online", "on instant messaging" or "on the web", these phrases suggest a new placement of the subject, a subject wired into social existence through technology, a tethered self. I think of tethering as the way we connect to always-on communication devices and to the people and things we reach through them.')

There were good questions from (in particular) Steve Woolgar: just how new is this robotic "threat" (think of the eighteenth-century panic about mechanical puppets) and what of our ability to adapt successfully to "new" challenges ('we learn new repertoires and relate differently to different kinds of "robots"')? I was also glad that someone mentioned E M Forster's 'The Machine Stops' (1909; Wikipedia — which links to online texts; the text can also be found here). Other than that, there was insufficient time for discussion. This was disappointing and so, too, was the caricature of hackers (a 'group of people for whom the computer is the best they can do') from one section of the audience (complete with careless remarks about autism).

Much food for thought, but I came away wishing we could have talked for much longer. I note amongst the students I teach the emergence of good questions about digital technology and a well-established desire to do more with the tools it gives them than sustain a narrow, narcissistic self. Many of them are, of course, using the web in inspiring ways, and the ingenuity of the young in escaping from being tethered (to parents, to authority) is not in doubt.

I want to give the floor to Sherry Turkle and link to other material of hers that I've found useful in thinking about this talk. In the course of a review of her Life on the Screen: Identity in the Age of the Internet, Howard Rheingold fired three questions at Sherry Turkle (I think this is all from 1997). Here are excerpts from her replies:

As human beings become increasingly intertwined with the technology and with each other via the technology, old distinctions about what is specifically human and specifically technological become more complex. Are we living life on the screen or in the screen? Our new technologically enmeshed relationships oblige us to ask to what extent we ourselves have become cyborgs, transgressive mixtures of biology, technology, and code. The traditional distance between people and machines has become harder to maintain. … The computer is an evocative object that causes old boundaries to be renegotiated. Mind to Mind

We have grown accustomed to thinking of our minds in unitary images. Even those psychodynamic theories that stress that within us there are unconscious as well as conscious aspects, have tended to develop ways of describing the final, functioning "self" in which it acts "as if" it were one. I believe that the experience of cyberspace, the experience of playing selves in various cyber-contexts, perhaps even at the same time, on multiple windows, is a concretization of another way of thinking about the self, not as unitary but as multiple. In this view, we move among various self states, various aspects of self. Our sense of one self is a kind of illusion . . . one that we are able to sustain because we have learned to move fluidly among the self states. What good parenting provides is a relational field in which we become increasingly expert at transitions between self states. Psychological health is not tantamount to achieving a state of oneness, but the ability to make transitions among the many and to reflect on our-selves by standing in a space between states. Life on the screen provides a new context for this psychological practice. One has a new context for negotiating the transitions. One has a new space to stand on for commenting on the complexities and contradictions among the selves. So, experiences in cyberspace encourage us to discover and find a new way to talk about the self as multiple and about psychological health not in terms of constructing a one but of negotiating a many. Mind to Mind 

At one level, the computer is a tool. It helps us write, keep track of our accounts, and communicate with others. Beyond this, the computer offers us both new models of mind and a new medium on which to project our ideas and fantasies. Most recently, the computer has become even more than tool and mirror: We are able to step through the looking glass. We are learning to live in virtual worlds. We may find ourselves alone as we navigate virtual oceans, unravel virtual mysteries, and engineer virtual skyscrapers. But increasingly, when we step through the looking glass, other people are there as well. In the story of constructing identity in the culture of simulation, experiences on the Internet figure prominently, but these experiences can only be understood as part of a larger cultural context. That context is the story of the eroding boundaries between the real and the virtual, the animate and the inanimate, the unitary and the multiple self which is occurring in both advanced scientific fields of research and the patterns of everyday life. From scientists trying to create artificial life to children "morphing" through a series of virtual personae, we shall see evidence of fundamental shifts in the way we create and experience human identity. But it is on the Internet that our confrontations with technology as it collides with our sense of human identity are fresh, even raw. In the real-time communities of cyberspace, we are dwellers on the threshold between the real and virtual, unsure of our footing, inventing ourselves as we go along. As players participate, they become authors not only of text but of themselves, constructing new selves through social interaction. Mind to Mind 

And this is from the Edge piece (2006) quoted at the start:

Do plans to provide relational robots to attend to children and the elderly make us less likely to look for other solutions for their care? People come to feel love for their robots, but if our experience with relational artifacts is based on a fundamentally deceitful interchange, can it be good for us? Or might it be good for us in the "feel good" sense, but bad for us in our lives as moral beings? Relationships with robots bring us back to Darwin and his dangerous idea: the challenge to human uniqueness. When we see children and the elderly exchanging tendernesses with robotic pets the most important question is not whether children will love their robotic pets more than their real life pets or even their parents, but rather, what will loving come to mean?

Also worth looking up: the MIT Initiative on Technology and Self and Evocative Objects, a new book edited by Sherry Turkle. The talk was filmed, so I assume there'll be a webcast and that it will appear here.


Dymaxion cubicle

A week ago today, I was in the Design Museum (enjoying the Zaha Hadid exhibition — a few photos here, though sadly I couldn't do her wonderful project paintings justice). A surprise to me was the Buckminster Fuller cubicle door drawing — in the Gents. I seemed to have the room to myself, so I took a couple of photos of the door (wondering what I'd say if someone came in or, worse by far, if the cubicle turned out not to be empty after all).


So that was that, and then Stowe noticed that Dopplr had used Buckminster Fuller's Dymaxion Map in their Dopplr 100 launch. From Wikipedia:

Unfolded Dymaxion map with nearly contiguous land masses (image: Wikipedia)

The Dymaxion map of the Earth is a projection of a global map onto the surface of a polyhedron, which can then be unfolded to a net in many different ways and flattened to form a two-dimensional map which retains most of the relative proportional integrity of the globe map. It was created by Buckminster Fuller, and patented by him in 1946, the patent application showing a projection onto a cuboctahedron. The 1954 version published by Fuller under the title The AirOcean World Map used a slightly modified but mostly regular icosahedron as the base for the projection, and this is the version most commonly referred to today. The name Dymaxion was applied by Fuller to several of his inventions.

Unlike most other projections, the Dymaxion is intended purely for representations of the entire globe. Each face of the polyhedron is a gnomonic projection, so zooming in on one such face renders the Dymaxion equivalent to such a projection.

Dymaxion map folded into an icosahedron

Fuller claimed his map had several advantages over other projections for world maps. It has less distortion of relative size of areas, most notably when compared to the Mercator projection; and less distortion of shapes of areas, notably when compared to the Gall-Peters projection. Other compromise projections attempt a similar trade-off.

More unusually, the Dymaxion map has no 'right way up'. Fuller frequently argued that in the universe there is no 'up' and 'down', or 'north' and 'south': only 'in' and 'out'. Gravitational forces of the stars and planets created 'in', meaning 'towards the gravitational center', and 'out', meaning 'away from the gravitational center'. He linked the north-up-superior/south-down-inferior presentation of most other world maps to cultural bias. Note that there are some other maps without north at the top.

There is no one 'correct' view of the Dymaxion map. Peeling the triangular faces of the icosahedron apart in one way results in an icosahedral net that shows an almost contiguous land mass comprising all of earth's continents - not groups of continents divided by oceans. Peeling the solid apart in a different way presents a view of the world dominated by connected oceans surrounded by land.
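
The detail that 'each face of the polyhedron is a gnomonic projection' is worth unpacking: every point is projected from the centre of the Earth onto a plane tangent to the sphere at the middle of that face, which is why great circles come out as straight lines on each face. A rough Python sketch of the standard gnomonic formulas follows; the tangent point and test point are arbitrary examples, not Fuller's actual icosahedron face centres.

```python
# Rough sketch of a gnomonic projection: project a lat/lon point onto a plane
# tangent to the sphere at (lat0, lon0), as used for each face of the
# Dymaxion map. The tangent point and test point below are arbitrary examples.
import math

def gnomonic(lat, lon, lat0, lon0):
    """Project (lat, lon) onto the plane tangent at (lat0, lon0); angles in degrees."""
    phi, lam = math.radians(lat), math.radians(lon)
    phi0, lam0 = math.radians(lat0), math.radians(lon0)
    cos_c = (math.sin(phi0) * math.sin(phi)
             + math.cos(phi0) * math.cos(phi) * math.cos(lam - lam0))
    if cos_c <= 0:
        raise ValueError("point lies on the far hemisphere and cannot be projected")
    x = math.cos(phi) * math.sin(lam - lam0) / cos_c
    y = (math.cos(phi0) * math.sin(phi)
         - math.sin(phi0) * math.cos(phi) * math.cos(lam - lam0)) / cos_c
    return x, y

# e.g. London projected onto a plane tangent at 45N, 0E (arbitrary choices)
print(gnomonic(51.5, -0.1, lat0=45.0, lon0=0.0))
```

Because the projection is made from the centre of the sphere, distortion grows rapidly away from the tangent point — one reason the Dymaxion map uses many small faces, each with its own tangent point, rather than a single projection for the whole globe.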

Which set me thinking: that Buckminster Fuller is someone we ought to be teaching in schools, of course (and I can see how we might start doing that easily enough — and soon), and about Dopplr and good design. For another very cool Dopplr ... er ... effect, if you've not seen their sparkline stack and read Matt's post about it, you really should.


Life, the web — all a tangle

The interview Tim Berners-Lee gave last year (IBM developerWorks) was widely reported. I blogged it, Web 2.0: 'what the Web was supposed to be all along', in August 2006, shortly after it was posted. What most struck me about it was expressed, pithily and succinctly, by Sir Tim in a remark about the web made on an earlier occasion — the MIT Technology Review Emerging Technologies conference in 2005, as reported by Andy Carvin in Tim Berners-Lee: Weaving a Semantic Web:

The original thing I wanted to do was to make it a collaborative medium, a place where we (could) all meet and read and write.

At the MIT conference, Sir Tim talked about Marc Andreessen and the emergence of a commercial web browser. In the IBM developerWorks interview he said of his web browser,

… the original World Wide Web browser of course was also an editor. … I really wanted it to be a collaborative authoring tool. And for some reason it didn't really take off that way.  And we could discuss for ages why it didn't. … I've always felt frustrated that most people don't...didn't have write access.

Just a couple of weeks ago, I came across a 1997 Time magazine piece about Tim Berners-Lee by Robert Wright, The Man Who Invented The Web:

Berners-Lee considers the Web an example of how early, random forces are amplified through time. "It was an accident of fate that all the first [commercially successful] programs were browsers and not editors," he says. To see how different things might have been, you have to watch him gleefully wield his original browser--a browser and editor--at his desk. He's working on one document and--flash--in a few user-friendly keystrokes, it is linked to another document. One document can be on his computer "desktop"--for his eyes only--another can be accessible to his colleagues or his family, and another can be public. A seamless neural connection between his brain and the social brain. … he is grateful that Andreessen co-authored a user-friendly browser and thus brought the Web to the public, even if in non-ideal form. Yet it can't have been easy watching Andreessen become the darling of the media after writing a third-generation browser that lacked basic editing capabilities.

Now the web is 'finally starting' to follow 'the technological lines he envisioned (… as software evolves)':

Berners-Lee, standing at a blackboard, draws a graph, as he's prone to do. It arrays social groups by size. Families, workplace groups, schools, towns, companies, the nation, the planet. The Web could in theory make things work smoothly at all of these levels, as well as between them. That, indeed, was the original idea--an organic expanse of collaboration. … "At the end of the day, it's up to us: how we actually react, and how we teach our children, and the values we instill." He points back to the graph. "I believe we and our children should be active at all points along this."

So, a fundamental deviation from Tim Berners-Lee's vision for the web occurred in the form taken by popular, commercially viable web browsers. In Dave Winer's view, this early deviation was heavily reinforced by Microsoft:

Since the re-rollout of Office in 1996, it's been really clear why Microsoft was so hell-bent at first owning and then suffocating the web browser, along with the web. … Because for them, writing was not something that would be done in a web browser, if they improved their browser as a writing tool, that would be the end of Word, and with it, a big reason for using Office. … If instead, Microsoft had embraced the web, and with it the shift in their product line and economics, in 1995, we'd have a much richer writing environment today. Blogging would have happened sooner, in a bigger way. It's hard to imagine how much the sins of Microsoft cost all of us.

What a tangled thing technology is (Berners-Lee — 'The Web is a tangle, your life is a tangle – get used to it'). I hope very much that Ted Nelson has brought Geeks Bearing Gifts nearer to publication. Meanwhile, with the 1997 Time piece, a little more of the road map of the web's evolution became clear to me. The read-only nature of the successful web browsers that came after Sir Tim's explains a great deal about how many an adult of a certain age perceives the web. I think of John Naughton's sketch of how today's 22-year-old conceives of the web, and of Andrew McAfee's comment,

Evidence is mounting that younger people don’t think of the Internet as a collection of content that other people produce for them to consume. Instead, they think about it as a dynamic, emergent, and peer-produced repository to which they’re eager to contribute.

And I think back to Bradley Horowitz's talk at the London March 2007 FOWA meeting, which I wrote about here — and Twittered at the time: 'from a hierarchy of creator(s)/synthesisers/consumers (1:10:100) towards a web world of participation (100)'.

If the history of the web browser had itself been different, would we have suffered the misalignment of perceptions about the essentially social, creative nature of the web ('I believe we and our children should be active at all points along this') that often exists now between today's different generations of users?

Well, and in any case, 'The Web is no longer the future of computing, computing is now about the Web' (Dare Obasanjo).

***

Footnote. Two other things from the Time piece (I pass them on as given there): ' … contrary to the mythology surrounding Netscape, it was he [Berners-Lee], not Andreessen, who wrote the first "graphical user interface" Web browser. (Nor was Andreessen's browser the first to feature pictures; but it was the first to put pictures and text in the same window, a key innovation.)'

Wikipedia's timeline of web browsers is available here.