
October 2007

Re-echoing that Mac/DOS piece

When I read Stephen Fry's first Saturday Guardian column (previous post), I took in the cross-reference to Umberto Eco's piece about the Mac/DOS:Catholic/Protestant parallelism but didn't follow it, as I recalled having read it before. Then I saw friends bookmarking it, and something made me check it out. What I recall reading (in October 2005, it turns out — see Labyrinths and Internet) was something fuller — short of a full-length newspaper column but more than a clip.

I found it on the web at The Modern World, whose page of Eco's writings says of the Mac/DOS piece: 'This ubiquitous work has, by now, found its way all across the Internet'. So there we are. And here it is, again.

The Holy War: Mac vs. DOS
by Umberto Eco

The following excerpts are from an English translation of Umberto Eco's back-page column, La bustina di Minerva, in the Italian news weekly Espresso, September 30, 1994.

A French translation may be seen here.


Friends, Italians, countrymen, I ask that a Committee for Public Health be set up, whose task would be to censor (by violent means, if necessary) discussion of the following topics in the Italian press. Each censored topic is followed by an alternative in brackets which is just as futile, but rich with the potential for polemic. Whether Joyce is boring (whether reading Thomas Mann gives one erections). Whether Heidegger is responsible for the crisis of the Left (whether Ariosto provoked the revocation of the Edict of Nantes). Whether semiotics has blurred the difference between Walt Disney and Dante (whether De Agostini does the right thing in putting Vimercate and the Sahara in the same atlas). Whether Italy boycotted quantum physics (whether France plots against the subjunctive). Whether new technologies kill books and cinemas (whether zeppelins made bicycles redundant). Whether computers kill inspiration (whether fountain pens are Protestant).

One can continue with: whether Moses was anti-semitic; whether Leon Bloy liked Calasso; whether Rousseau was responsible for the atomic bomb; whether Homer approved of investments in Treasury stocks; whether the Sacred Heart is monarchist or republican.

I asked above whether fountain pens were Protestant. Insufficient consideration has been given to the new underground religious war which is modifying the modern world. It's an old idea of mine, but I find that whenever I tell people about it they immediately agree with me.

The fact is that the world is divided between users of the Macintosh computer and users of MS-DOS compatible computers. I am firmly of the opinion that the Macintosh is Catholic and that DOS is Protestant. Indeed, the Macintosh is counter-reformist and has been influenced by the ratio studiorum of the Jesuits. It is cheerful, friendly, conciliatory; it tells the faithful how they must proceed step by step to reach -- if not the kingdom of Heaven -- the moment in which their document is printed. It is catechistic: The essence of revelation is dealt with via simple formulae and sumptuous icons. Everyone has a right to salvation.

DOS is Protestant, or even Calvinistic. It allows free interpretation of scripture, demands difficult personal decisions, imposes a subtle hermeneutics upon the user, and takes for granted the idea that not all can achieve salvation. To make the system work you need to interpret the program yourself: Far away from the baroque community of revelers, the user is closed within the loneliness of his own inner torment.

You may object that, with the passage to Windows, the DOS universe has come to resemble more closely the counter-reformist tolerance of the Macintosh. It's true: Windows represents an Anglican-style schism, big ceremonies in the cathedral, but there is always the possibility of a return to DOS to change things in accordance with bizarre decisions: When it comes down to it, you can decide to ordain women and gays if you want to.

Naturally, the Catholicism and Protestantism of the two systems have nothing to do with the cultural and religious positions of their users. One may wonder whether, as time goes by, the use of one system rather than another leads to profound inner changes. Can you use DOS and be a Vendée supporter? And more: Would Céline have written using Word, WordPerfect, or WordStar? Would Descartes have programmed in Pascal?

And machine code, which lies beneath and decides the destiny of both systems (or environments, if you prefer)? Ah, that belongs to the Old Testament, and is talmudic and cabalistic. The Jewish lobby, as always. ...


Evocative Objects

From Stephen Fry's new, weekly Guardian technology column:

Apple gets plenty of small things wrong, but one big thing it gets right: when you use a device every day, you cannot help, as a human being, but have an emotional relationship with it. It's true of cars and cookers, and it's true of computers. It's true of office blocks and houses, and it's true of mobiles and satnavs. A grey box is not good enough, clunky and ugly is not good enough. Sick building syndrome exists, and so does sick hand-held device syndrome. Fiddly buttons, blocky icons, sickeningly stupid nested menus - these are the enemy. They waste time, militate against function and lower the spirits. They make the user feel frustrated and (quite wrongly) dense. Mechanisms so devilishly, stunningly, jaw-droppingly clever as the kind our world can now furnish us with are No Good Whatsoever if they don't also bring a smile to our face, if they don't make us want to stroke, touch, fondle, fiddle, gurgle, purr and coo. Interacting with a digital device should be like interacting with a baby.

Made me think of Sherry Turkle — see previous post — and Evocative Objects:

For Sherry Turkle, "We think with the objects we love; we love the objects we think with." In Evocative Objects, Turkle collects writings by scientists, humanists, artists, and designers that trace the power of everyday things. These essays reveal objects as emotional and intellectual companions that anchor memory, sustain relationships, and provoke new ideas.

(I can't resist quoting Fry's last paragraph: "If I had a grain of rice for every minute I have spent watching a progress bar over the years, I would be able to make you all a bowl of kedgeree. As it is, I shall cook you all up a weekly article instead. I do hope you'll be able to join me. See you next Saturday.")


Sherry Turkle: 'what will loving come to mean?'

At the entrance to the exhibit is a turtle from the Galapagos Islands, a seminal object in the development of evolutionary theory. The turtle rests in its cage, utterly still. "They could have used a robot," comments my daughter. It was a shame to bring the turtle all this way and put it in a cage for a performance that draws so little on the turtle's "aliveness." I am startled by her comments, both solicitous of the imprisoned turtle because it is alive and unconcerned by its authenticity. The museum has been advertising these turtles as wonders, curiosities, marvels — among the plastic models of life at the museum, here is the life that Darwin saw. I begin to talk with others at the exhibit, parents and children. It is Thanksgiving weekend. The line is long, the crowd frozen in place. My question, "Do you care that the turtle is alive?" is a welcome diversion. A ten-year-old girl would prefer a robot turtle because aliveness comes with aesthetic inconvenience: "Its water looks dirty. Gross." More usually, the votes for the robots echo my daughter's sentiment that in this setting, aliveness doesn't seem worth the trouble. A twelve-year-old girl opines: "For what the turtles do, you didn't have to have the live ones." Her father looks at her, uncomprehending: "But the point is that they are real, that's the whole point." … "If you put in a robot instead of the live turtle, do you think people should be told that the turtle is not alive?" I ask. Not really, say several of the children. Data on "aliveness" can be shared on a "need to know" basis, for a purpose. But what are the purposes of living things? When do we need to know if something is alive?
Sherry Turkle — Edge, 2006: What is Your Dangerous Idea?

Last Thursday evening I was at the Saïd Business School for an OII event, Sherry Turkle talking about Cyberintimacies/Cybersolitudes:

Recent years have seen the development of computational entities - I call them relational artifacts - some of them are software agents and some of them are robots - that present themselves as having states of mind that are affected by their interactions with human beings. These are objects designed to impress not so much through their 'smarts' as through their sociability, their capacity to draw people into cyberintimacy. This presentation comments on their emerging role in our psychological, spiritual and moral lives. They are poised to be the new 'uncanny' in the culture of computing - something known of old and long familiar - yet become strangely unfamiliar. As uncanny objects, they are evocative. They compel us to ask such questions as, 'What kinds of relationships are appropriate to have with machines?' And more generally, 'What is a relationship?'

This was a sceptical talk in the best sense, questioning the cyberpresent and the imminent cyberfuture ('this is very difficult for me — I'm not a Luddite'). The broad thrust of the talk was born of a desire to 'put robots in their place': the debate about machines and AI was once a debate about the machines; now, Professor Turkle believes, the debate is increasingly about our vulnerabilities. Something new is happening in human culture, for robots are not (simply) a kind of doll on to which we project feelings but are produced with "embedded psychology": they appear to be attentive, they look us in the eye, they gesture at us. Human beings are very cheap dates: we ascribe intentionality very quickly. Consequently, we are engaging with these robots, not (just) projecting feelings on to them.

She calls this change in culture the 'robotic moment'. Our encounter with robots crystallises how the larger world of digital technology is affecting our sense of self, our habits of mind. (In turn, software, virtual worlds and devices are preparing us, at times through nothing more than superficiality, for a life with robots.) The earlier, romantic ('essentialist') reaction to the coming of robots ("Why should I talk to a computer about my problems? How can I talk about sibling rivalry to a machine that doesn't have a mother? How could a machine possibly understand?" — 1999 interview) no longer holds sway. Now, she says, 'I hear that humans are faking it and robots are more honest.'

When we're thinking about robots we're thinking, then, about how we conceptualise the self. Narcissism and pragmatism combine: self-objects, in perfect tune with our fragile selves, confirm our sense of who we are. If you have trouble with intimacies, cyberintimacies are useful because they are at the same time cybersolitudes.

Consider the elderly — this is Sherry Turkle writing in Forbes earlier this year:

Twenty-five years ago the Japanese realized that demography was working against them and there would never be enough young people to take care of their aging population. Instead of having foreigners take care of their elderly, they decided to build robots and put them in nursing homes. Doctors and nurses like them; so do family members of the elderly, because it is easier to leave your mom playing with a robot than to leave her staring at a wall or a TV. Very often the elderly like them, I think, mostly because they sense there are no other options. Said one woman about Aibo, Sony's household-entertainment robot, "It is better than a real dog. … It won't do dangerous things, and it won't betray you. … Also, it won't die suddenly and make you feel very sad."

Consider, alternatively, the paralysed man who said that robots can be kinder than nurses but went on to say that even an unpleasant nurse has a story — and 'I can find out about that story'.

For me, the best part of her OII/Saïd talk was her listing of the five points she considers key (also in the Forbes article, 'five troubles that try my tethered soul'). From my notes:

  1. Is anybody listening? What people mostly want from their public space is to be alone with their personal networks, to stay tethered to the objects that confirm their sense of self.
  2. We are losing the time to take our time. We're learning to see ourselves as cyborgs, at one with our devices.
  3. Does speed-dialing bring new dependencies? Children are given mobiles by their parents but the deal is that they then must answer their parents' calls. Tethered children feel different about themselves.
  4. The political consequences of online/virtual life — an acceptance of surveillance, loss of privacy, etc. People learn to become socialised, accepting surveillance as affirmation rather than intrusion.
  5. Do we know the purpose of living things? Authenticity is to us what sex was to the Victorians, threat and obsession, taboo and fascination. "Data on aliveness can be shared on a need-to-know basis."

(On tethering, this from a piece in the New Scientist, 20 September, 2006, is helpful: 'Our new intimacies with our machines create a world where it makes sense to speak of a new state of the self. When someone says "I am on my cell", "online", "on instant messaging" or "on the web", these phrases suggest a new placement of the subject, a subject wired into social existence through technology, a tethered self. I think of tethering as the way we connect to always-on communication devices and to the people and things we reach through them.')

There were good questions from (in particular) Steve Woolgar: just how new is this robotic "threat" (think of the eighteenth-century panic about mechanical puppets) and what of our ability to adapt successfully to "new" challenges ('we learn new repertoires and relate differently to different kinds of "robots"')? I was also glad that someone mentioned E M Forster's 'The Machine Stops' (1909; Wikipedia — which links to online texts; the text can also be found here). Other than that, there was insufficient time for discussion. This was disappointing and so, too, was the caricature of hackers (a 'group of people for whom the computer is the best they can do') from one section of the audience (complete with careless remarks about autism).

Much food for thought, but I came away wishing we could have talked for much longer. I note amongst the students I teach the emergence of good questions about digital technology and a well-established desire to do more with the tools it gives them than sustain a narrow, narcissistic self. Many of them are, of course, using the web in inspiring ways, and the ingenuity of the young in escaping from being tethered (to parents, to authority) is not in doubt.

I want to give the floor to Sherry Turkle and link to other material of hers that I've found useful in thinking about this talk. In the course of a review of her Life on the Screen: Identity in the Age of the Internet, Howard Rheingold fired three questions at Sherry Turkle (I think this is all from 1997). Here are excerpts from her replies:

As human beings become increasingly intertwined with the technology and with each other via the technology, old distinctions about what is specifically human and specifically technological become more complex. Are we living life on the screen or in the screen? Our new technologically enmeshed relationships oblige us to ask to what extent we ourselves have become cyborgs, transgressive mixtures of biology, technology, and code. The traditional distance between people and machines has become harder to maintain. ... The computer is an evocative object that causes old boundaries to be renegotiated. Mind to Mind

We have grown accustomed to thinking of our minds in unitary images. Even those psychodynamic theories that stress that within us there are unconscious as well as conscious aspects, have tended to develop ways of describing the final, functioning "self" in which it acts "as if" it were one. I believe that the experience of cyberspace, the experience of playing selves in various cyber-contexts, perhaps even at the same time, on multiple windows, is a concretization of another way of thinking about the self, not as unitary but as multiple. In this view, we move among various self states, various aspects of self. Our sense of one self is a kind of illusion . . . one that we are able to sustain because we have learned to move fluidly among the self states. What good parenting provides is a relational field in which we become increasingly expert at transitions between self states. Psychological health is not tantamount to achieving a state of oneness, but the ability to make transitions among the many and to reflect on our-selves by standing in a space between states. Life on the screen provides a new context for this psychological practice. One has a new context for negotiating the transitions. One has a new space to stand on for commenting on the complexities and contradictions among the selves. So, experiences in cyberspace encourage us to discover and find a new way to talk about the self as multiple and about psychological health not in terms of constructing a one but of negotiating a many. Mind to Mind 

At one level, the computer is a tool. It helps us write, keep track of our accounts, and communicate with others. Beyond this, the computer offers us both new models of mind and a new medium on which to project our ideas and fantasies. Most recently, the computer has become even more than tool and mirror: We are able to step through the looking glass. We are learning to live in virtual worlds. We may find ourselves alone as we navigate virtual oceans, unravel virtual mysteries, and engineer virtual skyscrapers. But increasingly, when we step through the looking glass, other people are there as well. In the story of constructing identity in the culture of simulation, experiences on the Internet figure prominently, but these experiences can only be understood as part of a larger cultural context. That context is the story of the eroding boundaries between the real and the virtual, the animate and the inanimate, the unitary and the multiple self which is occurring in both advanced scientific fields of research and the patterns of everyday life. From scientists trying to create artificial life to children "morphing" through a series of virtual personae, we shall see evidence of fundamental shifts in the way we create and experience human identity. But it is on the Internet that our confrontations with technology as it collides with our sense of human identity are fresh, even raw. In the real-time communities of cyberspace, we are dwellers on the threshold between the real and virtual, unsure of our footing, inventing ourselves as we go along. As players participate, they become authors not only of text but of themselves, constructing new selves through social interaction. Mind to Mind 

And this is from the Edge piece (2006) quoted at the start:

Do plans to provide relational robots to attend to children and the elderly make us less likely to look for other solutions for their care? People come to feel love for their robots, but if our experience with relational artifacts is based on a fundamentally deceitful interchange, can it be good for us? Or might it be good for us in the "feel good" sense, but bad for us in our lives as moral beings? Relationships with robots bring us back to Darwin and his dangerous idea: the challenge to human uniqueness. When we see children and the elderly exchanging tendernesses with robotic pets the most important question is not whether children will love their robotic pets more than their real life pets or even their parents, but rather, what will loving come to mean?

Also worth looking up: the MIT Initiative on Technology and Self and Evocative Objects, a new book edited by Sherry Turkle. The talk was filmed, so I assume there'll be a webcast and that it will appear here.


Dymaxion cubicle

A week ago today, I was in the Design Museum (enjoying the Zaha Hadid exhibition — a few photos here, though sadly I couldn't do her wonderful project paintings justice). A surprise to me was the Buckminster Fuller cubicle door drawing — in the Gents. I seemed to have the room to myself, so I took a couple of photos of the door (wondering what I'd say if someone came in or, worse by far, if the cubicle turned out not to be empty after all).


So that was that, and then Stowe noticed that Dopplr had used Buckminster Fuller's Dymaxion Map in their Dopplr 100 launch. From Wikipedia:

Wikipedia: Unfolded Dymaxion map with nearly-contiguous land masses

The Dymaxion map of the Earth is a projection of a global map onto the surface of a polyhedron, which can then be unfolded to a net in many different ways and flattened to form a two-dimensional map which retains most of the relative proportional integrity of the globe map. It was created by Buckminster Fuller, and patented by him in 1946, the patent application showing a projection onto a cuboctahedron. The 1954 version published by Fuller under the title The AirOcean World Map used a slightly modified but mostly regular icosahedron as the base for the projection, and this is the version most commonly referred to today. The name Dymaxion was applied by Fuller to several of his inventions.

Unlike most other projections, the Dymaxion is intended purely for representations of the entire globe. Each face of the polyhedron is a gnomonic projection, so zooming in on one such face renders the Dymaxion equivalent to such a projection.
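
A gnomonic projection, for the curious, sends each point on the sphere along a ray from the sphere's centre onto a plane tangent at a chosen point; for the Dymaxion map, that point is the centre of each icosahedron face. Here is a minimal Python sketch of the standard formulas, purely illustrative and with an arbitrarily chosen tangent point:

    from math import sin, cos, radians

    def gnomonic(lat, lon, lat0, lon0):
        """Project (lat, lon), in degrees, onto the plane tangent to the
        sphere at (lat0, lon0), projecting from the sphere's centre.
        Returns planar (x, y) in units of the sphere's radius."""
        phi, lam = radians(lat), radians(lon)
        phi0, lam0 = radians(lat0), radians(lon0)
        # cos of the angular distance from the tangent point; the projection
        # is only defined for the near hemisphere (cos_c > 0)
        cos_c = sin(phi0) * sin(phi) + cos(phi0) * cos(phi) * cos(lam - lam0)
        if cos_c <= 0:
            raise ValueError("point is on the far hemisphere")
        x = cos(phi) * sin(lam - lam0) / cos_c
        y = (cos(phi0) * sin(phi) - sin(phi0) * cos(phi) * cos(lam - lam0)) / cos_c
        return x, y

    # e.g. London, as seen on a face centred (hypothetically) on the North Atlantic:
    print(gnomonic(51.5, -0.1, 45.0, -30.0))

Under a gnomonic projection every great circle maps to a straight line, and distortion is smallest near the tangent point, which is why Fuller's small triangular faces each flatten so gracefully.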

Dymaxion map folded into an icosahedron

Fuller claimed his map had several advantages over other projections for world maps. It has less distortion of relative size of areas, most notably when compared to the Mercator projection; and less distortion of shapes of areas, notably when compared to the Gall-Peters projection. Other compromise projections attempt a similar trade-off.

More unusually, the Dymaxion map has no 'right way up'. Fuller frequently argued that in the universe there is no 'up' and 'down', or 'north' and 'south': only 'in' and 'out'. Gravitational forces of the stars and planets created 'in', meaning 'towards the gravitational center', and 'out', meaning 'away from the gravitational center'. He linked the north-up-superior/south-down-inferior presentation of most other world maps to cultural bias. Note that there are some other maps without north at the top.

There is no one 'correct' view of the Dymaxion map. Peeling the triangular faces of the icosahedron apart in one way results in an icosahedral net that shows an almost contiguous land mass comprising all of earth's continents - not groups of continents divided by oceans. Peeling the solid apart in a different way presents a view of the world dominated by connected oceans surrounded by land.

Which set me thinking: that Buckminster Fuller is someone we ought to be teaching in schools, of course (and I can see how we might start doing that easily enough — and soon), and also about Dopplr and good design. And for another very cool Dopplr ... er ... effect: if you've not seen their sparkline stack, go and read Matt's post about it. You really should.
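
A sparkline, if you've not met the term, is a tiny, word-sized chart run in with the text. A throwaway Python sketch of the idea, nothing to do with Dopplr's actual implementation:

    def sparkline(values):
        """Render a sequence of numbers as a one-line chart made of
        Unicode block characters: a crude, terminal-friendly sparkline."""
        bars = "▁▂▃▄▅▆▇█"
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1  # avoid dividing by zero on flat data
        return "".join(bars[round((v - lo) / span * (len(bars) - 1))] for v in values)

    print(sparkline([2, 4, 8, 5, 9, 3, 7, 6]))  # ▁▃▇▄█▂▆▅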


Sharing, Privacy and Trust in Our Networked World

— the title of a very long report by Harris Interactive on behalf of the OCLC, available for download (pdf) here. (You can also download sections of the report from here.) In its conclusion it poses the question, 'what are the services and incentives that online libraries could offer users to entice them to come back or to visit more often or even devote some of their own time to help create a social library site?'.

This OCLC membership report explores this web of social participation and cooperation on the Internet and how it may impact the library’s role, including:

  • The use of social networking, social media, commercial and library services on the Web
  • How and what users and librarians share on the Web and their attitudes toward related privacy issues
  • Opinions on privacy online
  • Libraries’ current and future roles in social networking

Any report this long is going to take time to read and digest, but a look at some of the conclusions should whet the appetite:

The drive to participate, to build, to seek out communities is certainly nothing new. “Connect with friends,” “be part of a group,” “have fun” and “express myself” are the top motives for using social networks according to our research. We could as easily be describing the motives behind the rise of the telephone, civic associations or, more recently, the cell phone, or the motivations that drew e-mail from the office into the home. The motives that are driving the rise of social networking are not unique. And yet, this particular Internet innovation, the social networking craze, feels different. It doesn’t seem to be playing out like the digital revolutions that preceded it. Social networking is doing something more than advancing communications between individuals, driving commerce or speeding connectivity. It is redefining roles, muddying the waters between audience and creator, rules and relationships, trust and security, private and public. And the roles are changing, not just for a few but for everyone, and every service, on the Web. Whether one views this new social landscape as a great opportunity for improved information creation and exchange or as a messy playground to be tidied up to restore order, depends on one’s point of view. …

We see a social Web developing in an environment where users and librarians have dissimilar, perhaps conflicting, views on sharing and privacy. There is an imbalance. Librarians view their role as protectors of privacy; it is their professional obligation. They believe their users expect this of them. Users want privacy protection, but not for all services. They want the ability to control the protection, but not at the expense of participation. …

… librarians have pioneered many of the digital services we now see in broad use on the Web: intranets to share resources, electronic information databases and “ask-an-expert” services. And although it took some librarians a while to embrace the use of search engines as hubs for information access, librarians are now Googling more frequently than their users and teaching users how to maximize the potential of this powerful tool. But, unfortunately, librarians are not pioneering the social Web.

And from the final section of the conclusion, 'Open the Doors':

Our perceptions become our realities, and often, also our limitations. This was clearly the case for the authors of this report when we began our research on social networks a year ago. There is no doubt that our initial perceptions of social networks influenced our approach to this study. Handicapped by only limited personal experiences with sites, we began our study as we had every study before it—by looking at social networks as a service or set of services to be studied, learned and implemented. We conceived of a social library as a library of traditional services enhanced by a set of social tools—wikis, blogs, mashups and podcasts. Integrated services, of course, user-friendly for sure and offering superior self-service. We were wrong. Our view, after living with the data, struggling with the findings, listening to experts and creating our own social spaces, is quite different. Becoming engaged in the social Web is not about learning new services or mastering new technologies. To create a checklist of social tools for librarians to learn or to generate a “top ten” list of services to implement on the current library Web site would be shortsighted. Such lists exist. Resist the urge to use them.

The social Web is not being built by augmenting traditional Web sites with new tools. And a social library will not be created by implementing a list of social software features on our current sites. The social Web is being created by opening the doors to the production of the Web, dismantling the current structures and inviting users in to create their content and establish new rules. Open the library doors, invite mass participation by users and relax the rules of privacy. It will be messy. The rules of the new social Web are messy. The rules of the new social library will be equally messy. But mass participation and a little chaos often create the most exciting venues for collaboration, creativity, community building—and transformation.


Primary clues

Economist, my bold: 

… Frith Manor in Barnet, a London suburb, one of England's biggest primary schools. … When the new buildings opened earlier this year, each classroom was equipped with an “interactive whiteboard” (IWB)—a screen on the wall that talks wirelessly to a laptop tucked away to one side. There is even one in the school nursery, beside the climbing frame and set low enough for three-year-olds to reach. This technology is so useful that it would be cost-effective to kit out every primary classroom in the country with a screen, according to an independent evaluation of IWBs in primary schools, published on October 9th. Teachers were able to monitor children's progress more effectively, and spent less time planning lessons and marking papers. Difficult tasks, such as using a ruler or a thermometer, were easier to demonstrate. Children paid more attention, behaved better and, most importantly, learned more.

Until recently, it was not clear that the oodles of money the government has been spending on school computers was paying off. Too many schools put the equipment in separate rooms that had to be booked in advance, rather than integrating it into every lesson. And teachers hated taking classes where every child faced the wall and stared at a screen. An evaluation in January of the use of IWBs in secondary schools found no clear benefits.

But primary schools, it seems, may be different. Technology fits well with the sort of participatory whole-class teaching that predominates in the early years, the study found; in many secondary schools it is consigned to the odd power-point presentation, passively received. In primary classrooms, teachers who have used the technology for longest are seeing the greatest benefits, this latest review concludes. 

More teachers use computers in the classroom in Britain than anywhere else in Europe (see chart). Almost every school already has at least one IWB, and quite a few have one in every classroom. And unlike most other places, Britain has put more computer technology in primary classrooms than in secondary ones. That now looks prescient.

At Frith Manor … pupils are motivated by being able to show what they know.


Satire under the Nazis

Via the excellent Smashing Telly, Laughing With Hitler, originally on BBC Four and now on Google Video. It has its weaknesses, but if you're interested in satire you'll surely get a lot out of watching it. Much struck home — some of it amusing, plenty that was simply shocking:

  • Werner Finck ('The bad times are over, we now have a thousand year Reich to get through'; 'How odd: it's spring, but everything is turning brown') and his club, Die Katakombe — look around the 8 minute mark.
  • Traubert Petter and his performing chimps (c 28 minutes). The chimps were taught to give the Hitler salute (to the initial, stupid acclaim of party members — 'Even the monkeys greet us'), but then a party decree was issued banning apes from saluting the Führer. Traubert Petter was sent to serve on the Russian front (and survived).
  • Fritz Muliar (c 32 minutes) who at 21 wrote his last will and testament, thinking he would be sentenced to death for making jokes about Hitler. Instead, he endured five years of hard labour in a penal battalion in Russia: 'I thought I would never laugh again'.

  • Robert Dorsay (c 48 minutes): opponent of the Nazis, he was betrayed by a fellow actor and was executed on 29 October, 1943, for telling jokes and making defeatist remarks.
  • Dieter Hildebrandt (c 51/52 minutes): 'In those days you took a tiny hammer and hit a small bell and it went [loud, reverberating noise]. Today, you hit a huge bell with a huge hammer and it goes 'ping'.'

  • Fr Joseph Müller (c 52 minutes): parish priest of Groß Düngen, he was arrested (11 May, 1944) by the Gestapo. Appearing in the People's Court before Roland Freisler, he was found guilty, sentenced to the guillotine and was executed on 11 September, 1944 — for preaching Christian values and telling a joke about a dying soldier: 'Show me the people that I'm dying for', says the dying soldier. A picture of Hitler and a picture of Göring are placed by him, one on each side. The soldier dies, saying, 'Now I shall die like Jesus Christ, between two criminals'.


Life, the web — all a tangle

The interview Tim Berners-Lee gave last year (IBM developerWorks) was widely reported. I blogged it, Web 2.0: 'what the Web was supposed to be all along', in August 2006, shortly after it was posted. What most struck me about it was expressed pithily by Sir Tim in a remark about the web made on an earlier occasion — the MIT Technology Review Emerging Technologies conference in 2005, as reported by Andy Carvin in Tim Berners-Lee: Weaving a Semantic Web:

The original thing I wanted to do was to make it a collaborative medium, a place where we (could) all meet and read and write.

At the MIT conference, Sir Tim talked about Marc Andreessen and the emergence of a commercial web browser. In the IBM developerWorks interview he said of his web browser,

… the original World Wide Web browser of course was also an editor. … I really wanted it to be a collaborative authoring tool. And for some reason it didn't really take off that way.  And we could discuss for ages why it didn't. … I've always felt frustrated that most people don't...didn't have write access.

Just a couple of weeks ago, I came across a 1997 Time magazine piece about Tim Berners-Lee by Robert Wright, The Man Who Invented The Web:

Berners-Lee considers the Web an example of how early, random forces are amplified through time. "It was an accident of fate that all the first [commercially successful] programs were browsers and not editors," he says. To see how different things might have been, you have to watch him gleefully wield his original browser--a browser and editor--at his desk. He's working on one document and--flash--in a few user-friendly keystrokes, it is linked to another document. One document can be on his computer "desktop"--for his eyes only--another can be accessible to his colleagues or his family, and another can be public. A seamless neural connection between his brain and the social brain. … he is grateful that Andreessen co-authored a user-friendly browser and thus brought the Web to the public, even if in non-ideal form. Yet it can't have been easy watching Andreessen become the darling of the media after writing a third-generation browser that lacked basic editing capabilities.

Now the web is 'finally starting' to follow 'the technological lines he envisioned (… as software evolves)':

Berners-Lee, standing at a blackboard, draws a graph, as he's prone to do. It arrays social groups by size. Families, workplace groups, schools, towns, companies, the nation, the planet. The Web could in theory make things work smoothly at all of these levels, as well as between them. That, indeed, was the original idea--an organic expanse of collaboration. … "At the end of the day, it's up to us: how we actually react, and how we teach our children, and the values we instill." He points back to the graph. "I believe we and our children should be active at all points along this."

So, a fundamental deviation from Tim Berners-Lee's vision for the web occurred in the form taken by popular, commercially viable web browsers. In Dave Winer's view, this early deviation was heavily reinforced by Microsoft:

Since the re-rollout of Office in 1996, it's been really clear why Microsoft was so hell-bent at first owning and then suffocating the web browser, along with the web. … Because for them, writing was not something that would be done in a web browser, if they improved their browser as a writing tool, that would be the end of Word, and with it, a big reason for using Office. … If instead, Microsoft had embraced the web, and with it the shift in their product line and economics, in 1995, we'd have a much richer writing environment today. Blogging would have happened sooner, in a bigger way. It's hard to imagine how much the sins of Microsoft cost all of us.

What a tangled thing technology is (Berners-Lee — 'The Web is a tangle, your life is a tangle – get used to it'). I hope very much that Ted Nelson has brought Geeks Bearing Gifts nearer to publication. Meanwhile, the 1997 Time piece made a little more of the road map of the web's evolution clear to me. The read-only nature of the successful web browsers that came after Sir Tim's explains a great deal about how many an adult of a certain age perceives the web. I think of John Naughton's sketch of how today's 22-year-old conceives of the web, and of Andrew McAfee's comment,

Evidence is mounting that younger people don’t think of the Internet as a collection of content that other people produce for them to consume. Instead, they think about it as a dynamic, emergent, and peer-produced repository to which they’re eager to contribute.

And I think back to Bradley Horowitz's talk at the London March 2007 FOWA meeting, which I wrote about here — and Twittered at the time: 'from a hierarchy of creator(s)/synthesisers/consumers (1:10:100) towards a web world of participation (100)'.

If the history of the web browser had itself been different, would we have suffered the misalignment of perceptions about the essentially social, creative nature of the web ('I believe we and our children should be active at all points along this') that often exists now between today's different generations of users?

Well, and in any case, 'The Web is no longer the future of computing, computing is now about the Web' (Dare Obasanjo).

***

Footnote. Two other things from the Time piece (I pass them on as given there): ' … contrary to the mythology surrounding Netscape, it was he [Berners-Lee], not Andreessen, who wrote the first "graphical user interface" Web browser. (Nor was Andreessen's browser the first to feature pictures; but it was the first to put pictures and text in the same window, a key innovation.)'

Wikipedia's timeline of web browsers is available here.


Google and Jaiku: living the social network

I found Chris Messina's post, Theories about Google’s acquisition of Jaiku, and two passages in it really caught my attention:

In the future, you will buy a cellphone-like device. It will have a connection to the internet, no matter what. And it’ll probably be powered over the air. The device will be tradeable with your friends and will retain no solid-state memory. You literally could pick up a device on a park bench, login with your OpenID (IP-routed people, right?) from any number of service providers (though the best ones will be provided by the credit card companies). Your user data will live in the cloud and be delivered in bursts via myriad APIs strung together and then authorized with OAuth to accomplish specific tasks as they manifest. If you want to make a phone call, you call up the function on the touch screen and it’s all web-based, and looks and behaves natively. Your address book lives in Google-land on some server, and not in the phone. You start typing someone’s name and not only does it pull the latest photos of the first five to ten people it matches, but it does so in a distributed fashion, plucking the data from hcards across the web, grabbing both the most up-to-date contact information, the person’s hcalendar availability and their presence. It’s basically an IM-style buddy list for presence, and the data never grows old and never goes stale. Instead of just seeing someone’s inert photo when you bring up their record in your address book, you see all manner of social and presence data. Hell, you might even get a picture of their current location. This is the lowercase semantic web in action where the people who still hold on to figments of their privacy will become increasingly marginalized through obfuscation and increasingly invisible to the network. I don’t have an answer for this or a moral judgement on it; it’s going to happen one way or another. …

In the scheme of things, it really doesn’t have anything to do with Twitter, other than that Twitter is a dumb network that happens to transport simple messages really well in a format like Jaiku’s while Jaiku is a mobile service that happens to send dumb messages around to demonstrate what social presence on the phone could look like. These services are actually night and day different and it’s no wonder that Google bought Jaiku and not Twitter. And hey, it’s not a contest either, it’s just that Twitter didn’t have anything to offer Google in terms of development for their mobile strategy. Twitter is made up of web people and is therefore a content strategy; Jaiku folks do web stuff pretty well, but they also do client stuff pretty well. I mean, Andy used to work at Flock on a web browser. Jyri used to work at Nokia on devices. Jaiku practically invented the social presence browser for the phone (though, I might add, Facebook snatched up Parakey out of Google’s clenches, denying them of Joe Hewitt, who built the first web-based presence app with the Facebook iPhone app). If anything, the nearest approximation to Jaiku is Plazes, which actually does the location-cum-presence thing and has mobile development experience.

A long post, full of good links and well worth pondering.
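
The 'hcards' Messina mentions are the hCard microformat: contact details marked up in ordinary HTML with class names such as fn and tel, so that software elsewhere can pluck them out. A rough Python sketch of that plucking, with hypothetical markup and none of the robustness a real parser needs:

    from html.parser import HTMLParser

    class HCardParser(HTMLParser):
        """Collect the text of any element whose class list contains
        one of a few classic hCard field names."""
        FIELDS = {"fn", "tel", "email"}

        def __init__(self):
            super().__init__()
            self.stack = []  # the hCard fields each open element claims
            self.card = {}

        def handle_starttag(self, tag, attrs):
            classes = (dict(attrs).get("class") or "").split()
            self.stack.append(self.FIELDS.intersection(classes))

        def handle_endtag(self, tag):
            if self.stack:
                self.stack.pop()

        def handle_data(self, data):
            # credit the text to every field claimed by an open element
            for fields in self.stack:
                for f in fields:
                    self.card.setdefault(f, data.strip())

    html = ('<div class="vcard"><span class="fn">Jyri Engeström</span>'
            '<span class="tel">+358 555 0100</span></div>')
    parser = HCardParser()
    parser.feed(html)
    print(parser.card)  # {'fn': 'Jyri Engeström', 'tel': '+358 555 0100'}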

Incidentally, and for all their significant differences (IP-routed people!), the first paragraph quoted above recalled to mind Bruce Sterling's wonderful riff, Harvey Feldspar's Geoblog, 07.10.2017, on a future without a mobile phone:

"But Mr. Feldspar, suppose this international criminal doesn't carry a mobile?" demanded representative Chuck Kingston (R-Alabama). It would have been rude to point out the obvious. So I didn't. But look, just between you and me: Anybody without a mobile is not any kind of danger to society. He's a pitiful derelict. Because he's got no phone. Duh.

He also has no email, voicemail, pager, chat client, or gaming platform. And probably no maps, guidebooks, Web browser, video player, music player, or radio. No transit tickets, payment system, biometric ID, environmental safety sensor, or Breathalyzer. No alarm clock, camera, laser scanner, navigator, pedometer, flashlight, remote control, or hi-def projector. No house key, office key, car key... Are you still with me? If you don't have a mobile, the modern world is a seething jungle crisscrossed by electric fences crowned with barbed wire. A guy without a mobile is beyond derelict. He's a nonperson.

A non-person, 'increasingly marginalized … increasingly invisible to the network'.