Thinning out, tidying up. Books to Oxfam, books to booksellers. Analogue to digital.

Here’s something I’ve long wanted to consign to my outboard brain. In a book bought eight years ago and now on its way out, these words, attributed to an unnamed headmaster (but I think I know who it is — they’d be utterly characteristic of him):

… four questions to ask myself in any situation:
What are the facts?
What are the issues?
What am I going to do?
Who do I have to tell?

Teaching’s changed over the course of my life, becoming suppler and subtler, gentler and wiser. Kinder. Looking back, there was a lot of focus on “facts” and not always much sensitivity to issues. Facts often seemed to be the issues.

Schools, like families, are crucibles of intense engagement. Those four questions are a great way of collecting yourself in the rush of a crisis. They’ve been of help and they can live here now. The book can go.

This archiving business, though … Opening up the book to get that bit and put it down here, I found forgotten notes on index cards — one about the book, but others to do with a job interview I had nearly 10 years ago — and a post-it with a rather good quotation on it from … ? And now that it’s so easy to digitise and store, what do I keep? When should you forget? What should be put clean away?

Delicious (I)

I started blogging in November 2003 and my first use of Delicious was on 12 July, 2004. I see that many of my entries for that July are, unsurprisingly, tagged “blogging_community”. They’re still there in Delicious, but I can’t find them via the timeline of pages unless I reverse-sort these. Is the Delicious edifice crumbling? I’m glad I’ve a number of local backups dating back over the years. But that is a matter for another post.

When I started using Delicious, it was almost entirely as a prop to help me get up to speed with everything I was discovering online. Those years were hectic. I remember when I first started teaching what a learning curve there was and how weekends and nights, in term and holiday alike, disappeared in preparation, reading and marking for at least the first three years (made the more intense as I evolved into an English teacher, a subject I’d last studied formally in my mid-teens). We all know these periods of unavoidable, passionate engagement as we close with a new subject, a new discipline, a new pursuit.

I look back now to another time when, rather late to the party, I began to register what the arrival of the accessible read-write web meant. It was in November 2003, with the birth of TypePad, that it first hit me: a long period where what had been hard (requiring coding skills that divided the world into the few who had them and the rest of us who, most decidedly, did not) was coming to an end, and the ready ability to publish and be heard (who knew by whom?) was upon us. I knew then that I wanted to be involved in this, the future-already-becoming-the-present.

So, for a long while, Delicious, for me, was nearly all about discovery and very rapid note-taking, itself requiring the investment of much time, if I were to gain even a basic grasp of all this stuff. I learned to read fast but attentively, précis-by-excerpt, tag and bookmark, binding knowledge together in a way that had to do duty in the absence of something more adequate (no appropriate memory theatre, then or now).

Back then, as soon as TypePad made it straightforward to use FeedBurner (June 2006?), it seemed to me right to link together the feed for this blog and that of Delicious (a decision I probably wouldn’t make today, were I starting over). Blogging and bookmarking seemed like the two leaves of a diptych in a period when the pace was both frenetic and apparently inexorably determined by technological change.

Things haven’t got slower (as if — though I think I can now be, and am, more discriminating, both knowing more and being a bit the wiser), but my reading habits have certainly changed. With extensive commuting (c 160 miles a day), the time on a train to read and, more significantly, the year-long experience of using an iPad and Instapaper whilst being connected, the way I work, read and think has changed.

One of the pleasures of living in a more connected world is the constant discovery that changes you thought peculiar to you are going on, simultaneously, in others. I noted Read It Later’s post last week, Is Mobile Affecting When We Read?. I can certainly identify with the use of whitespace time, but I’ve been more struck in the last few months with how I’m storing material up in Instapaper, going back to it, archiving things that once I would have bookmarked straightaway in Delicious, ruminating over others and then, finally, sending myself an email reminder to bookmark X later. And later frequently, now, means Saturday — when I have the time to deal with what has become a sizeable backlog. More filtering happens at that stage, too.

Delicious (backed up locally and in Pinboard) has assumed a different role in my life. No longer the bank of preference for instant notes, it’s where I’m putting things that I’ve generally sifted or gone back to (sometimes a number of times). (Of course, some things still seem worth bookmarking at once, but the reason for that can itself turn out to be depressingly ephemeral.) I’m much more interested now, much more able now, to use Delicious as a repository for things which I’ve had the time, and the perspective, to weigh.

All of which makes Delicious, or something like it, even more important. And I haven’t even begun to talk about the network.

Narrating the work (II)

This resonates with me so much and I see I jotted down some notes about it before. Re-reading the posts involved, and some others, has set me thinking again. Some significant bookmarks I want to keep to hand:

1) Jon Udell, 2001, on the web:

an environment in which everyone can produce as well as consume web content. The web began in this state of grace, soon fell from it, and has recently been trying to find its way back. It's been a hard road, frankly.

That's both beautiful and true.

2) From the same:

There's one talent common to all these creative disciplines: storytelling. We are, as a species, hardwired not only for language but for narrative. A story is, you might say, an evolutionary mechanism designed to focus the attention of a group. Sometimes the point is to entertain, sometimes to teach, often both. The power of narrative, whatever its purpose, flows from a deep human need to identify with a group, and above all to find out what happens next. … It all boils down to just three things: a storyteller, an audience, and a venue.
3) Dave Winer, 2002, discussing an Instant Outliner:

(…) narrating your work is the way to go.

4) This is Jon Udell, back in April 2004, The participant/narrator: owning the role, writing about the "XML-Deviant column at O'Reilly's XML.com … which began in January 2000, [and] would have been called a blog had the term been more current then":

For people who lack the time to closely monitor activity in some area, these bulletins are a way to keep a finger on the pulse. For the participant/narrator, they're a way to build personal brand and -- perhaps -- influence the agenda. It's been clear to me for a long time that the participant/narrator, armed with easy-to-use Web publishing technology (aka blog tools), will be a key player on every professional and civic team.

Now that the hype about political blogs has died down, it's clear that this is the real deal: a grassroots effort to connect a political process to itself, to its constituency, and to the outside world. No fanfare, just steady and reliable information flow. Every team can benefit from this approach. By narrating the work, as Dave Winer once put it, we clarify the work. There can be more than [one] narrator, but it makes sense to have one team member own the primary role just as other members own other roles.

5) Jon Udell, July 2007, Beautiful code, expert minds, discussing a book where coders narrate their work ("Although this is a book by programmers and for programmers, the method of narrating the work process is, in principle, much more widely applicable"):

Access to expert minds is just inherently valuable. We’re entering an era in which we’ll be able to access many more — and many different kinds of — expert minds. I’m looking forward to it.

6) All this was set going again by Dave Winer's fine post yesterday, Narrate Your Work, "a big part of the future Rebooted News system, imho":

I clicked on the page of NYT editorial people on Twitter that I keep and I saw something very different, and this is the point of this story. I saw a news organization at work. Careful to say what they do and don't know. Informing each other on experience with similar stories in the past. Whether they were all reading all of the others' posts, I don't know. They were reading and passing on reports from other Twitter users, even those that didn't work at the Times. They were coordinating the work of a larger community than just people who work at the Times. … real reporters dealing with a true breaking story not just a simulation of a breaking story, let their hair down and share everything they know with the world. This is the impulse of news …

Jon Udell, 2001: "The web's leading blogger is clearly Dave Winer, who has for years pursued parallel careers as a software developer and storyteller (or, he might say, technology journalist)."

Three other passages from Jon Udell's 2001 post stand out for me:

Could it be that, despite Tim Berners-Lee's dream (and mine), the writable web is not the natural state of affairs? That, in fact, it is appropriate for consumers of web content to outnumber producers? And that tools and technologies are not the major constraint on the production of web content? Recent history suggests that the answer to all of these questions is probably yes. Personal computers have forever changed the way people make publications, movies, and music. But they have not changed the people who do these things. If you lack writing or editing or illustration skills, or filmic flair, or musical ability, then desktop publishing or video or music tools can't change that. What they can do -- and it's no small thing -- is help people with latent abilities in these areas discover and grow their talents. …
Blogging as a form of mainstream web entertainment, with its star performers and its popularity ratings, may or may not be a passing fad. What will endure, in any case, matters more: a powerful new way to tell stories that refer to, and make sense of, the documents and messages that we create and exchange in our professional lives. …
It [his project weblog] looks like a newspaper, and indeed serves a similar purpose.



The internet means you don’t have to convince anyone else that something is a good idea before trying it.
Scott Bradner, former trustee of the Internet Society (quoted in Here Comes Everybody)

The communications tools broadly adopted in the last decade are the first to fit human social networks well,
and because they are easily modifiable they can be made to fit better over time.
— Clay Shirky (Here Comes Everybody, p 158) 

Back before Easter, I was at the ICA for the Eno/Shirky evening. One of the books I then read over the break was Here Comes Everybody. I’ve been meaning for some time to put down a few notes about it here. This has grown to be a long post as I’ve added to it, wanting to get a few things out on the page and, so, clearer in my own mind.

It’s a great book to suggest to friends who are not familiar with the technologies Shirky discusses as it hides its knowledge well — but there are still leads to follow up. The modest ten or so pages of the Bibliography threw up a number of articles I'd either not heard of before or hadn’t visited in a long while. In the former camp, I recommend: Anderson: More Is Different (Science — 1972); R H Coase: The Nature of the Firm (pdf) — a 1937 economics paper; Richard P. Gabriel — Lisp: Good News, Bad News, How to Win Big: worse is better (1991); Alan Page Fiske: Human Sociality. (There’s an online “webliography” here.) And chapters 8–11, covering so many big topics — social capital; three kinds of loss (some solve-a-hard-problem jobs; some social bargains; negative aspects to new freedoms); small world networks; more on social capital; failure (‘open source … is outfailing’ commercial efforts, 245); more on groups (‘every working system is a mix of social and technological factors’, 260) — hit my Amazon Prime account hard. (Incidentally, there’s a Kevin Kelly piece on “more is different”, Zillionics, that appeared earlier this year. See also Kevin Kelly’s The Google Way of Science and Wired’s The Petabyte Age: Because More Isn't Just More — More Is Different.)

Further reading to one side, a number of things discussed in the book particularly interested me straightaway. Firstly, sociality, privacy and exposure online. Leisa recently posted Ambient Exposure, an update (of sorts) to her post of last March, Ambient Intimacy. The titles tell their own story. Early on, Clay writes about ‘how dramatically connected we've become to one another … [how much] information we give off about our selves’. This took me back to Adam Greenfield’s recent talk at the Royal Society (I’ve also been re-reading Everyware). Our love of flocking is being fed handsomely by means of the new tools Clay Shirky discusses so well.

Privacy is always coming up in conversations at school about online life, and what I’m hearing suggests our students are beginning to look at privacy and exposure with growing circumspection. Facebook’s People You May Know functionality has made some sit up and wonder where social software might be taking us. We’re slowly acquiring a stronger sense of how seduction through imagined privacy works (alone in a room, save for screen and keyboard) and a more developed understanding of what it means to write for unseen audiences. Meanwhile, there are things to be unlearned: ‘those of us who grew up with a strong separation between communication and broadcast media … assume that if something is out where we can find it, it must have been written for us. … Now that the cost of posting things in a global medium has collapsed, much of what gets posted on any given day is in public but not for the public’ (90).  In the Bibliography, Clay refers to a post of Danny O’Brien’s — all about register — which is a longtime favourite of mine, too.

Then there was what the book had to say about media and journalism. Simon Waldman, well-placed to pass comment, on chapters 3 and 4:

The chapters most relevant to media/journalism - ‘Everyone is a media outlet’ and ‘Publish first, filter later’ should be required reading for pretty much everyone currently sitting in a newspaper/broadcaster. It’s certainly the best thought through thing I’ve read on this, and the comparison to the decline of the scribes when the printing press came in is really well drawn. 

The summary to Chapter 4 (‘Publish, Then Filter’) runs, ‘The media landscape is transformed, because personal communication and publishing, previously separate functions, now shade into one another. One result is to break the older pattern of professional filtering of the good from the mediocre before publication; now such filtering is increasingly social, and happens after the fact’. ‘Filter-then-publish … rested on a scarcity of media that is a thing of the past. The expansion of social media means the only working system is publish-then-filter’ (98). (Language like this can sound a utopian note that rings on in the head long after the book’s been closed, as if we’d entered a world beyond old constraints. And look!: the Praetorian Guard of elite gatekeepers is no more.)

I was interested, too, to read Shirky’s thoughts about the impact of new technologies on institutions. His application of Ronald Coase’s 1937 paper and, in particular, the idea of the Coasean floor (‘activities … [that] are valuable to someone but too expensive to be taken on in any institutional way’), was very striking: the new tools allow ‘serious, complex work [to be] taken on without institutional direction’ and things can now be achieved by ‘loosely coordinated groups’ which previously ‘lay under the Coasean floor’.

We didn't notice how many things were under that floor because, prior to the current era, the alternative to institutional action was usually no action. (47)

Later in the book (107), he comes back to institutions, taking what is happening to media businesses as not unique but prophetic — for ‘All businesses are media businesses … [as] all businesses rely on the managing of information for two audiences — employees and the world’:

The increase in the power of both individuals and groups, outside traditional organisational structures, is unprecedented. Many institutions we rely on today will not survive this change without significant alteration, and the more an institution or industry relies on information as its core product, the greater and more complete the change will be. The linking of symmetrical participation and amateur production makes this period of change remarkable. Symmetrical participation means that once people have the capacity to receive information, they have the capability to send it as well. Owning a television does not give you the ability to make TV shows, but owning a computer means that you can create as well as receive many kinds of content, from the written word through sound and images. Amateur production, the result of all this new capability, means that the category of "consumer" is now a temporary behaviour rather than a permanent identity.

‘Every new user is a potential creator and consumer’ (106) is reminiscent of Bradley Horowitz in Creators, Synthesizers, and Consumers (2006).



Douglas Adams: an awful lot of 'us'

Reading Kevin Marks' post sent me back to that old favourite, Douglas Adams' 1999 piece. I often use the first part of this in talks to both students and adults, but I've spent far too little time on the second half, the bit that Kevin quoted from. Here are some excerpts: 

… ‘interactivity’ is one of those neologisms that Mr Humphrys likes to dangle between a pair of verbal tweezers, but the reason we suddenly need such a word is that during this century we have for the first time been dominated by non-interactive forms of entertainment: cinema, radio, recorded music and television. Before they came along all entertainment was interactive: theatre, music, sport – the performers and audience were there together, and even a respectfully silent audience exerted a powerful shaping presence on the unfolding of whatever drama they were there for. We didn’t need a special word for interactivity in the same way that we don’t (yet) need a special word for people with only one head.

I expect that history will show ‘normal’ mainstream twentieth century media to be the aberration in all this. … 

Because the Internet is so new we still don’t really understand what it is. We mistake it for a type of publishing or broadcasting, because that’s what we’re used to. So people complain that there’s a lot of rubbish online, or that it’s dominated by Americans, or that you can’t necessarily trust what you read on the web. Imagine trying to apply any of those criticisms to what you hear on the telephone. Of course you can’t ‘trust’ what people tell you on the web anymore than you can ‘trust’ what people tell you on megaphones, postcards or in restaurants. Working out the social politics of who you can trust and why is, quite literally, what a very large part of our brain has evolved to do. For some batty reason we turn off this natural scepticism when we see things in any medium which require a lot of work or resources to work in, or in which we can’t easily answer back – like newspapers, television or granite. Hence ‘carved in stone.’ What should concern us is not that we can’t take what we read on the internet on trust – of course you can’t, it’s just people talking – but that we ever got into the dangerous habit of believing what we read in the newspapers or saw on the TV – a mistake that no one who has met an actual journalist would ever make. One of the most important things you learn from the internet is that there is no ‘them’ out there. It’s just an awful lot of ‘us’. … 

Before long, computers will be as trivial and plentiful as chairs (and a couple of decades or so after that, as sheets of paper or grains of sand) and we will cease to be aware of the things. In fact I’m sure we will look back on this last decade and wonder how we could ever have mistaken what we were doing with them for ‘productivity.’

In between reading Kevin's posting and writing this, I gave a talk to Heads of Departments (Faculties) at Marlborough College (where I once taught), in the course of which I talked about this and also about those dates-from-the-point-of-view-of-someone-now-aged-22 from John Naughton (I blogged about them). 

Before heading off to Marlborough, I showed the two slides to some of my 14 year-old students. About the first (the IBM PC), the best comment I got was, 'What was the point of inventing that?'! I suggested you had to start somewhere, and the same student then added, 'They must have found it so frustrating'. How telling is that! With John Naughton's dates, another student commented, 'You're saying how recent all this is. To me, it all feels so old'. 

Back to Douglas Adams: 

… the biggest problem is that we are still the first generation of users, and for all that we may have invented the net, we still don’t really get it. … Most of us are stumbling along in a kind of pidgin version of it, squinting myopically at things the size of fridges on our desks, not quite understanding where email goes, and cursing at the beeps of mobile phones. Our children, however, are doing something completely different. Risto Linturi, research fellow of the Helsinki Telephone Corporation, quoted in Wired magazine, describes the extraordinary behaviour [of] kids in the streets of Helsinki, all carrying cellphones with messaging capabilities. They are not exchanging important business information, they’re just chattering, staying in touch. "We are herd animals," he says. "These kids are connected to their herd – they always know where it’s moving." Pervasive wireless communication, he believes will "bring us back to behaviour patterns that were natural to us and destroy behaviour patterns that were brought about by the limitations of technology."

We are natural villagers. For most of mankind’s history we have lived in very small communities in which we knew everybody and everybody knew us. But gradually there grew to be far too many of us, and our communities became too large and disparate for us to be able to feel a part of them, and our technologies were unequal to the task of drawing us together. But that is changing.

Interactivity. Many-to-many communications. Pervasive networking. These are cumbersome new terms for elements in our lives so fundamental that, before we lost them, we didn’t even know to have names for them.


Sometimes I'm asked by colleagues why students should blog, and what Douglas Adams wrote back in 1999 would sit right in the middle of my answer. But you can approach the question from another angle altogether, by thinking about why teachers should blog. Martin Weller and Tony Hirst gave an OU presentation on 18 May about just this. The presentation draws upon blog postings by Martin that I'd been reading last month. For example, 'reasons why educators should blog':

  • It exposes the process …
  • It provides a useful tool for engaging with other technologies …  
  • It's a good means of getting down, or building up, all those thoughts that never quite get in to journal papers.

These things cut both ways: a good ICT course would recognise the first two bullet-points as important reasons why students should be blogging, and the third, with some adaptation, would be true for students, too. (And teachers, of course, should blog for the same core reasons students should — the ones Douglas Adams lays out.)

Martin's earlier posting proffered other reasons, of which I would say that 'the economics of reputation' is also absolutely central to why our students should blog:

… increasingly one’s reputation online is seen as a valuable commodity. This is partly because a good reputation is difficult to establish and also because in an environment where content is free and widely available then quality becomes a differentiating factor.

(Again, the other reasons he cites there also have something to say to students, allowing for the fact that his focus was on why educators should blog: engagement with your subject area; increased reflection; personal status and payback; organisational status; link to teaching; eating our own dog food.) In addition to these two postings, Martin has now posted his and Tony Hirst's workshop.

I have a great deal of respect for what's happening at the OU, so it's nice to be able to wind up with this from John Naughton, blogging today:

Tony Hirst pointed me to a lovely blog by a T189 student. It's a stunning example of the usefulness of blogging in education. It provides the student with a tool for publication and self-expression, and it provides very useful information for teachers (e.g. in this case about the difficulties T189 students are experiencing with our Flash-based tutorials). I wish more of our students would blog.


Tumblr: keeping the croutons coming

I can't make up my mind about Tumblr.

Tumblr FAQs:

What's a tumblelog?

To make a simple analogy: If blogs are journals, tumblelogs are scrapbooks.

You can also look at tumblelogs as slightly more structured blogs that make it easier, faster, and more fun to post and share stuff you find or create.

Is Tumblr better/worse than Blogger, TypePad, FaceBook, etc.? 

It's totally different. That's why we built it, and why we love it so much.

Blogs are great, but they can be a lot of work. And they're really built to handle longer-form text posts. Tumblelogs, on the other hand, let you easily and quickly post and share anything you find or create.

I'm playing around with a Tumblr scrapbook here. I like the idea of a scrapbook — something more impromptu than a blog, where stuff that just catches my attention (often my eye) can go. Unlike del.icio.us, this is stuff I'm not seeking to tag — but when the day comes, as it surely will, that I need to search for something in my tumblelog then its tag-less-ness will be a weakness. (So this morning I stuck the photo of Will Self's room from today's Guardian in my Tumblr scrapbook, but also del.icio.us'd the Guardian piece.)

Tom Carden, whose Tumblr site is here, and from whom I nicked the first couple of things I Tumbled (to get me started), wrote earlier this week:

I’ve got a lot of personal enjoyment and utility out of posting links to del.icio.us, and it seems like a lot of other people get value from reading them - either directly, or on aggregate. I’ve been trying out Tumblr recently and I’m thrilled that it’s allowing me to do the same thing but with images and videos, and the occasional quote. It’s a very free and easy way to keep track of things I think are noteworthy.

A great part of Tumblr's appeal is its brilliant ease of use. Tumblr comes from 'a smallish web-development company in New York City called Davidville' and they have a Tumblr-related blog here. (They also make Senduit.)

Now, if Google Notebook (which is continuing to develop very well) could acquire more Tumblr-like features … Or if del.icio.us could …

Forgetting, again

A post by Abe Burmeister set me thinking, again, about forgetting.

A friend said yesterday, 'After all, when we were young, at some point, we all did something, whatever it was — ran naked down some street, something …'. A photo taken then meant that we were caught forever, always running naked down that street, but it might have disappeared for much of its life, gathering dust in some drawer. Now that photo makes it (straight) to the web and to a kind of permanence and presence (even ubiquity) never before possible. The years pass, but the (by now distributed) photo doesn't.

Back in 2003, Fabio Sergio wrote about how,

… with everyone apparently fascinated with ways to remember I find myself toying with the idea of "technologies for forgetting" … All in all we are facing a future strung tight between the ideal, pacific world of the Memex, where man will be given "access to and command over the inherited knowledge of the ages", and one where Lenny Nero will feel at home, characterized by our collective inability to let go of our past.

I keep hoping (and working) for the first scenario to become our future, but recognize it will require active involvement from everyone, driven by ample awareness of what's at stake.

Over at Abstract Dynamics yesterday, Abe was saying something similar (in a piece about Gmail and Google's goal of "organizing all the world's information"):

Some information is meant to disappear, sometimes for the better, sometimes for the worse. Google it seems is not willing to make that distinction, although ironically they more than any other entity have the power to make things disappear. …

'Some information is meant to disappear', or be mediated. The memory of that time when as a kid you ran naked down some street can linger on in the telling, to be recalled years later, embellished and without its sting, a source of amusement, leg-pulling and amicable, entertaining embarrassment — your children delighted both at your discomfort and at discovering that once you were just like them. But a stark photo on the web, that's copied and posted again and again, sent to the senior partner of your new firm the day you're about to start working there, published in a newspaper years later …

Fabio imagines an angry argument between Mr A and Mr B, and imagines it twice — unfilmed and filmed. In the first case,

After a few days they hook up again, matters having cooled off and all, and they talk about the incident, re-living the discussion while trying to clear things up. The inherent fuzziness of their recollection helps in dumbing sharp edges down, as we have been proven to remember positive things better and negative things less clearly, and in the end they agree on a common explanation of the argument, thus creating the possibility for their relationship to evolve around the event. What is important here, though, is that what actually happened matters as much as what they mutually agreed happened. The final experience, mediated through their second conversation, has the opportunity to change from negative to positive, leaving clarification in place of contrast. All's well that ends well, right?

With filmed evidence of what actually happened,

… there will be simply less room to maneuver for both of them, less room to mediate experience into memory. Due to the timelessness quality of the digitally-produced artifacts, which potentially shine as new forever after they've been first created, Mr. A's descendants will still be able to hear (and judge) Mr. B's words and attitude. Take this one social magnitude level higher and what you get is a society unable to let go of its past's tiniest details.

Forgetting strikes me as something we need to pay a lot more attention to as we go forward with digital technology. It crops up in surprisingly different contexts (IT departments should check out danah boyd's post about teenagers and passwords — 'Technology is a bit too obsessed with remembering; there's a lot of value in forgetting').

And now I remember it, Anne Galloway wrote back in 2003 about forgetting:

We need to forget certain things to survive and stay together. What will happen if everything is tracked and recorded? How will we be able to forget? Will the owners and administrators of the data allow us to forget? For example, we have social and cultural practices (expectations and norms) in place that accommodate comments MADE IN PASSING ... what if certain comments are not allowed to pass?

And also this from 2004, on the Forgetting Machine:

So I was reminded of my Forgetting Machine. And that I am trying to build something that reminds us that not all things can or should be remembered. A tricky task, for sure! Part of this involves the creative corruption of information - along the lines of bricolage or remixing - as well as the selective and wholesale deletion of information.

Anne's paper (2006, I think), 'Collective remembering and the importance of forgetting: a critical design challenge', is available here (pdf). From the Abstract:

Memories are understood as relations of power through which we, as individuals and groups, actively negotiate and decide what can be recollected and what can be forgotten. And without being able to decide what we can remember and forget, we are effectively left without hope of becoming different people or creating different worlds.

That's absolutely my concern for the teenagers posting photos and stories about themselves and each other. I want for them (as I want for my own children) the possibility of becoming different people: the chance to let experience grow into memory, to be allowed to let go, to forget.

Anne has a fine phrase in her paper, 'ubiquitous machines of merciless memory' — 'there is such a thing as too much memory … we need to forget in order to live'. Fabio Sergio asks: 'Are we heading towards an über-politically correct world, where we'll be forced to always ponder all of our words for fear of getting quoted 20 years from now … a future devoid of the room for doubt?'

This, then, is something we also need to be talking about in ICT: forgetting and remembering. I commend Anne's paper very warmly. It asks wise questions — 'What does it really mean if the memories held by our machines never change or get forgotten?' — and remembers that forgetting can be 'a kind of affirmation rather than … a denial. … the value of forgetting is its ability to interrupt time or escape temporal continuity, and thus (re)imagine human experience'. Her paper challenges designers to remember all this, too, and to design accordingly and wisely.

Ray Ozzie

This (which I got to via Jon Udell) is from 2003:

Many years ago, in the mid 70's at University of Illinois, I was fortunate enough to have been touched by something called PLATO - an acronym for Programmed Logic for Automated Teaching Operations. At the time, PLATO was a mainframe-based time sharing system with about a thousand custom multimedia terminals - that is, 512x512 graphics, touch screen pointing device, synchronized microfiche and audio, and "always on" connectivity - quite an achievement for the time, particularly given that I was still using coding sheets and Hollerith cards to do classwork.

Although primarily intended as a computer-assisted teaching system, PLATO evolved into the first large scale "online community", with eMail, online discussions, instant messaging, chat rooms, remote screen sharing, and massive multiplayer gaming. We established long-distance relationships for work and for love; we balanced the duality of our real and virtual lives. In short, the tens or hundreds of thousands of us who had a chance to experience PLATO in those days were afforded a preview of what was to come in the Internet era - an era of global ubiquitous communications and interaction.

As many of us who had spent years immersed in the PLATO environment left and entered the "real world", we were shocked and dismayed to find a world lacking electronic connection. And as I entered the business world, it simply made no sense to me that computers were being used solely for computing and "data processing"; the collaborative online work environment that I'd taken for granted, that I'd used day in and day out, was simply missing in action. Our work lives are all about interpersonal connections, our businesses processes are structured into connections amongst people and systems that must be coordinated. What better use of technology than to help people to connect?

And so, for most of my life since that time, it has been my goal to explore what lies at the intersection between people, organizations, and technology. To attempt to utilize technology - to mold it, to shape it into a form such that it can help organizations to achieve a greater "return on connection" from employee, customer, and partner relationships, and to help individuals to strengthen the bonds between themselves and those with whom they interact - online. Because - empirically - collaborative technology has substantive value, in reducing the cost of coordination, in providing shared awareness across differences in space and time.

The way that I explore is to build products, and to see how they are used. To see what works, and what doesn't. To listen, to interact, to refine. Because cooperative work exists at the intersection between people, organizations, and technology, collaborative systems are truly fascinating: in order to serve people effectively, technologists must, for example, understand social dynamics, social networks, human factors. …

The bottom line to "why?" To create real value in a dimension that I passionately believe in.

I'm staying out of the Lotus Notes quagmire ('We spent years and years at Lotus trying to convince people of the "higher order" value of collaborative processes, sharing, and KM. And I learned the hard way that fighting what appear to be natural organizational and social dynamics is very tough'), but am just recording here something I read today and found inspiring and really rather astonishing — not so much for its content as for how Ozzie traces the roots of his vision back to something he was working with in the mid-70s. It made me look out his more famous posting about Live Clipboard again:

I’ve been wondering, “what would it take to enable users themselves to wire-the-web”? … The world of the Web today is enabled by the power of a simple user model – Address/Go or Link, Back, Forward, Home. And certain “in-page” models have emerged from the ether: clicking the logo in the upper-left is Home, search in the upper-right, Legal/Corporate/Privacy/etc at the bottom. How we interact with shopping carts is now fairly standard. But each site is still in many ways like a standalone application. Data inside of one site is contained within a silo. Sure, we can cut and paste text string fragments from here to there, but the excitement on the web these days is all about “structured data” such as Contacts and Profiles, Events and Calendars, and Shopping Carts and Receipts, etc. And in most cases, the structured form of this data, which could be externalized as an XML item or a microformat, generally isn’t. It’s trapped inside the page, relegated to a pretty rendering.

So, where’s the clipboard of the web? Where’s the user model that would enable a user to copy and paste structured information from one website to another? Where’s the user model that would enable a user to copy and paste structured information from a website to an application running on a PC or other kind of device, or vice-versa? And finally, where’s the user model that would enable a user to “wire the web”, by enabling publish-and-subscribe scenarios web-to-web, or web-to-PC? …

I’d like to extend the clipboard user model to the web.
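The silo Ozzie describes is easy to see in miniature. An event, a contact, a receipt sits in a page as 'a pretty rendering', and getting it back out as data means parsing the markup. The sketch below (a rough illustration, not any real site's markup or any shipped Live Clipboard code) pulls an hCalendar-style event out of an HTML fragment using Python's standard-library parser; the sample page and its field names are made up for the example.

```python
# Structured data trapped in a page: a made-up hCalendar-style event.
# Recovering it as data takes a parser — exactly the friction Live
# Clipboard set out to remove.
from html.parser import HTMLParser

SAMPLE_PAGE = """
<div class="vevent">
  <span class="summary">Reboot 7</span>
  <span class="dtstart">2005-06-10</span>
  <span class="location">Copenhagen</span>
</div>
"""


class MicroformatExtractor(HTMLParser):
    """Collect the text of elements whose class names mark event fields."""

    FIELDS = {"summary", "dtstart", "location"}

    def __init__(self):
        super().__init__()
        self.event = {}
        self._current = None  # the field we're currently inside, if any

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "")
        for field in self.FIELDS:
            if field in classes.split():
                self._current = field

    def handle_endtag(self, tag):
        self._current = None

    def handle_data(self, data):
        if self._current:
            self.event[self._current] = data.strip()


def extract_event(html):
    """Return the event fields found in an HTML fragment as a dict."""
    parser = MicroformatExtractor()
    parser.feed(html)
    return parser.event
```

A clipboard for the web would make this round-trip (page to data to another page or application) a user gesture rather than a scraping exercise.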

Of course, that was posted in March last year and since then everyone's been asking 'What happened to Live Clipboard?' and 'Where's Ozzie?'. We may have some answers to both these questions this year.

There's only so much partisan OS/platform war one can take. The really important question is that one about the read/write web: 'What would it take to enable users themselves to wire-the-web?'. I love the way Ozzie set that question in the context of 'the wild world of the web', mashups and all: 'mashups demonstrate how quickly a “mesh” can form when the process of wiring together components is made easy'.

Customer consciousness

After Guy Jackson, Electronic Publishing Manager at Macmillan Dictionaries, left a comment on my post about the OED/KB917422 issue, I was thinking how we've come to accept that a blog posting about a product, made in some corner of the net, can easily be found by a conscientious company — or rival — and commented on … and how in this way we have a new customer/business relationship.

This was flagged up months ago by Robert Scoble when he spoke at Reboot 7 (I wrote up something about this and there's a bit more here) and, of course, there's Robert's and Shel Israel's book, Naked Conversations — subtitle, 'How Blogs are Changing the Way Businesses Talk with Customers'.

Then, earlier today, I read Alex Barnett on 'support tagging'. Read his whole post for the background to this idea. This struck me:

Naturally, there will be those who scoff and respond to the support tagging idea along the lines of "Why? Customers should come to our support site, and open a ticket there". And that's how it's done today - make your customers come to you.

But why not reverse this completely? In one sense, this already happens today: customer conscious companies are trawling the RSS search engines and blogs looking for customer feedback / gripes / issues and post comments on those blogs (or post a blog and pingback). This is how these companies win the hearts, minds and loyalty of their customer. It's amazing customer service - a true differentiator. 

By providing a support tag, it could allow for further structuring around this 'listening to the blogs customer support' approach.

I'd given my post a number of specific Technorati tags, including KB924867 (the hotfix MS has now issued). I noticed a Technorati search was run for http://technorati.com/tags/KB924867 yesterday morning — and there's only the one post tagged with KB924867. I don't know if this was how Guy found my post, but one reason I tagged it like this is that I know the KB917422 issue will have affected a lot of users out there — and this tag might be a way to spread the word that a hotfix has arrived.
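In mechanical terms, the 'listening to the blogs' approach Alex describes amounts to watching a feed of tagged posts and picking out the items carrying your support tag. The sketch below is a minimal illustration of that idea, not any real Technorati API: the RSS fragment, URLs and titles are invented, and the tag is the KB924867 one from this post.

```python
# A company watching for its support tag: filter a (made-up) RSS feed
# down to the items whose <category> matches the tag.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <item>
    <title>OED CD-ROM broken by KB917422</title>
    <link>http://example.org/oed-hotfix</link>
    <category>KB924867</category>
  </item>
  <item>
    <title>Unrelated post</title>
    <link>http://example.org/other</link>
    <category>blogging_community</category>
  </item>
</channel></rss>"""


def posts_with_tag(feed_xml, tag):
    """Return (title, link) pairs for feed items carrying the given tag."""
    root = ET.fromstring(feed_xml)
    hits = []
    for item in root.iter("item"):
        tags = {c.text for c in item.findall("category")}
        if tag in tags:
            hits.append((item.findtext("title"), item.findtext("link")))
    return hits
```

A support team running something like `posts_with_tag(feed, "KB924867")` on a tag-search feed would surface exactly the posts asking for its attention — the come-to-me web from the company's side.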

Whatever the route, Macmillan has got itself into my consciousness because Guy bothered to find and then comment on my post. And because he did that, I discovered that any student with a copy of the Macmillan English Dictionary has free access to the Macmillan English Dictionary Online. When a student next asks my advice about buying a dictionary, I'm likely to pass this news on, too.

I really like Alex's idea of extending this much further: a company issuing a support tag for a product would get my attention if they picked up on my blog post (tagged with the support tag), commented there and acted to help. The come-to-me web, indeed.

Paradigm shifts

I like the discipline of the del.icio.us 255-character limit for the excerpt from, or comment on, the item you're bookmarking there. But sometimes there's just too much that's good to be contained or summed up like that.

The amazing miracle of YouTube versus The Times, as everyone reading this blog surely already knows, is that YouTube is a platform where cream--user-uploaded videos--rises to the top, to be savored by the world, while The New York Times Company is an information organization that pays thousands of journalists, designers, business people and administrative types millions of dollars to create expert content that tells people what to think and what to like. And honey, that day is passing fast.

The point here--just to kick it a little harder--is that this is yet more evidence of how social media platforms are shifting the paradigms in a profound way. Not only does YouTube have a mass-market, video-on-the-web appeal that the more high-brow Times will never have (Is YouTube the next MTV?); furthermore, it's a platform that gives Google the opportunity to morph into a multimedia MySpace ecosystem, way beyond what Orkut could ever be--and most cruelly, it's something that teens and twenty-somethings care about, which may no longer be the case for The New York Times.

So Google bought YouTube, not a media company, and the fact that it doesn't even surprise anyone anymore and that it makes perfect sense, that, dudes, is a paradigm shift.


… consumerization will be the most significant trend to have an impact on IT over the next 10 years. … "Consumers are rapidly creating personal IT architectures capable of running corporate-style IT architectures," he [Gartner's director of global research, Peter Sondergaard] said. "They have faster processors, more storage and more bandwidth."

He advised corporate IT executives to adapt to the changes and prepare for what he called "digital natives," or people so fully immersed in digital culture that they are unconcerned about the effects of their technology choices on the organizations that employ them. … 

In a paper prepared by Gene Phifer, David Mitchell Smith and Ray Valdes, Gartner researchers noted that corporate IT departments historically have lagged behind popular technology waves, such as the arrival of graphical user interfaces and the Internet in business. They argued that the biggest impacts of Web 2.0 within enterprises are collaboration technologies--notably blogs, wikis and social networking sites--and programmable Web sites that allow business users to create mashup applications. … "Our core hypothesis is that an agility-oriented, bifurcated strategy--one reliant on top-down control and management, the other dependent on bottom-up, free-market style selection--will ultimately let IT organizations play to their strengths while affording their enterprises maximum opportunity as well," the Gartner report said.
