March 2007

del.icio.us survey

The del.icio.us team have blogged today about a survey they're inviting users to take:

As mentioned recently in this blog, we’re working on a lot of improvements for del.icio.us. We’ve just posted a survey as a way to get feedback and opinions from you, the users. We want to make sure that we’re focusing on the right things, fixing what’s busted and not busting the stuff that already works. It’s not very long as surveys go, and hopefully it’s not too über-corporate. Thanks for taking the time to tell us what you think. We’ll be posting a summary of the results a few weeks after the survey closes.

I value del.icio.us greatly and use it … a lot (for which I've had my leg pulled more than once). I agree with Alan Dean who asked (comment on the blog posting cited above) that the survey be amended somewhat ('For example, on the first question I would like to select a "Research" or "Reference" option, but none of the available choices are really descriptive of this, the major way that I use your site.'), but whether this is done or not I hope lots of users of del.icio.us will take the five minutes required to complete the survey.

Right at the end you do get to an option where you can submit "other" points. These were mine:

1) 'Search' definitely needs to be improved — it needs to be quicker (searching within my own bookmarks for a term is often very slow and frequently results in a blank page), and pages generally need to load faster.

2) I'd also be interested if you made it possible to save a copy of a web page (as ma.gnolia does).

3) My data, and that of my network, really ought to be integrated (as and when I want it to be) with the web searches I perform through (eg) Google or Yahoo!. This is a great and obvious "gap".

4) The social aspect of del.icio.us needs to continue to evolve. For example, I'd welcome something like a network/open notebook stream where I could be scribbling brief notes like 'Anyone else puzzled by this?', etc. I find the notes we are entering often strain towards doing this, but they're a bit like a conversation in a soundproof room and (even then) you don't generally "hear" the replies, assuming there are any.

I'm sure that in a day or two I'll realise I would like to finesse these comments somewhat (they've received minimal touching up as posted here), but even Wiltshire has its fast currents and I'm currently speeding along … 

Above all, I want to shout from the rooftops because we live in a time when knowledge can be readily shared — and that's exciting. Getting excited about and around ideas and knowledge ought to be a fundamental trait of our culture, and I find my network on del.icio.us indispensable as a way of unearthing new, challenging and unexpected ideas. Steven Johnson wrote:

Thanks to the connective nature of hypertext, and the blogosphere's exploratory hunger for finding new stuff, the web is the greatest serendipity engine in the history of culture. It is far, far easier to sit down in front of your browser and stumble across something completely brilliant but surprising than it is walking through a library looking at the spines of books.

A great part of the appeal of del.icio.us for me is that it is a semi-guided (my network — absolutely crucial) knowledge discovery machine that again and again surprises me with new material and the interconnectedness of it all. It's a search engine that finds me things I hadn't yet thought of looking for.

How best to develop del.icio.us to let this all live even more fully? Twitter may or may not be rightly described as microblogging, but I want to microblog around knowledge. Will Yahoo! (this last week making great developments with their APIs and email) be brave enough to develop del.icio.us to allow this ecology to grow accordingly? (I've long given up on the del.icio.us bookmarks extension for Firefox — and have been amazed by the continuing flow of emails about the various instantiations of this add-on and the problems surrounding these.) del.icio.us is a fantastic innovation, Yahoo! — look after it!

The culture of generosity

In a short interview on CNET News, marking the second anniversary of Yahoo!'s acquisition of Flickr (is it only two years?), Caterina Fake talks about attention economies, the internet "culture of generosity" and the creation of online communities ('It really is kind of like building a civilization. You need to have a culture and mores and a sense of this is "what people do here." If people greet each other and are helpful, and stomp on trolls immediately and keep the trash in the trash cans, that becomes what the culture of the place is. And that scales.'), and then the interview winds up with this:

Jeremy Neumann (from the audience) asks, "At South by Southwest Bruce Sterling was very down on blogs, podcasting, videos and other participatory media, comparing it to folk art which he said was really, really bad. Is it the taking part and the sharing that counts or are we raising the bar with user-generated content?"

Fake: It used to be when you wanted to hear music, you didn't go turn on the radio and listen to Christina Aguilera. You went down to the living room and grabbed cousin Joe and played the banjo.

There's nobody trying to be The Rolling Stones down there, or even Whitesnake. The "audience" for this stuff is usually friends, family, people like that. It's not meant to be judged by, ahem, Whitesnake standards. So I'd have to disagree with Bruce Sterling there. On the other hand, there are gems in all those family snapshots and MP3s of people noodling in their basements. And social networks are great ways of surfacing those really amazing things.

Interestingness on Flickr is a great way to do that. It looks at all the human activity around a photo and determines which ones are the most interesting.
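
Flickr has never published how interestingness is actually computed, but, purely to fix the idea in my own head, here is a toy sketch (in Python) of that kind of activity-based scoring. The weights and the age decay are my invention, not Flickr's:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Photo:
    title: str
    views: int
    comments: int
    favourites: int
    posted: datetime

def interestingness(photo: Photo, now: datetime) -> float:
    """Toy score: weight the social signals, then decay with age.
    The weights and the square-root decay are invented for
    illustration; Flickr's real formula is not public."""
    age_days = max((now - photo.posted).days, 1)
    signal = (0.2 * photo.views
              + 2.0 * photo.comments
              + 3.0 * photo.favourites)
    return signal / age_days ** 0.5  # older photos need more activity

now = datetime(2007, 3, 31, tzinfo=timezone.utc)
photos = [
    Photo("sunset", views=900, comments=4, favourites=2,
          posted=datetime(2007, 3, 1, tzinfo=timezone.utc)),
    Photo("banjo night", views=150, comments=30, favourites=25,
          posted=datetime(2007, 3, 20, tzinfo=timezone.utc)),
]
for p in sorted(photos, key=lambda p: interestingness(p, now), reverse=True):
    print(f"{p.title}: {interestingness(p, now):.1f}")
```

The point the toy makes is Fake's point: conversation around a photo (comments, favourites) counts for more than raw views.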

Scepticism about the value of the write part of the read/write web is persistent amongst those not yet using the web to write. No surprise there (but how quickly we can forget this)! I'm not sure how convincing they'd find it, but this interview does offer another accessible piece to which I can point them.

There's no substitute for participation, though — for reading and writing online.

How did we get here? Where are we going?

Earlier this week, I gave a talk/presentation to our Heads of Departments (Faculties) on ICT and schools. I built upon some previous talks I have given on this theme (anyone devoted to these PowerPoints will find some of the slides familiar), but took things further, looking more closely at what underlies some Web 2.0 sites and at the main areas where schools now need to be focusing their thoughts and efforts. The talk was given on the Tuesday evening; in the morning I'd heard Demos give a presentation at the ISC conference (see previous post), and I adapted some of their conclusions in my summing up at the end of my talk.

My slides are available here (pdf) — from S3, itself a sign of the times. We're experimenting with student podcasting, running off S3. I am very interested in the possibilities for rapid enactment of an idea that S3 gives us, to say nothing of the prospect of a near-future where schools contract out at least some of their server needs.
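
'Rapid enactment' is no exaggeration: publishing a file from S3 is a few lines of scripting. A minimal sketch of the upload step, using the boto3 SDK for illustration; the bucket and file names are placeholders, it assumes AWS credentials are already configured in the environment, and newer buckets may need their ACL settings relaxed before 'public-read' works:

```python
# Publish a podcast episode from S3 (sketch; names are placeholders).
import boto3

s3 = boto3.client("s3")
bucket = "school-podcasts"          # hypothetical bucket
key = "episodes/2007-03-week1.mp3"  # hypothetical object key

# Upload the file and mark it world-readable so students can stream it.
s3.upload_file(
    "week1.mp3", bucket, key,
    ExtraArgs={"ContentType": "audio/mpeg", "ACL": "public-read"},
)
print(f"https://{bucket}.s3.amazonaws.com/{key}")
```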

Many slides have active links, by the way: do click through and follow the threads. The fundamental metaphor I keep coming back to in all this work is still that of the conversation. I quote David Weinberger in one of the slides: 'The Internet as a technology teaches us one value more deeply than any other: the joy of being connected'. To be connected is to be in conversation.


The OU points the way

It was a pleasure yesterday morning to chair the ISC 07 Conference session on Distance Learning. Professor Brenda Gourley was speaking. Brenda is the OU's fourth Vice-Chancellor. Her home page is here and, as it shows, she is keenly interested in all things e-learning:

Recognising the particular challenges of the twenty-first Century, Prof. Gourley is a sought-after speaker on a wide range of issues that affect higher education institutions and society more widely. Some recent contributions to conferences and seminars include 'Will eLearning Deliver on its Promise?', 'The Digital Divide as a Development Issue', and 'Moving Open and Distance Learning to Centre Stage'.

Her speech will be online shortly (here) — but she kindly allowed me to quote from the draft (which in the event deviated little from what she delivered, and not in substance). The thrust of her speech was (no surprise) that this is a time of the greatest challenge for higher education: 'it is no longer useful to classify institutions into distance learning and otherwise. We are witnessing higher education move into an entire new paradigm'; 'We cannot have all the resources we need within any one institution nor even is it desirable that we do'.

It's extraordinary to learn that in the UK 42% of students are (officially) part-time and that soon part-time students will outnumber full-time ones: 'Given the necessity of life-long learning, there are now more people over the age of 24 in some form of higher education than the under 24 year olds – and they are usually in some form of work'. Over a third of those part-time undergraduates are studying with the Open University. In this new world, university students are selecting from courses 'offered across the world, a mix of campus-based and e-learning courses that transcend national boundaries - and certainly transcend university boundaries. Internationalisation has taken on a whole new meaning – and 'transnational' education is a new reality'.

I find it impossible to doubt that many aspects of what Brenda spoke about describe the changing world of secondary education, too. Indeed, she said as much: that young people use technology in ways their educators are not used to; that this creates great power, changing the world faster than our political masters can often keep up; that our world is characterised more and more by its collaborative nature — teams working to tackle problems and to share knowledge. Teachers at all levels simply have to respond to these challenges and prepare the young for this world:

What we see on the Web are people from all over the world creating communities of interest (some of them very sophisticated indeed) on a whole range of subject matter – and we need to ask ourselves how we harness this energy and use the material. …

The main task of an education system focused on the young is to prepare them for a world in which there is less order, less predictability and more chaos, where old solutions are running up against very complex challenges. This is a world which is more individualised at one level but also more community (or network) focussed at another. It is a world where collaboration, partnerships and alliances are important – not necessarily (and indeed probably not) for the long term. Increasingly such alliances come together around a particular project or objective and dissolve when the objective is achieved.

One thing Brenda said towards the end struck me very much: whilst "sculpting" of content is most important (along with the attention paid to creating a community of users), course content itself may come to play a different role in the open, collaborative world of e-learning now upon us. What is the point of re-creating the same bits of information if that content already exists in suitable form elsewhere?

Very relevant to this is The Open Educational Resources Movement (OER), a UNESCO initiative. (I'm ashamed I'd not heard of it before.) From the OECD site:

By “open educational resources” we understand:

• Open courseware and content;
• Open software tools;
• Open material for e-learning capacity building of faculty staff;
• Repositories of learning objects;
• Free educational courses.

And there's more elsewhere on the OECD site:

Although learning resources are often considered as key intellectual property in a competitive higher education world, more and more institutions and individuals are sharing their digital learning resources over the Internet openly and for free, as Open Educational Resources. The OECD’s OER project asks why this is happening, who is involved and what the most important implications are of this development.

The project will analyse and map the scale and scope of initiatives regarding "open educational resources" (OER) in terms of their purpose, content, and funding. It will look into different sustainable cost/benefits models and intellectual property right issues linked to OER initiatives. Furthermore we address questions regarding incentives and barriers for universities and faculty staff to deliver their material to OER initiatives and how to improve access and usefulness for users of OER initiatives.

Back to the ISC session and Brenda Gourley's talk. Simply put, hers was the most inspiring talk about education I've heard in a long time. I recommend reading it when it goes online (tomorrow, I think). It was so good to hear her acknowledge in her closing remarks that even the OU has at times found the challenges discomfiting, but that the excitement of what's unfolding is strongly felt back in Milton Keynes — and in all the other places to which their reach extends.

I'm not taking an OU course, but I kind of feel I am … a course in managing and delivering education for the digital age. I have been following the work of Marc Eisenstadt and Tony Hirst for a while now — both distinguished members of the OU educational technology team. Brenda mentioned yesterday that another colleague of hers, Martin Weller, was publishing today his Virtual Learning Environments: Using, Choosing and Developing Your VLE. I'm also following, closely (as I'm sure are many others), how the OU gets on with Moodle. Given the OU's commitment both to educational technology and to supporting students in its academic programmes, I suspect its development and deployment of Moodle will have a lot to tell us at secondary level as we consider the best ways to develop our VLEs. (With the OU handling around 250,000 "transactions" online each day, it certainly also understands issues of scaling.)

She quoted John Naughton (a distinguished OU academic), himself quoting someone else, that reading on the internet is like drinking from a fire hose (read the whole post — it's well worth it). Life on the net never ceases to astonish me and in that spirit of wonder here are some OU places to go visit if you don't know them already: LearningSpace (OpenLearn — which scholars create); LabSpace (an 'experimental zone' for users to use … which then feeds back into LearningSpace); Children's Research Centre (new to me — looks very interesting); The Institute of Educational Technology; Knowledge Media Institute.

Fame and Glory

At the weekend, the Observer Music Monthly published a column about YouTube videos of pop stars throwing tantrums.

This is Elton saying he makes music, not films, whilst his team and the film people stand around and endure the paddy. (There's an Olbermann special on another Elton flare-up, here.) Then, also behaving badly: Grace Jones; yet another Liam Gallagher moment (apparently in February this year and, this time, in front of his five-year-old); Preston (from Ordinary Boys) walking off Never Mind the Buzzcocks (Simon Amstell is merciless — who's behaving badly?); and Bjork — 'my motherly instincts took over'.

But in a different league altogether is The Bee Gees meet Clive Anderson (1996). I've heard about this on and off over the years, but this is the first time I've seen it — and to see is to be amazed by Barry Gibb's reaction to Anderson's wind-ups. The best moment is the one the Observer picked out:

Barry Gibb: 'We used to be called Les Tossers.'

Anderson: 'You'll always be Les Tossers to me.'

Some of this might come in useful when we get to discussing (in ICT) online posting, privacy and forgetting. I'd want to work this in with danah's reflections on narcissism and "MySpace". Obviously this is germane:

One of the reasons that celebrities go batty is that fame feeds into their narcissism, further heightening their sense of self-worth as more and more people tell them that they're all that. They never see criticism, their narcissism is never called into check.

danah's focus, though, is designedly elsewhere:

What i do know is that MySpace provides a platform for people to seek attention. It does not inherently provide attention and this is why even if people wanted 90M viewers to their blog, they're likely to only get 6. MySpace may help some people feel the rush of attention, but it does not create the desire for attention. The desire for attention runs much deeper and has more to do with how we as a society value people than with what technology we provide them.

I am most certainly worried about the level of narcissism that exists today. I am worried by how we feed our children meritocratic myths and dreams of being anyone just so that current powers can maintain their supremacy at a direct cost to those who are supplying the dreams. I am worried that our "solutions" to the burst bubble are physically, psychologically, and culturally devastating, filled with hate and toxic waste. I am worried that Paris Hilton is a more meaningful role model to most American girls than Mother Theresa ever was. But i am not inherently worried about social network technology or video cameras or magazines. I'm worried by how society leverages different media to perpetuate disturbing ideals and prey on people's desire for freedom and attention. Eliminating MySpace will not stop the narcissistic crisis that we're facing; it will simply allow us to play ostrich as we continue to damage our children with unrealistic views of the world.

Once again, it's not the technology that's the problem, but as we "teach" the technology we can expect these social and ethical and psychological issues to make themselves known. Increasingly, I think the ICT teacher, and the teacher using ICT, is called upon (almost first and foremost) to be pastorally skilful. We haven't been focusing on this in ICT, instead looking nearly always at the technological skills. We need both, but I think the pastoral is going to prove crucial.

Tasking every which way

Via Martin's feed, this article: What's Next: Taskus Interruptus. At first, it seems just another article about how multitasking is ruining concentration and productivity. But then:

Does multitasking really impair our ability to get our jobs done? The answer for most workers is, I think, no. But it's not because multitasking doesn't impair your ability to perform tasks. It does. It's because we're now in a complex, fast-response world in which getting a complete task done in the least amount of time is no longer the priority. Instead, today's top priority is to immediately address whatever fraction of a vast, malleable range of tasks has become most critical--a just-in-time, networked workstyle. Focusing on one task to the exclusion of others isn't even an option anymore. When experts examine the detrimental effects of multitasking on productivity, they're asking the wrong question. We don't need to wonder about the ways in which multitasking and interruption impair our ability to speed through a task. We need to appreciate the ways in which multitasking and interruption have become essential to meeting the increasingly nonlinear demands of our jobs.

That means it's essential not only to put up with but also to embrace multitasking. Fifteen years ago, it was almost impossible to get a fast response in midevening, or even midday, from your head of product development or the CEO of a key supplier. But today, with projects and products being zipped around the globe, chances are you know exactly how to get someone's attention at a moment's notice. And the ability to do so has a direct impact on the bottom line, says Michael McCloskey, CEO of FrontRange Solutions, a customer relationship management software and services provider in Dublin, California. "If I'm in a price negotiation with a big customer, and they've got their legal and purchasing people right there, and they want an answer to a question, I better be able to get that answer," he says. "Because I may not be able to get those people in the same room talking about my product again anytime soon." McCloskey admits that he often has to interrupt people during important tasks to do so. But he has no second thoughts. "Ninety percent of the time," he says, "it's worth it."

Meanwhile, businesses have long been moving away from the sort of stovepipe structure that allowed employees to focus on meeting the demands of a single boss or worry only about a small group of employees or customers. Today the dotted-line relationships form a dense web that extends out to customers, suppliers, and partners. In other words, forget about closing the door and crunching on that one presentation. You've got 20 other people breathing at you just as hard, and each one wants to know that you're making progress. "The way we look at getting the job done is changing," says Martin Frid-Nielsen, CEO of Soonr, a Campbell, California, company offering a service that connects cell phones to PC applications. "It's about how in touch you are and how you're engaging many other people."

I've been persuaded for a long while now that we need to look at how our students (and their teachers) are learning to be adept at handling many demands at once in a world rendered very open, through technology, to multiple channels of communication.

I was reminded of something Seth Godin wrote back in January:

I sat next to Cory at a conference today. It was like playing basketball next to Michael Jordan. Cory was looking at more than 30 screens a minute. He was bouncing from his mail to his calendar to a travel site and then back. His fingers were a blur as he processed inbound mail, visiting more than a dozen sites in the amount of time it took for my neck to cramp up. I'm very fast, but Cory is in a different league entirely. Rereading this, I can see I'm not doing it justice. I wish I had a video...

This was never a skill before. I mean, maybe if you were an air traffic controller, but for most of us, most of the time, this data overload skill and the ability to make snap judgments is not taught or rewarded.

As the world welcomes more real-time editors working hard in low-overhead organizations, I think it's going to be a skill in very high demand.

In my case, I might make a start by doing something as elementary as, AT LAST, making the effort to type more efficiently.

Looking back at FOWA 07


I go to events to learn things, to have my mindset challenged. Anil Dash

(See also Anil's The Essentials of Web 2.0 Your Event Doesn't Cover.)

Just read (yesterday), in Stanford Magazine online, The Effort Effect (a piece about Carol Dweck's work), and it connected with what Anil said last month (quotation above). Challenge is the common thread, be it new ideas and perspectives or dealing with difficulties and failure. Of course, if you're impressed with the new, challenging ideas, implementing these in your life and the life of your organisation may lead you by a very short route to difficulty, failure — and renewed effort. Challenges.

It always takes me time to digest a good conference and there's plenty to praise about FOWA07 (which I want to approach by asking what's lasted). The mp3s of all the talks are now available, by the way, and the presentations are there, too. (For Flickr 'most interesting' FOWA07 photos, go here.)

A couple of riders first. I agree with Tom when he wrote, 'It was a shame that the conference felt quite so aggressively targeted at the web-app-as-startup crowd; last year's was much more about "applications on the web", and richer for it'.

My other rider is about something I missed, the Adobe Apollo demo. When I do see these (or other) impressive demo vids I want to have this advice from Tim O'Reilly in my head (as it happens, he's talking about an Adobe Apollo demo, 'a special product preview summit today called Engage'):

Is what's easiest for the producer of content (asset reuse and the ability to create integrated experiences across platforms) really what's best for the user? Only if content developers use that power wisely. Kathy Sierra reminds us that success in the social media era is about creating "I rule" moments for users. So when I hear a software vendor talking about creating "I rule" moments for content suppliers, I worry that they're on the wrong track, unless they work to offset the natural tendency towards "efficiency" for the provider rather than great experiences for the user.

At times I feel completely saturated with 'wow' experiences that don't turn out to 'put the user first'.

And so to what's lasted. The talk was excellent and I blogged a little about it here. Presentations by Tara Hunt, Khoi Vinh and Simon Willison also stand out for me. Simon Willison has posted his slides here and went on to write Six cool things you can build with OpenID. OpenID is on all our radars now but there are issues (see, eg, O'Reilly, Tim Bray, Mike Migurski, Dare Obasanjo … well, the list goes on).
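
For anyone who hasn't looked under the bonnet yet, the relying-party side of OpenID boils down to two steps: discover the user's provider and redirect the user to it, then verify the signed response when the user comes back. A rough sketch using the JanRain python-openid library; the URLs are placeholders, and a real application would persist the session and store across the two requests:

```python
# Sketch of an OpenID relying party (consumer); URLs are placeholders.
from openid.consumer import consumer
from openid.store.memstore import MemoryStore

session = {}  # would normally be the user's web session
oidc = consumer.Consumer(session, MemoryStore())

# Step 1: the user types their identity URL; discover their provider
# and build the redirect that sends them off to authenticate.
auth_request = oidc.begin("https://alice.example.com/")
redirect_url = auth_request.redirectURL(
    realm="https://myapp.example/",
    return_to="https://myapp.example/openid/complete",
)
# (answer the login request with an HTTP 302 to redirect_url)

# Step 2: the provider redirects back; verify the signed response.
def on_return(query_params: dict) -> bool:
    response = oidc.complete(query_params,
                             "https://myapp.example/openid/complete")
    return response.status == consumer.SUCCESS
```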

Earlier, Tara Hunt posted her slides (here). Tara knows a thing or two about how to turn an audience on (recognising it in others, too), and I really enjoyed her talk, packed with lots of insight into how online communities grow and work. I'm grateful to her, too, for the link to John Coate's Cyberspace Innkeeping: Building Online Community (1992, '93, '98): 'at its essence the advice is to be kind, be interested and pay attention. Not so different than the rest of life. And that's the point. As virtual as you may want to make it, it is still reality governed by the same operating principles as the rest of life. Cyberspace doesn't live outside the rest of the universe.'

Above all, though, it's the presentation and those by Vinh and Horowitz that are still with me. I made a scatter of notes from Khoi Vinh's talk but I don't need to look at them to recall my surprise when he pointed to the NYT's innovation of permalinks for its articles — I twittered that: 'NYT hasn't publicized well (yep) its new permalink feature — remains good for "several years" after article goes b/hind paywall'.

He spoke about the new kinds of functionality consumers of the NYT are demanding (readers are approaching the NYT online with a different mindset, not just through/in a different medium) and the degree of uncontrol the NYT has to learn to live with, and I remember clearly his spelling out that most users are intermediates and beginners are more easily offended than experts — these points I can apply directly to the experience of new ICT developments in my school community. The testing of usability must not be done by executives, and what must be tested is usability and not the preparedness of users to accept something. That's also very pertinent to my work.

The most interesting thing he talked about was the Twitter interface and the difference between this and Twitterrific. This goes back to a post on his blog made earlier in February, Writing and Sizing Twitter — a post which has now been del.icio.us'd some 40 times. It was an electric moment for this conference-goer, one of those times when something is being dissected before your eyes with intelligence and in such a way as to illuminate design and function and user-experience. I wish I had a Mac and could try Twitterrific for myself …

Finally, I also really enjoyed Bradley Horowitz's talk. 'VP ADD, Yahoo!' — what a job title (VP Advanced Development Division). He came across very well — an interesting man — and spoke about 'Social Interaction: What the Future Holds'. I twittered friends: 'Listening to Bradley Horowitz: from a hierarchy of creator(s)/synthesisers/consumers (1:10:100) towards a web world of participation (100).' He showed his pyramid of creators, synthesisers and consumers, followed not by three concentric circles (which, when I'd seen them last month, at the OII talk Yahoo!'s Ricardo Baeza-Yates gave, had had me a little puzzled — if I didn't dream it all, the circles were presumably to be understood as temporary enclosures, permeable to each other and changing places?), but by one yellow circle labelled '100%' — 100% participating users: users becoming editors, users becoming neighbours. Yahoo! is a huge company, yet Ricardo Baeza-Yates, 'Director of Yahoo! Research Barcelona and Yahoo! Research Latin America in Santiago, Chile', talks in Oxford one day and then, the next, I'm hearing a presentation in London that overlaps very much with Ricardo's talk, albeit with a different image at its centre. That's some coordination of vision, but there seems to be a search on for Best Metaphor/Best Summative Image — something which came out also at the end of Bradley's talk, when he spoke about the move from sampling to synthesising (from orchestra-but-discrete-instruments to rich remix/quilting). What is the best metaphor to catch what's happening?

Other parts of Bradley Horowitz's talk echoed things he'd written about a year ago in the post I've already referred to, Creators, Synthesizers, and Consumers:

… social software sites don't require 100% active participation to generate great value. That being said, I'm a huge believer in removing obstacles and barriers to entry that preclude participation. … One direction we (i.e. both Yahoo and the industry) are moving is implicit creation. A great example is Yahoo! Music's LaunchCast service, an internet radio station. I am selfishly motivated to rate artists, songs and music as they stream by… the more I do this, the better the service gets at predicting what I might like. What's interesting is that the self-same radio station can be published as a public artifact. The act of consumption was itself an act of creation, no additional effort expended… I am what I play - I am the DJ (with props to Bowie.) Very cool.

I spoke a lot more about this in the Wired article. In the new paradigm of "programming" where there are a million things on at any instant, we're going to need some new and different models of directing our attention. … Everyone becomes a programmer without even trying, and that programming can be socialized, shared, distributed, etc. …

Listen to Bradley's talk, and listen out, in particular, for the explanation/exploration of interestingness and clusters on Flickr — not to mention Highway 66. (These will be familiar to anyone who's been following Flickr closely over the last few months. He wrote about Flickr and interestingness in Creators, Synthesizers, and Consumers: 'Without anyone explicitly voting, and without disrupting the natural activity on the site, Flickr surfaces fantastic content in a way that constantly delights and astounds. In this case lurkers are gently and transparently nudged toward remixers, adding value to others' content'.)

Finally, I had forgotten that Yahoo! owns MyBlogLog:

MyBlogLog turns on the lights, and invites people to look at (and dialog) with each other in addition to looking at the screen. Maybe the right analogy is a sports bar. The game is on the big screen providing the content and context. But the fun part is hooting and hollering with your mates, heckling the guys there to support the other team the next table over, etc. It's communal. It's interactive. It's participatory. It's fun.

Oh, and, inevitably, there were Pipes: the summary runs — HTML was pumped out and pumped out; RSS is mashups for the masses, but Pipes is a much richer palette. Then, his next point: the pyramid is back — a few will create the Pipes of value to the many. … (I still haven't given Pipes the time they merit.)
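
To make 'mashups for the masses' concrete: a typical Pipe fetches a few feeds, merges them, filters them and sorts the result. Here is roughly the same thing done by hand in Python with the feedparser library; the feed URLs and the filter keyword are placeholders:

```python
# A hand-rolled "Pipe": merge feeds, filter by keyword, sort newest-first.
from time import mktime

import feedparser

FEEDS = [
    "https://example.com/feed1.rss",  # placeholder feed URLs
    "https://example.org/feed2.rss",
]

entries = []
for url in FEEDS:
    entries.extend(feedparser.parse(url).entries)

# Filter: keep entries mentioning "education" in the title.
matching = [e for e in entries if "education" in e.get("title", "").lower()]

# Sort newest first, tolerating entries that carry no date.
matching.sort(
    key=lambda e: mktime(e.published_parsed) if e.get("published_parsed") else 0.0,
    reverse=True,
)

for e in matching[:10]:
    print(e.title, "-", e.get("link", ""))
```

The attraction of Pipes is that non-programmers can wire exactly this up by dragging boxes.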

For full running notes on the talk, go here. Nodalities has a write up here, and Lars, as ever, a mindmap here.

There are so many ways a conference can be challenging. A lot about Yahoo! has really come to make sense to me since hearing Horowitz and Baeza-Yates talk (and see now Caterina's post), and Horowitz and the other speakers and presentations I've mentioned here have sustained me over the last three weeks or so with what they said. I've suggested above a number of points at which what was said at FOWA bears directly on my experience as a very new Director of educational ICT. Yesterday, re-writing a part of our online help for our intranet, I was consciously changing language to try to speak to my non-geek colleagues (the great majority) and to suggest a more collaborative, dynamic purpose to our shared resources. That's the easy bit. I don't imagine for a moment that explaining what is meant by 'The act of consumption was itself an act of creation' ('I am what I play - I am the DJ') will be like simply walking across a ready-made bridge … There's a huge, huge chasm between FOWA and the world of the many and we, the people in this organisation, have to make the bridge together. Challenges!


Privacy

Wanting to look into 'privacy' some more I asked a few friends about the long roots of the modern idea. Where does it come from and what conceptions of privacy did previous ages entertain? Of course, this is a vast subject and the very term even as used today is complicated. The OED's definitions run (I've put the first cited date of usage in square brackets after each definition):

The state or quality of being private.
1. a. The state or condition of being withdrawn from the society of others, or from public interest; seclusion. [c 1450]
b. The state or condition of being alone, undisturbed, or free from public attention, as a matter of choice or right; freedom from interference or intrusion. Also attrib., designating that which affords a privacy of this kind. [1814]

2. a. pl. Private or retired places; private apartments; places of retreat. Now rare. [1678]
†b. A secret place, a place of concealment. Obs. [1686]

3. a. Absence or avoidance of publicity or display; a condition approaching to secrecy or concealment. [1598]
†b. Keeping of a secret, reticence. Obs. [1736]

4. a. A private matter, a secret; pl. private or personal matters or relations. Now rare. [1591]
†b. pl. The private parts. Obs. [1656]

†5. Intimacy, confidential relations. Obs. [1638]

6. The state of being privy to some act; = PRIVITY. rare. [1719]

For the etymology, we go to private (a.) — and this immediately tells us so much:

ad. L. prīvātus withdrawn from public life, deprived of office, peculiar to oneself, private; as n. a man in private life; prop. pa. pple. of prīvāre to bereave, deprive

A friend who's a medievalist recommended A History of Private Life and a classicist friend recommended the same ('though it only starts from Rome'). I've been meaning to make a start on those volumes (five?) for some time.

Another classicist friend, a Latin specialist, came straight back with: Andrew M. Riggsby, " 'Public' and 'private' in Roman culture: the case of the cubiculum," Journal of Roman Archaeology 10 (1997), 36–56. I'm going to have to go to the Bodleian for that, but I did find Riggsby's university homepage and also this:

Riggsby describes the Roman conception of privacy as “not so much a right as a mandate: ‘If you are going to behave that way, you must do it in a certain restricted area. Keep it out of our view.’” … Keeping Riggsby’s interpretation of the Roman “right to privacy” in mind, the obligation to sequester morally questionable behavior outweighs any right to unmonitored freedom of action within the confines of “private space.” … Citing the anti-democratic tendencies of Roman society, Riggsby points out that, “The state does not permit a protected private sphere to go unwatched and allow it to become a potential source of disruption.” … According to Riggsby, “[Roman] Aristocratic moral tradition dictated that any act…is subject to the moral evaluation of the community. In one sense, then, there was no generally accepted norm of privacy, no legitimate moral claim to freedom from judgment of certain kinds of activity or activity within certain social or physical regions.”

Nearer our own times, there's masses of material, but I liked this for its broad sweep — suggestive as a springboard to more reading and research, and for its focus on how space described possibilities for privacy:

English 792X: Seminar: The Invention of Privacy in the Renaissance    Prof. Elsky    T 6:20-8:00
The premise of this course is that the modern imagination of everyday life began in the Renaissance. This interdisciplinary course examines how a social and intellectual revolution led to the invention of privacy as a cherished and sometimes feared value. We will look at new methods of analyzing literature based on an understanding of material culture, particularly the everyday life of the times. Our starting point will be a look at the way people lived in their homes and houses, and the way people's living spaces were completely redesigned in this period to include a variety of private withdrawing rooms. We will consider examples of new spaces designed for men, women, and families, and how these spaces affected family relations and relations between men and women. We will then turn to an exploration of how these new spatial structures actually altered the structure of the imagination as reflected in the major writers of the period. Literary works will include lyric, drama, and prose, as well as personal diaries written by both men and women. We will emphasize how the great writers of the period used the settings of privacy to reveal deep personal and spiritual fulfillment, on the one hand, but also the threat of social subversion, betrayal, illicitness, revenge, and murder.


Erasing that memory

When Eternal Sunshine of the Spotless Mind came out I was keen to go and see it — and I wasn't disappointed. I see I read the NYT review in April 2004 and then, when the DVD came out, blogged about it again (2006) and linked (via Mind Hacks) to Eternal Sunshine of the Spotless Mind and the mythical memory videotape.

Back in March 2004, I'd read Steve Johnson's review of the film in Slate, part of which ran:

Eternal Sunshine of the Spotless Mind is remarkably in sync with modern neuroscience, but in one respect the film put its emphasis in the wrong place. To be fair, it's a failing shared with a host of recent films about memory loss: Memento, 50 First Dates, Paycheck. Selective erasure of memories may not be a feasible procedure in the near future, but cosmetic memory enhancement is likely to be a reality in the next 10 years, just as targeted mood enhancers like Prozac have become commonplace over the past 10. You won't be able to sharpen your memory of a single person, but you may well be able to take a pill that will increase your general faculties of recollection. This is the ultimate irony of Eternal Sunshine and films like it. While the culture frets over the perils of high-tech erasure, we should really be worrying about the opposite: what will happen when we remember too much.

And then along comes this:

A single, specific memory has been wiped from the brains of rats, leaving other recollections intact. … The brain secures memories by transferring them from short-term to long-term storage, through a process called reconsolidation. It has been shown before that this process can be interrupted with drugs. But Joseph LeDoux of the Center for Neural Science at New York University and his colleagues wanted to know how specific this interference was: could the transfer of one specific memory be meddled with without affecting others? "Our concern was: would you do something really massive to their memory network?" says LeDoux.

To find out, they trained rats to fear two different musical tones, by playing them at the same time as giving the rats an electric shock. Then, they gave half the rats a drug known to cause limited amnesia (U0126, which is not approved for use in people), and reminded all the animals, half of which were still under the influence of the drug, of one of their fearful memories by replaying just one of the tones. When they tested the rats with both tones a day later, untreated animals were still fearful of both sounds, as if they expected a shock. But those treated with the drug were no longer afraid of the tone they had been reminded of under treatment. The process of re-arousing the rats' memory of being shocked with the one tone while they were drugged had wiped out that memory completely, while leaving their memory of the second tone intact.

LeDoux's team also confirms the idea that a part of the brain called the amygdala is central to this process - communication between neurons in this part of the brain usually increases when a fearful memory forms, but it decreases in the treated rats. This shows that the fearful memory is actually deleted, rather than simply breaking the link between the memory and a fearful response.

Greg Quirk, a neurophysiologist from the Ponce School of Medicine in Puerto Rico, thinks that psychiatrists working to treat patients with conditions such as PTSD [post-traumatic stress disorder] will be encouraged by the step forward. "These drugs would be adjuncts to therapy," he says. "This is the future of psychiatry - neuroscience will provide tools to help it become more effective."

Forgetting, again

A post by Abe Burmeister set me thinking, again, about forgetting.

A friend said yesterday, 'After all, when we were young, at some point, we all did something, whatever it was — ran naked down some street, something …'. A photo taken then meant that we were caught forever, always running naked down that street, but it might have disappeared for much of its life, gathering dust in some drawer. Now that photo makes it (straight) to the web and to a kind of permanence and presence (even ubiquity) never before possible. The years pass, but the (by now distributed) photo doesn't.

Back in 2003, Fabio Sergio wrote about how,

… with everyone apparently fascinated with ways to remember I find myself toying with the idea of "technologies for forgetting" … All in all we are facing a future strung tight between the ideal, pacific world of the Memex, where man will be given "access to and command over the inherited knowledge of the ages", and one where Lenny Nero will feel at home, characterized by our collective inability to let go of our past.

I keep hoping (and working) for the first scenario to become our future, but recognize it will require active involvement from everyone, driven by ample awareness of what's at stake.

Over at Abstract Dynamics yesterday, Abe was saying something similar (in a piece about Gmail and Google's goal of "organizing all the world's information"):

Some information is meant to disappear, sometimes for the better, sometimes for the worse. Google it seems is not willing to make that distinction, although ironically they more than any other entity have the power to make things disappear. …

'Some information is meant to disappear', or be mediated. The memory of that time when as a kid you ran naked down some street can linger on in the telling, to be recalled years later, embellished and without its sting, a source of amusement, leg-pulling and amicable, entertaining embarrassment — your children delighted both at your discomfort and at discovering that once you were just like them. But a stark photo on the web, that's copied and posted again and again, sent to the senior partner of your new firm the day you're about to start working there, published in a newspaper years later …

Fabio imagines an angry argument between Mr A and Mr B, and imagines it twice — unfilmed and filmed. In the first case,

After a few days they hook up again, matters having cooled off and all, and they talk about the incident, re-living the discussion while trying to clear things up. The inherent fuzziness of their recollection helps in dumbing sharp edges down, as we have been proven to remember positive things better and negative things less clearly, and in the end they agree on a common explanation of the argument, thus creating the possibility for their relationship to evolve around the event. What is important here, though, is that what actually happened matters as much as what they mutually agreed happened. The final experience, mediated through their second conversation, has the opportunity to change from negative to positive, leaving clarification in place of contrast. All's well that ends well, right?

With filmed evidence of what actually happened,

… there will be simply less room to maneuver for both of them, less room to mediate experience into memory. Due to the timelessness quality of the digitally-produced artifacts, which potentially shine as new forever after they've been first created, Mr. A's descendants will still be able to hear (and judge) Mr. B's words and attitude. Take this one social magnitude level higher and what you get is a society unable to let go of its past's tiniest details.

Forgetting strikes me as something we need to pay a lot more attention to as we go forward with digital technology. It crops up in surprisingly different contexts (IT departments should check out danah boyd's post about teenagers and passwords — 'Technology is a bit too obsessed with remembering; there's a lot of value in forgetting').

And now I remember it, Anne Galloway wrote back in 2003 about forgetting:

We need to forget certain things to survive and stay together. What will happen if everything is tracked and recorded. How will we be able to forget? Will the owners and administrators of the data allow us to forget? For example, we have social and cultural practices (expectations and norms) in place that accommodate comments MADE IN PASSING ... what if certain comments are not allowed to pass?

And also this from 2004, on the Forgetting Machine:

So I was reminded of my Forgetting Machine. And that I am trying to build something that reminds us that not all things can or should be remembered. A tricky task, for sure! Part of this involves the creative corruption of information - along the lines of bricolage or remixing - as well as the selective and wholesale deletion of information.
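
Anne's Forgetting Machine set me wondering what such a thing might look like in code. What follows is entirely my own toy sketch of her idea, not her design, and the thresholds are arbitrary: records that aren't deliberately recalled first get 'creatively corrupted', then deleted.

```python
# A toy "forgetting machine": neglected memories blur, then vanish.
import random
import time

class ForgettingStore:
    def __init__(self, blur_after=60.0, forget_after=120.0):
        self.blur_after = blur_after      # seconds until corruption starts
        self.forget_after = forget_after  # seconds until deletion
        self._items = {}                  # key -> (text, last_recalled)

    def remember(self, key, text):
        self._items[key] = (text, time.time())

    def recall(self, key):
        """Recalling refreshes a memory; neglected ones blur, then go."""
        if key not in self._items:
            return None
        text, last = self._items[key]
        age = time.time() - last
        if age > self.forget_after:
            del self._items[key]          # wholesale deletion
            return None
        if age > self.blur_after:
            text = self._blur(text)       # creative corruption
        self._items[key] = (text, time.time())
        return text

    @staticmethod
    def _blur(text, loss=0.3):
        # Replace a random ~30% of characters, bricolage-style.
        return "".join("." if random.random() < loss else ch for ch in text)

store = ForgettingStore(blur_after=1.0, forget_after=3.0)
store.remember("1987", "ran naked down the street")
time.sleep(2)
print(store.recall("1987"))  # already fuzzy, e.g. 'r.n nak.d d.wn ...'
```

Each recall re-saves the blurred text, so the corruption compounds; that seems true to how retelling works on memory.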

Anne's paper (2006, I think), 'Collective remembering and the importance of forgetting: a critical design challenge', is available here (pdf). From the Abstract:

Memories are understood as relations of power through which we, as individuals and groups, actively negotiate and decide what can be recollected and what can be forgotten. And without being able to decide what we can remember and forget, we are effectively left without hope of becoming different people or creating different worlds.

That's absolutely my concern for the teenagers posting photos and stories about themselves and each other. I want for them (as I want for my own children) the possibility of their becoming different people, to have the chance to let experience grow into memory and to be allowed to let go, to forget.

Anne has a fine phrase in her paper, 'ubiquitous machines of merciless memory' — 'there is such a thing as too much memory … we need to forget in order to live'. Fabio Sergio asks: 'Are we heading towards an über-politically correct world, where we'll be forced to always ponder all of our words for fear of getting quoted 20 years from now … a future devoid of the room for doubt?'

This, then, is something we also need to be talking about in ICT: forgetting and remembering. I commend Anne's paper very warmly. It asks wise questions — 'What does it really mean if the memories held by our machines never change or get forgotten?' — and remembers that forgetting can be 'a kind of affirmation rather than … a denial. … the value of forgetting is its ability to interrupt time or escape temporal continuity, and thus (re)imagine human experience'. Her paper challenges designers to remember all this, too, and to design accordingly and wisely.