Semantic Web

Life, the web — all a tangle

The interview Tim Berners-Lee gave last year (IBM developerWorks) was widely reported. I blogged it, Web 2.0: 'what the Web was supposed to be all along', in August 2006, shortly after it was posted. What most struck me about it had been expressed pithily by Sir Tim on an earlier occasion — the MIT Technology Review Emerging Technologies conference in 2005, as reported by Andy Carvin in Tim Berners-Lee: Weaving a Semantic Web:

The original thing I wanted to do was to make it a collaborative medium, a place where we (could) all meet and read and write.

At the MIT conference, Sir Tim talked about Marc Andreessen and the emergence of a commercial web browser. In the IBM developerWorks interview he said of his web browser,

… the original World Wide Web browser of course was also an editor. … I really wanted it to be a collaborative authoring tool. And for some reason it didn't really take off that way.  And we could discuss for ages why it didn't. … I've always felt frustrated that most people don't...didn't have write access.

Just a couple of weeks ago, I came across a 1997 Time magazine piece about Tim Berners-Lee by Robert Wright, The Man Who Invented The Web:

Berners-Lee considers the Web an example of how early, random forces are amplified through time. "It was an accident of fate that all the first [commercially successful] programs were browsers and not editors," he says. To see how different things might have been, you have to watch him gleefully wield his original browser--a browser and editor--at his desk. He's working on one document and--flash--in a few user-friendly keystrokes, it is linked to another document. One document can be on his computer "desktop"--for his eyes only--another can be accessible to his colleagues or his family, and another can be public. A seamless neural connection between his brain and the social brain. … he is grateful that Andreessen co-authored a user-friendly browser and thus brought the Web to the public, even if in non-ideal form. Yet it can't have been easy watching Andreessen become the darling of the media after writing a third-generation browser that lacked basic editing capabilities.

Now the web is 'finally starting' to follow 'the technological lines he envisioned (… as software evolves)':

Berners-Lee, standing at a blackboard, draws a graph, as he's prone to do. It arrays social groups by size. Families, workplace groups, schools, towns, companies, the nation, the planet. The Web could in theory make things work smoothly at all of these levels, as well as between them. That, indeed, was the original idea--an organic expanse of collaboration. … "At the end of the day, it's up to us: how we actually react, and how we teach our children, and the values we instill." He points back to the graph. "I believe we and our children should be active at all points along this."

So, a fundamental deviation from Tim Berners-Lee's vision for the web occurred in the form taken by popular, commercially viable web browsers. In Dave Winer's view, this early deviation was heavily reinforced by Microsoft:

Since the re-rollout of Office in 1996, it's been really clear why Microsoft was so hell-bent at first owning and then suffocating the web browser, along with the web. … Because for them, writing was not something that would be done in a web browser, if they improved their browser as a writing tool, that would be the end of Word, and with it, a big reason for using Office. … If instead, Microsoft had embraced the web, and with it the shift in their product line and economics, in 1995, we'd have a much richer writing environment today. Blogging would have happened sooner, in a bigger way. It's hard to imagine how much the sins of Microsoft cost all of us.

What a tangled thing technology is (Berners-Lee — 'The Web is a tangle, your life is a tangle – get used to it'). I hope very much that Ted Nelson has brought Geeks Bearing Gifts nearer to publication. Meanwhile, with the Time 1997 piece a little bit more of the road map of the web's evolution became clearer to me. The read-only nature of the successful web browsers that came after Sir Tim's explains a great deal about how many an adult of a certain age perceives the web. I think of John Naughton's sketch of how today's 22 year-old conceives of the web, and of Andrew McAfee's comment,

Evidence is mounting that younger people don’t think of the Internet as a collection of content that other people produce for them to consume. Instead, they think about it as a dynamic, emergent, and peer-produced repository to which they’re eager to contribute.

And I think back to Bradley Horowitz's talk at the London March 2007 FOWA meeting, which I wrote about here — and Twittered at the time: 'from a hierarchy of creator(s)/synthesisers/consumers (1:10:100) towards a web world of participation (100)'.

If the history of the web browser had itself been different, would we have suffered the misalignment of perceptions about the essentially social, creative nature of the web ('I believe we and our children should be active at all points along this') that often exists now between today's different generations of users?

Well, and in any case, 'The Web is no longer the future of computing, computing is now about the Web' (Dare Obasanjo).


Footnote. Two other things from the Time piece (I pass them on as given there): ' … contrary to the mythology surrounding Netscape, it was he [Berners-Lee], not Andreessen, who wrote the first "graphical user interface" Web browser. (Nor was Andreessen's browser the first to feature pictures; but it was the first to put pictures and text in the same window, a key innovation.)'

Wikipedia's timeline of web browsers is available here.

Google and Jaiku: living the social network

I found Chris Messina's post, Theories about Google’s acquisition of Jaiku — and two passages really caught my attention:

In the future, you will buy a cellphone-like device. It will have a connection to the internet, no matter what. And it’ll probably be powered over the air. The device will be tradeable with your friends and will retain no solid-state memory. You literally could pick up a device on a park bench, login with your OpenID (IP-routed people, right?) from any number of service providers (though the best ones will be provided by the credit card companies). Your user data will live in the cloud and be delivered in bursts via myriad APIs strung together and then authorized with OAuth to accomplish specific tasks as they manifest. If you want to make a phone call, you call up the function on the touch screen and it’s all web-based, and looks and behaves natively. Your address book lives in Google-land on some server, and not in the phone. You start typing someone’s name and not only does it pull the latest photos of the first five to ten people it matches, but it does so in a distributed fashion, plucking the data from hcards across the web, grabbing both the most up-to-date contact information, the person’s hcalendar availability and their presence. It’s basically an IM-style buddy list for presence, and the data never grows old and never goes stale. Instead of just seeing someone’s inert photo when you bring up their record in your address book, you see all manner of social and presence data. Hell, you might even get a picture of their current location. This is the lowercase semantic web in action where the people who still hold on to figments of their privacy will become increasingly marginalized through obfuscation and increasingly invisible to the network. I don’t have an answer for this or a moral judgement on it; it’s going to happen one way or another. …

In the scheme of things, it really doesn’t have anything to do with Twitter, other than that Twitter is a dumb network that happens to transport simple messages really well in a format like Jaiku’s while Jaiku is a mobile service that happens to send dumb messages around to demonstrate what social presence on the phone could look like. These services are actually night and day different and it’s no wonder that Google bought Jaiku and not Twitter. And hey, it’s not a contest either, it’s just that Twitter didn’t have anything to offer Google in terms of development for their mobile strategy. Twitter is made up of web people and is therefore a content strategy; Jaiku folks do web stuff pretty well, but they also do client stuff pretty well. I mean, Andy used to work at Flock on a web browser. Jyri used to work at Nokia on devices. Jaiku practically invented the social presence browser for the phone (though, I might add, Facebook snatched up Parakey out of Google’s clenches, denying them of Joe Hewitt, who built the first web-based presence app with the Facebook iPhone app). If anything, the nearest approximation to Jaiku is Plazes, which actually does the location-cum-presence thing and has mobile development experience.

A long post, full of good links and well worth pondering.

Incidentally, and for all their significant differences (IP-routed people!), the first paragraph quoted above recalled to mind Bruce Sterling's wonderful riff, Harvey Feldspar's Geoblog, 07.10.2017, on a future without a mobile phone:

"But Mr. Feldspar, suppose this international criminal doesn't carry a mobile?" demanded representative Chuck Kingston (R-Alabama). It would have been rude to point out the obvious. So I didn't. But look, just between you and me: Anybody without a mobile is not any kind of danger to society. He's a pitiful derelict. Because he's got no phone. Duh.

He also has no email, voicemail, pager, chat client, or gaming platform. And probably no maps, guidebooks, Web browser, video player, music player, or radio. No transit tickets, payment system, biometric ID, environmental safety sensor, or Breathalyzer. No alarm clock, camera, laser scanner, navigator, pedometer, flashlight, remote control, or hi-def projector. No house key, office key, car key... Are you still with me? If you don't have a mobile, the modern world is a seething jungle crisscrossed by electric fences crowned with barbed wire. A guy without a mobile is beyond derelict. He's a nonperson.

A non-person, 'increasingly marginalized … increasingly invisible to the network'.

Web 2.0: 'what the Web was supposed to be all along'

Tim Berners-Lee, interviewed by Scott Laningham for IBM developerWorks

BERNERS-LEE: … the original World Wide Web browser of course was also an editor. I never imagined that anybody would want to write in anchor brackets. We'd had WYSIWYG editors for a long time. So my function was that everybody would be able to edit in this space, or different people would have access rights to different spaces. But I really wanted it to be a collaborative authoring tool. And for some reason it didn't really take off that way. And we could discuss for ages why it didn't. You know, there were browser editors, maybe the HTML got too complicated for a browser just to be easy. 

But I've always felt frustrated that most people don't … didn't have write access. And wikis and blogs are two areas where suddenly two sort of genres of online information suddenly allow people to edit, and they're very widely picked up, and people are very excited about them. And I think that really for me reinforces the idea that people need to be creative. They want to be able to record what they think. … 

LANINGHAM: You know, with Web 2.0, a common explanation out there is Web 1.0 was about connecting computers and making information available; and Web 2 is about connecting people and facilitating new kinds of collaboration. Is that how you see Web 2.0? 

BERNERS-LEE: Totally not. 

Web 1.0 was all about connecting people. It was an interactive space, and I think Web 2.0 is of course a piece of jargon, nobody even knows what it means. If Web 2.0 for you is blogs and wikis, then that is people to people. But that was what the Web was supposed to be all along. 

And in fact, you know, this Web 2.0, quote, it means using the standards which have been produced by all these people working on Web 1.0. It means using the document object model, it means for HTML and SVG and so on, it's using HTTP, so it's building stuff using the Web standards, plus JavaScript of course. So Web 2.0 for some people it means moving some of the thinking client side so making it more immediate, but the idea of the Web as interaction between people is really what the Web is. That was what it was designed to be as a collaborative space where people can interact. 

Now, I really like the idea of people building things in hypertext, the sort of a common hypertext space to explain what the common understanding is and thus capturing all the ideas which led to a given position. I think that's really important. And I think that blogs and wikis are two things which are fun, I think they've taken off partly because they do a lot of the management of the navigation for you and allow you to add content yourself. 

But I think there will be a whole lot more things like that to come, different sorts of ways in which people will be able to work together. 

The semantic wikis are very interesting. These are wikis in which people can add data and then that data can then be surfaced and sliced and diced using all kinds of different semantic Web tools, so that's why it's exciting the way people, things are going, but I think there are lots of new things in that vein that we have yet to invent.

Transcript here. Podcast here. (Found via Read/Write Web.)

There is something so generous and inspiring in this originating vision of Sir Tim's — made all the more so because it was there at the outset. Had the Web been widely understood in this way from the start, many walled gardens (I'm thinking particularly about schools) would have resisted it vigorously. But now, or (at least) for now, walls have been breached.

I spoke about the-web-as-the-read/write-web, and its implications for education, at Reboot and MicroLearning: see here.


Catching up with Reading Lists

OPML. OPML. OPML. How could I forget thee?

Reading lists are OPML documents that point to RSS feeds, like most of the OPML documents you find, but instead of subscribing to each feed in the document, the reader or aggregator subscribes to the OPML document itself. When the author of the OPML document adds a feed, the aggregator automatically checks that feed in its next scan, and (key point) when a feed is removed, the aggregator no longer checks that feed. The editor of the OPML file can update all the subscribers by updating the OPML file. Think of it as sort of a mutual fund for subscriptions.

    OPML is a really useful file structure that just about everyone who uses a feed aggregator, like bloglines, is already using without necessarily knowing it. Most readers keep subscribed feeds for a user in OPML format, for easy importing and exporting. If you export your OPML feed you get an XML file of your feeds, which other feed readers understand.

    The problem with OPML files from readers is that they are static, meaning I can give you my OPML file but you will never know if I add or delete feeds unless I tell you and give you the new file. All you get is a snapshot of my feeds from the moment that I share my file with you. Dave [Winer] thinks these files should be dynamic, which means that I can share my opml file, or as he calls it my reading list, and anyone who subscribes to it will always have the current version, no matter how often I amend that list. There is very little technology needed to allow this to happen - the various feed readers simply need to agree to support dynamic lists and allow people to share them permanently. Dave’s trying to make this happen. If he succeeds, we’ll all be able to subscribe to reading lists from people we trust on a given subject, and good feeds will be that much easier to find. … In a comment, Eric Lin writes:

    I could easily see this not only as a way to share my reading list with others I know, but also to be matched with others I don't know with common interests. What if the system could match me with other people who have similar tech, music or lifestyle feeds as I do. It would be a fantastic way to make new connections as well as strengthen existing ones, and I could see communities forming around overlapping feeds. These communities might be stronger than those that form around a single website because they'd have more in common.

  • Nick Bradbury, Reading Lists for RSS:

    In a nutshell, the idea is that you'd subscribe to an OPML document which contains a list of feeds that someone is reading, some organization is recommending, or some service has generated (such as "Top 100" list). Changes to the source OPML document would be synchronized, so that you're automatically subscribed to feeds added to the reading list. Likewise, you'd be unsubscribed from feeds removed from the original OPML.

(Thoroughly indebted to Alex's post, Reading Lists = the killer app for OPML.)

The social aspect of OPML, 'communities forming around overlapping feeds', is really interesting.
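Dave Winer's description above — subscribe to the OPML document, not the individual feeds, and pick up additions and removals automatically — amounts to a parse-and-diff step in the aggregator. A minimal sketch in Python with only the standard library; the sample reading list and function names are my own invention, not any aggregator's actual API:

```python
import xml.etree.ElementTree as ET

# A minimal reading list: an OPML document whose <outline> elements
# point at RSS feeds via their xmlUrl attributes. (Illustrative sample.)
READING_LIST = """<?xml version="1.0"?>
<opml version="2.0">
  <head><title>Example reading list</title></head>
  <body>
    <outline text="Scripting News" type="rss"
             xmlUrl="http://scripting.com/rss.xml"/>
    <outline text="Example blog" type="rss"
             xmlUrl="http://example.org/feed.xml"/>
  </body>
</opml>"""

def feeds_in(opml_text):
    """Return the set of feed URLs an OPML reading list points to."""
    root = ET.fromstring(opml_text)
    return {node.attrib["xmlUrl"]
            for node in root.iter("outline")
            if "xmlUrl" in node.attrib}

def sync(current_subscriptions, opml_text):
    """Diff current subscriptions against the reading list: the
    aggregator subscribes to added feeds and drops removed ones."""
    wanted = feeds_in(opml_text)
    to_add = wanted - current_subscriptions
    to_drop = current_subscriptions - wanted
    return wanted, to_add, to_drop

# On each scan the aggregator re-fetches the list and applies the diff.
subs = {"http://scripting.com/rss.xml", "http://old.example.net/rss"}
subs, added, dropped = sync(subs, READING_LIST)
```

The key point from the quote is that the diff is recomputed on every scan, so the list's editor updates every subscriber just by editing one file.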


Weird feed behaviour

1) I've had to decouple FeedDemon 1.6 RC2 (a beta) from NewsGator: the synching between the two had gone haywire, ever since a problem that developed some time around 28 December at the NewsGator end of things, and it was driving me nuts.

2) More to the point here, apologies to my FeedBurner subscribers: FeedBurner offers a range of services — the PingShot service and FeedFlare — and, though I can't be sure, changing my options on both seems to have set off a riot in that feed: posts reappearing as unread a number of times and (most recently) a strange 'noemail' address appearing, entirely unasked for, in the headers of posts. I've reset my options within FeedBurner and I hope things will now quieten down again.

For good measure, I've been playing with Technorati tags: in TypePad these have to be entered manually (TypePad's categories are read as Technorati tags, but categories are not the same kind of animal as tags) which is a little bit of work. (Within Firefox, Performancing semi-automates the process for you.) The work's worth it when the tags are read by Technorati, but I'm finding the process more miss than hit. As ever, Dave Sifry is very supportive, but we still haven't cracked the problem. Niall Kennedy at Technorati suggests it may be feed-related, which led me to validate my feed and the feed of a number of blogs. Errors abound everywhere, which made me feel a bit better. I still can't get the Technorati tags to work consistently, though, and the most recent ones have simply gone unnoticed by Technorati's spiders.

Web 2.0. Dontcha just luv it.


Jeff Jarvis on tagging

Well, I've had a geeky good time with the subject of tags. But this isn't just another valentine to just another cool online trend; we're so over that. No, tags have a larger lesson to teach to media. They present a clear demonstration that the web is not about flat content. The web is about connections and the value that arises from them if you enable people to collect and communicate. In the old, big, centralised, controlled world of media, a few people with a few tools - pencils, presses and Dewey decimals - thought they could organise the world and its content. But as it turns out, left to its own devices, the world is often better at organising itself. Jeff Jarvis, Media Guardian


Structured blogging

Structured blogging is back.

This is a marker so I don't lose sight of what might be a significant development next year.

Structured Blogging is a way to get more information on the web in a way that's more usable. You can enter information in this form and it'll get published on your blog like a normal entry, but it will also be published in a machine-readable format so that other services can read and understand it. Think of structured blogging as RSS for your information. Now any kind of data - events, reviews, classified ads - can be represented in your blog. Structured Blogging

Almost immediately, controversy. The engaged but non-technical punter is bound to be confused. On the one hand, Stowe Boyd:

My bet is that Structured Blogging will fail, not because people wouldn't like some of the consequences -- such as an easy way to compare blog posts about concrete things like record reviews, and so on -- but because of the inherent, and wonderful messiness of the world of blogging. Because blog posts don't have to conform to any structural standards, they can be used to do anything: nothing is out of bounds, because we haven't created the boundaries. The messiness of the world we are living in is one of the reasons that it is such a rich and rewarding experience. I am not sure who is benefitted if everyone falling into line and adopting consistent standards for the structure of blog posts. Perhaps companies like PubSub -- one of the driving force behind all this -- who would like to be able to sort out all the blog posts about hotels, gadgets, and wine out there, and aggregate the results in some algorithmic fashion, and then make money from the resulting ratings and reviews. But I am not sure that it would be a better world for bloggers, or even blog readers. So I favor the microformat approach, which is messy, puts more of a burden on the blogger, and will require a host of tools to be built to make it all work. But microformats will work bottom-up -- tiny little tagged bits of information buried in the blog posts -- as opposed to structurally. And I am betting -- as always -- on bottom-up.

This feels right to me, but the idea that 'The promise of structured content is that we would have an explosion of software aggregating it into useful, specialized services' (bokardo) is attractive (of course) and when I find David, Marc and Thomas all lining up behind it …

Another source of confusion is the link between this, or the lack of link between this, and microformats. Bob Wyman explains that structured blogging is what we do and microformat is just what it says on the can — the format we use: 'The two concepts are orthogonal. They don't compete. They can't compete. Verbs don't compete with nouns'.
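To make Wyman's noun concrete: a microformat is ordinary HTML carrying agreed class names that machines can pick out. A hedged sketch of extracting an hReview-style fragment with Python's standard-library HTMLParser; the markup and the handful of field names are illustrative, not the full hReview specification:

```python
from html.parser import HTMLParser

# A review marked up inline with hReview-style class names — the
# human-readable post and the machine-readable data are one and the
# same markup. (Illustrative sample.)
HREVIEW = """
<div class="hreview">
  <span class="item">The Dog &amp; Duck</span>
  <span class="rating">4</span>
  <span class="summary">Good beer, better chips.</span>
</div>
"""

class HReviewParser(HTMLParser):
    """Collect the text of elements whose class is an hReview field."""
    FIELDS = {"item", "rating", "summary"}

    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        hits = self.FIELDS.intersection(classes)
        if hits:
            self._current = hits.pop()
            self.fields[self._current] = ""

    def handle_data(self, data):
        if self._current is not None:
            self.fields[self._current] += data

    def handle_endtag(self, tag):
        if self._current is not None:
            self.fields[self._current] = self.fields[self._current].strip()
            self._current = None

parser = HReviewParser()
parser.feed(HREVIEW)
```

Structured blogging, on Wyman's account, is the practice of publishing data like this at all; the microformat is merely one format the published data can take.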

One thing seems certain: if it's as unclear as this, how on earth will it take off (assuming it should)?

David Weinberger at the OII

Back in July, 2004, I came across David Weinberger's post about Three Orders of Organisation, and then I read about his idea of Trees vs Leaves. You can read him on the former here and the latter here. The material behind and in these two postings formed much of the substance of David's seminar at the OII on Wednesday morning. In addition, I've come across a third posting, The end of data?, which also fed in to what he said this week in Oxford. There's a book on the way, Everything Is Miscellaneous — overview here — and there's a summary of an earlier version of yesterday's talk here. Finally, the OII has a webcast of the talk.

The seminar was a whistle-stop tour of some "high" points in the development of taxonomies — Aristotle on nesting, Porphyry's tree, Dewey and library classification (David has blogged about Dewey a number of times, eg here and here): 'all of these systems assume there's a top down view of knowledge' and seek to banish ambiguity and present a clear picture of reality/knowledge. Everything in its right place …

But in the bottom-up world of social tagging an item can be in many categories simultaneously (I don't think 'tags' are the same as 'categories', but I'm running here with the general tenor of David's argument), and users are contributors both to the stock of tagged items and to their ordering. In this world, trees will never go away, but we need to stop looking for The Tree. Instead, we should build a big pile of data (leaves), attach as much metadata as possible and filter on the way out not on the way in. Users will do the filtering, and the moment of "taxonomizing" should be postponed until the users need to do it. There is now nothing that is not metadata — data is metadata — and we can no longer predict what users want. Messiness is a virtue.
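The 'filter on the way out' idea can be sketched in a few lines: leave the pile unclassified, let every item carry whatever tags contributors attach (so an item sits in many "categories" at once), and do the taxonomizing only at query time. A toy sketch; the data is invented:

```python
# A pile of leaves: each item carries any number of tags, so nothing
# is forced into a single slot in a tree. (Invented sample data.)
pile = [
    {"title": "Capri at dawn",       "tags": {"capri", "travel", "photo"}},
    {"title": "Deep-fried Mars Bar", "tags": {"scotland", "food"}},
    {"title": "Heavy metal umlaut",  "tags": {"music", "typography"}},
    {"title": "Blue grotto",         "tags": {"capri", "photo"}},
]

def filter_out(pile, *wanted):
    """Filter on the way out: select items whose tags include all the
    tags the user asks for, at the moment they ask."""
    wanted = set(wanted)
    return [item["title"] for item in pile if wanted <= item["tags"]]
```

Here `filter_out(pile, "capri", "photo")` returns both Capri items; no tree had to decide in advance whether they "belonged" under travel or photography.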

David sees this bottom-up approach to tagging as a reaction to the semantic web. There is no end to the way the deck of digitised knowledge in this world can be cut and sliced. (Wikipedia, as Jimmy Wales says, is not paper: for one thing, David said, where the Encyclopedia Britannica restricts itself to 32 printed volumes and 65,000 topics, Wikipedia has no such restrictions and is currently running at some 800,000 entries — including ones on the Deep-fried Mars Bar and, famously, the Heavy Metal Umlaut.) In the world of multi-subjectivity, knowledge is never going to be "perfect". Instead, we must think in terms of 'good enough'. We are living through a revolution, a fundamental change to the way we understand knowledge and our pursuit of it. The global conversation that is the net changes the roles of filters and, therefore, our understanding of what a filter is.

In the questions at the end of his talk, it seemed to me that in fact David is prepared to admit much more nuance and to accept that top-down taxonomies are not going to go away. And, yes, he agreed that the web is both a distributed library as well as being something that is about and for connectivity. It was put to him that the top-down, authoritarian conception of the semantic web is only one model, and that there are other models where the semantic web is bottom up. I share the view developed by him and his questioner at this point, that the net can provide for many different ways of organising knowledge. And I'm sure he's right when he says that soon we will see people making a living through devising new classificatory systems.

There's a problem of scale, too: as David put it, too many taggers can make for an unhelpful, confusing tag-soup, counterable, perhaps, through cluster-analyses intended to disambiguate (eg, Flickr's Capri clusters). But in David's view, if 'good enough' is good enough then scaling should not prove a problem.

So is "good enough" good enough? Tom Chance probed whether it's sufficient in matters more important than the examples David used (eg, beer): when it comes to deciding about nuclear power, 'good enough' is surely short of the mark. My colleague, Ian, linked this point to one about the role of institutions in this new world. They're highly unlikely to go away (!), but the morning's seminar left me in no doubt that trust, and the verification of trust, in institutions is altered by the rise of online, do-it-yourself mass publishing. Yet, as Jonathan Zittrain said in his summing up, the desire for the canonical article on a topic continues.

At the start of his talk David remarked, 'This could be the bright, shiny period of the internet, of openness'. The net gives us many reasons to be happy, but there are many forces at work which may make history of David's visionary presentation. More about this soon.

Web 2.0: 'something qualitatively different about today's web'

'Web 2.0' is a big, fat target of a term, but it's not just hype. Tim O'Reilly:

The reason that the term "Web 2.0" has been bandied about so much since Dale Dougherty came up with it a year and a half ago in a conference planning session (leading to our Web 2.0 Conference) is because it does capture the widespread sense that there's something qualitatively different about today's web. … Web 2.0 is the era when people have come to realize that it's not the software that enables the web that matters so much as the services that are delivered over the web. Web 1.0 was the era when people could think that Netscape (a software company) was the contender for the computer industry crown; Web 2.0 is the era when people are recognizing that leadership in the computer industry has passed from traditional software companies to a new kind of internet service company. The net has replaced the PC as the platform that matters, just as the PC replaced the mainframe and minicomputer.

Richard MacManus sums it up: 'what Web 2.0 means to me - everyday, non-technical people using Web technologies to enhance their own lives and businesses. The Web is an infrastructure, a foundation. What we create and build on the Web is what Web 2.0 is all about.' Richard quotes Ian Davis, 'Web 2.0 is an attitude not a technology', who goes on to say:

It’s about enabling and encouraging participation through open applications and services. By open I mean technically open with appropriate APIs but also, more importantly, socially open, with rights granted to use the content in new and exciting contexts. Of course the web has always been about participation, and would be nothing without it. Its single greatest achievement, the networked hyperlink, encouraged participation from the start. Somehow, through the late nineties, the web lost contact with its roots and selfish interests took hold. This is why I think the Web 2.0 label is cunning: semantically it links us back to that original web and the ideals it championed, but at the same time it implies regeneration with a new version. Technology has moved on and it’s important that the social face of the web keeps pace.

Davis also talks about Web 2.0, the Semantic Web, XML and RDF. (And smushing — not come across that before! Got to pass smushing on to the OED.)

Incidentally, I then went on to read Ian Davis' most recent post, 'Searching Folksonomies':

… I don’t actually visit del.icio.us all that often. For me, del.icio.us is a write-only environment. I fire and forget. I’m bookmarking because I might one day want to go back and find it but in practice I rarely do. I seem to remember that the last time I did try to find something I’d bookmarked, I couldn’t remember the tags I’d used or even if I had bookmarked it and I ended up with Google anyway. … the fact of the matter is that tagging systems and folksonomies are great for organising, but boy do they suck when it comes to finding something. Google still wins hands down …

He suggests a Web 2.0 solution — 'I want all the pages I’ve bookmarked to be searched and shown first whenever I search in Google. Maybe I could do this as an extension to Google desktop, but a better solution would be for Google to allow me to register my RSS feeds with them. Then, they could subscribe to my feeds to learn what I’ve recently read or bookmarked and show those at the top of any search results. That would be extremely cool and infinitely useful!'

And by association, this: I haven't yet been able to import my bookmarks into Yahoo's My Web. Despite following the instructions, I always arrive at an error page: 'Sorry, we were unable to detect a valid feed'. Unless, that is, I use a backup from several months ago: 850 bookmarks as opposed to nearly 2000. Is there an upper limit to how many bookmarks My Web can import at one time?

Kayaking in chaos

Not a quiet weekend after all, but one when discussion on the web about tags and tagging — with implications for much more! — felt (for me) like we'd reached a clearing in the wood. (Though my metaphor should probably be one to do with rivers. Still, there's more than enough double-take to this post's title to be going on with.)

It began with yesterday's post by Clay Shirky on Many 2 Many:

… It doesn’t matter whether we “accept” folksonomies, because we’re not going to be given that choice. The mass amateurization of publishing means the mass amateurization of cataloging is a forced move. I think Liz’s examination of the ways that folksonomies are inferior to other cataloging methods is vital, not because we’ll get to choose whether folksonomies spread, but because we might be able to affect how they spread, by identifying ways of improving them as we go.

To put this metaphorically, we are not driving a car, with gas, brakes, reverse and a lot of choice as to route. We are steering a kayak, pushed rapidly and monotonically down a route determined by the environment. We have a (very small) degree of control over our course in this particular stretch of river, and that control does not extend to being able to reverse, stop, or even significantly alter the direction we’re moving in.

Cory commented: 'These paragraphs could just as readily apply to changes in copyright, lossily compressed music, or spam: they are characteristics inherent in the ecology itself. The discussion needs to center around how to exist in their presence, not how to change them.'

And just now, responding to David Weinberger's question posted earlier today, Aren't we going to innovate our way out of this?, Clay Shirky writes:

… My answer is yes, but only for small values of "out". A big part of what's coming is accepting and adapting to the mess, instead of exiting it. …   The Web … is chaos. Chaos! You can link anything to anything else! … How on earth can you organize the Web? It plainly isn't now, and it never can be. …

The whole of this last post needs to be read and thought about a long time (several whiskies). I particularly like: 'Anything that operates at really large scale takes on the characteristics of organic systems, including especially degeneracy, the principle that there is not a one-to-one mapping between function and location in the system. (Christopher Alexander got there a long time ago, in A City Is Not a Tree, to which we might only add that the Web is not a tree either.)'

There will be losses and gains. I like LaughingMeme's approach:

People are still too stiff and rigid with their tagging technique. Loosen up. You don't have to find the "right category" to put something into, that is part of the tyranny and inflexibility of a classification scheme that we're trying to get away from. Don't tell me what it is, the "truth" of it as it were. Tell me why it matters.

But I can see the force of Liz Lawley's concerns and we'll need to go on innovating as much as possible to limit the losses and maximise the gains.