Remix

Paradigm shifts

I like the discipline of del.icio.us's 255-character limit for the excerpt from, or comment on, the item you're bookmarking there. But sometimes there's just too much that's good to be contained or summed up like that.

The amazing miracle of YouTube versus The Times, as everyone reading this blog surely already knows, is that YouTube is a platform where cream--user-uploaded videos--rises to the top, to be savored by the world, while The New York Times Company is an information organization that pays thousands of journalists, designers, business people and administrative types millions of dollars to create expert content that tells people what to think and what to like. And honey, that day is passing fast.

The point here--just to kick it a little harder--is that this is yet more evidence of how social media platforms are shifting paradigms in a profound way. Not only does YouTube have a mass-market, video-on-the-web appeal that the more high-brow Times will never have (is YouTube the next MTV?), it's also a platform that gives Google the opportunity to morph into a multimedia MySpace-style ecosystem, way beyond what Orkut could ever be. And most cruelly, it's something that teens and twenty-somethings care about, which may no longer be the case for The New York Times.

So Google bought YouTube, not a media company, and the fact that this doesn't even surprise anyone anymore--that it makes perfect sense--that, dudes, is a paradigm shift.

*

… consumerization will be the most significant trend to have an impact on IT over the next 10 years. … "Consumers are rapidly creating personal IT architectures capable of running corporate-style IT architectures," he [Gartner's director of global research, Peter Sondergaard] said. "They have faster processors, more storage and more bandwidth."

He advised corporate IT executives to adapt to the changes and prepare for what he called "digital natives," or people so fully immersed in digital culture that they are unconcerned about the effects of their technology choices on the organizations that employ them. … 

In a paper prepared by Gene Phifer, David Mitchell Smith and Ray Valdes, Gartner researchers noted that corporate IT departments historically have lagged behind popular technology waves, such as the arrival of graphical user interfaces and the Internet in business. They argued that the biggest impacts of Web 2.0 within enterprises are collaboration technologies--notably blogs, wikis and social networking sites--and programmable Web sites that allow business users to create mashup applications. … "Our core hypothesis is that an agility-oriented, bifurcated strategy--one reliant on top-down control and management, the other dependent on bottom-up, free-market style selection--will ultimately let IT organizations play to their strengths while affording their enterprises maximum opportunity as well," the Gartner report said.
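
To make "mashup" a little more concrete: here's a minimal sketch, in Python, of the sort of thing the Gartner researchers are gesturing at--two public RSS feeds pulled together and merged into a single date-sorted list. The feed URLs are placeholders; any RSS 2.0 feed would do.

    # Toy mashup: merge two RSS feeds into one date-sorted list.
    from urllib.request import urlopen
    from xml.etree import ElementTree
    from email.utils import parsedate_to_datetime

    FEEDS = [
        "http://example.org/news/rss.xml",      # placeholder feed URLs
        "http://example.com/weblog/index.xml",
    ]

    def items(feed_url):
        """Yield (published, title) pairs from one RSS 2.0 feed."""
        tree = ElementTree.parse(urlopen(feed_url))
        for item in tree.iter("item"):
            yield (parsedate_to_datetime(item.findtext("pubDate")),
                   item.findtext("title"))

    # Newest first, regardless of which feed an item came from.
    merged = sorted((entry for url in FEEDS for entry in items(url)),
                    reverse=True)
    for when, title in merged[:10]:
        print(when.date(), "-", title)

Nothing here that a business user couldn't wire together on a programmable web page; that, presumably, is Gartner's point.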



From the horse's mouth: Google's Global Counsel

Busy week last week, culminating with a trip to Brixton Academy on the Thursday to hear Pete Doherty and Babyshambles. There is musicianship and lyrical skill in there (I'm convinced of it! Some of my friends who are musicians are … less certain, shall we say), but this populist, narcissistic evening obscured most of that. (I found myself thinking how strangely reminiscent of Blair he is: needing to be loved, yet coming over so much of the time as considering himself … special.) We move on.

Friday afternoon and a quick trip to the where Andrew McLaughlin, Google's worldwide policy counsel, was speaking on :

Andrew McLaughlin is Head of Global Public Policy for Google Inc. Central policy issues for Google include privacy and data protection, censorship and content regulation, intellectual property (including copyright, patent, and trademark), communications and media policy, antitrust/competition, and the regulation of Internet networks and technologies. The leading countries for Google's government affairs activities include the US, Canada, Brazil, Japan, South Korea, China, India, Australia, Russia, Germany, France, the UK, Israel, Egypt, and Ireland. Andrew co-leads Google's Africa Strategy Group.

Now that was a well-spent hour+. Some notes: 

Google faces a number of challenges: 

  1. Censorship: repressive regimes are what one immediately thinks of here, and of these China is the only one to which Google has made any accommodation. User-generated content is highly sensitive to the powers-that-be in Saudi Arabia, China, Iran … (So that's blogs, then.) Less obvious forms of censorship include interpretations of what "has to go" because of concerns about child protection and issues to do with cultural protection. Pay close attention to the EC Audio-Visual Services Directive (formerly, ) — an effort to create content control — and the Online Content Directive (I think I got this down right, but I can't find anything about it online). 
  2. Copyright: without Fair Use rights, Google would not exist. Copyright must be revised so as to strike a better balance between the rights of creators (to whose benefit copyright law is currently skewed) and the rights of users. Andrew showed three videos which, in different ways, re-mix copyrighted material: , and . (BSB was, he said, a huge phenomenon in China.) Currently, no meaningful Fair Use rights exist in Australia. 
  3. Discrimination by carriers: network neutrality; quality of service. 
  4. Security. For example, Google Earth maps the world and you can swoop in on … a Chinese nuclear facility. The UK's attitude is 'no security through obscurity', but China, Russia, India and others are not so happy. So far, Google hasn't blurred or blocked a single image at the request of a government. During the recent war in the Lebanon, there was no real-time coverage of the action (though that was within Google's technical ability), and served images are, on average, roughly 18 months behind the present, except during natural disasters, when all the stops are pulled out and images are as current as possible. (This is all to avoid any unhelpful clash with governmental agencies and consequent, restrictive legislation.) Finally, out of concerns about privacy, image resolution will never be fine enough to allow identification of individuals.

Google chooses not to geo-target users by IP address and then use this to enforce a government's repressive/restrictive laws. So, users can go to to search for what Germany requires Google to block on Google Deutschland. (Yahoo! was forced to implement a ban in France on accessing , but this was in a specific case and established no generic principle.)
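
For what it's worth, the mechanics Google is declining to apply are not complicated. Here's a minimal sketch of IP-based geo-targeting used as a filter, with a made-up geo-IP table and a hypothetical per-country blocklist (a real service would use a commercial geo-IP database):

    import ipaddress

    # Hypothetical table mapping IP ranges to countries; ranges are made up.
    GEO_TABLE = {
        ipaddress.ip_network("192.0.2.0/24"): "DE",
        ipaddress.ip_network("198.51.100.0/24"): "FR",
    }

    # Hypothetical per-country blocklist of result URLs.
    COUNTRY_BLOCKLIST = {
        "DE": {"http://example.org/banned-in-germany"},
    }

    def country_for(ip):
        """Return a country code for an IP address, or None if unknown."""
        addr = ipaddress.ip_address(ip)
        for network, country in GEO_TABLE.items():
            if addr in network:
                return country
        return None

    def filter_results(ip, results):
        """Drop any result the user's country requires the service to block."""
        blocked = COUNTRY_BLOCKLIST.get(country_for(ip), set())
        return [url for url in results if url not in blocked]

    print(filter_results("192.0.2.7",
                         ["http://example.org/banned-in-germany",
                          "http://example.org/fine"]))
    # ['http://example.org/fine']

Applied at the search frontend, this would stop a German user reaching google.com to see what Google Deutschland must omit--which is exactly the step Google says it won't take.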

maintains a database of Cease and Desist orders.

Some positive things to celebrate or look forward to:

  1. : one day IM chat in two different languages will be possible. Saudi Arabia doesn't like the service (it was being used to translate English > English, generating an unblocked — new — URL in the process; see the sketch after this list). 
  2. Cloud computing. 
  3. Ubiquitous connectivity: mobile telephony; spreading wireless access; increasing deployment of fiber connectivity. 
  4. Other specific initiatives: eg, , .
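
A footnote on the translation service in item 1: running a page through a translator hands back a brand-new URL on the translator's own domain, so any filter keyed on the original URL simply stops matching--hence Saudi Arabia's irritation. A toy illustration, with made-up hostnames (the translator URL shape below is illustrative, not Google's actual API):

    from urllib.parse import urlparse, quote

    # Hypothetical national blocklist keyed on hostname alone.
    BLOCKED_HOSTS = {"forbidden.example.org"}

    def is_blocked(url):
        """Naive filter: block by the URL's hostname."""
        return urlparse(url).hostname in BLOCKED_HOSTS

    original = "http://forbidden.example.org/page"
    # Translating the page yields a fresh URL on the translator's domain.
    proxied = ("http://translator.example.com/translate?u="
               + quote(original, safe=""))

    print(is_blocked(original))  # True  -- the direct URL is caught
    print(is_blocked(proxied))   # False -- the translated copy slips through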

After the talk, I asked Andrew about Google Desktop and, specifically, : 'The latest version of Google Desktop provides a Search Across Computers feature. This feature will allow you to search your home computer from your work computer, for example'. (To access this option in Google Desktop Beta Preferences, right click on the Google Desktop icon in the system tray > Preferences > Google Account Features.) I wasn't surprised to hear that take-up of this has been limited. Many of us seem happy-ish with our email residing on Google's servers, but putting our documents there seems to cross some kind of psychological barrier. I suspect this will change over the next few years as we slide into using more tools that work both online and off, but users haven't taken to it just yet.

By the way, I note that : Microsoft and Google have joined forces with the British Library in calling on the government to radically overhaul intellectual property (IP) law.


Education and the virtually real: Second Life

From the same posting by David Weinberger that I just mentioned comes this:

Nikolaj Nyholm talks about how Imity.com uses Second Life to prototype user interactions. 

Matt Biddulph has been doing Second Life mashups. You can use http, he says, to pipe out info from Second Life, including what people are saying. Cory Ondrejka, Second Life CTO, says that there's been an explosion of interest and development since they put in http requests. (Someday, he says, they'll make every object a Web server.) He says that there are 100 classes a week inside Second Life in how to use the API and scripting language. He looks forward to the day when there is a Second Life renderer inside a Web browser.
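
I can't resist sketching the receiving end of that 'pipe out over http' idea. An in-world object scripted to call out on each chat line (the in-world side is assumed here, not shown) could POST the text to a tiny external web server like this one; the port and payload format are hypothetical:

    # Minimal receiver for text POSTed by an in-world object.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ChatRelayHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            line = self.rfile.read(length).decode("utf-8", errors="replace")
            print("in-world chat:", line)   # pipe it anywhere from here
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), ChatRelayHandler).serve_forever()

Once every object is a web server, as Ondrejka promises, the traffic could flow the other way too.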

Now, I don't (yet) use Second Life but I am interested in ways of prototyping things (there's more than one feed-in here to using similar ideas in education). Up until recently, I was prepared for things like this: 

Video scenarios present people interacting with fictional technology by faking the actual functionality through the use of film techniques. … the idea of making little movies that demonstrate interaction ideas is really liberating. Orange Cone

That's exciting, but now I'm suddenly aware of Second Life being used by designers and businesses in similar, or near similar, ways — see here for two examples: W Hotels ('opening a virtual hotel in Second Life to test out "virtual architecture"  for ALOFT, a new hotel idea') and American Apparel ('opened a virtual store … people can outfit their avatars. That gives American Apparel an inside look at what they want in the real world'). Amazon seems to be going SL-wards, according to Business Week online, and Jeff Barr, Amazon's Web Services Evangelist, reports he has been working on 'a prototype for a developer relations “outpost” in Second Life' — see the images he's posted there. 

There's going to be a lot of this and very soon we'll be using Second Life (etc) in teaching, too. Some have got there already. Here's an example from NMC Campus Observer, focusing on the work of 'Lorenzo Stork (a.k.a Larry Miller, from University of Tennessee)', interviewed (of course) in SL itself: 

Lorenzo/Larry went on to talk about his first in world project, a Continuing Medical Education class … in cooperation with the University of Illinois, Chicago Medical College Library. … Doctors will get a small dose of content, but they will then have to address a patient scenario related to hypertension and diabetes. In the scenario, they will be required to use some of the Second Life library resources accessed via Info Island, then return at the end for some in world discussion. Participants will be practicing doctors working on their CME credits, and it is Lorenzo/Larry’s hope that the doctors build some of their experience in Second Life before the workshop.

NMC Campus Observer is one site to watch closely. This from their About page:

The NMC Campus is an experimental effort developed to inform the New Media Consortium’s work in educational gaming.  In early 2006, the organization made the decision to create a space for experimentation in a virtual 3-D world  and began a search for suitable platforms, with a special interest in massively multi-player environments. 

Ultimately, Second Life was chosen, and working with an advisory board drawn from its membership, the NMC began designing a space within Second Life expressly to support collaboration, learning, insightful interaction, and experimentation — and to encourage exploration of the potential of virtual environments.  (See the Concept document for the NMC Campus for additional background.)

Other SL-centred developments I've noticed recently include the communal writeboard facility in Second Life and (going back to the opening idea of mashups) the ability to listen to Last.fm stations within SL. 

Mitch Kapor was recently reported as saying (this via his own blog): 

Second Life is a disruptive technology on the level of the personal computer or the Internet. “Everything we can imagine and things that we can’t imagine from the real world will have their in-world counterparts, and it’s a wonderful thing because there are many fewer constraints in Second Life than in real life, and it is, potentially at least, extraordinarily empowering.”



Canter on Web 2.0

Matt Gertner:

… we should be wary of writing Web 2.0 off as vacuous before it has a realistic chance of achieving its potential, particularly since this is likely to take several years. … Web 2.0 may be a messy term, and it’s undeniably over- (and frequently mis-) used. But it’s still a useful way of encapsulating a real and important trend.

I'm all in favour of educated scepticism, but some reservations seem to fly in the face of what end-users are experiencing, and then to bring down the fundamentalist shutters on any further discussion. As usual, Richard MacManus has some sound reflections on the wave of anti-hype. (Incidentally, through his site I came across Michael Casey's LibraryCrunch and a posting there about libraries and Web 2.0 — something to which all schools and universities need to give a lot of thought.)

I've been reading Marc Canter's Breaking the Web Wide Open!: 'The online world is evolving into a new open web (sometimes called the Web 2.0), which is all about being personalized and customized for each user. Not only open source software, but open standards are becoming an essential component'.

Open standards mean sharing, empowering, and community support. Someone floats a new idea (or meme) and the community runs with it – with each person making their own contributions to the standard – evolving it without a moment's hesitation about "giving away their intellectual property." … The combination of Open APIs, standardized schemas for handling meta-data, and an industry which agrees on these standards are breaking the web wide open right now. So what new open standards should the web incumbents—and you—be watching? Keep an eye on the following developments:

Identity
Attention
Open Media
Microcontent Publishing
Open Social Networks
Tags
Pinging
Routing
Open Communications
Device Management and Control

… Today's incumbents will have to adapt to the new openness of the Web 2.0. If they stick to their proprietary standards, code, and content, they'll become the new walled gardens—places users visit briefly to retrieve data and content from enclosed data silos, but not where users "live." The incumbents' revenue models will have to change. Instead of "owning" their users, users will know they own themselves, and will expect a return on their valuable identity and attention. Instead of being locked into incompatible media formats, users will expect easy access to digital content across many platforms.
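
Of the items on Canter's list, pinging already has a well-established open protocol behind it: the weblogUpdates XML-RPC ping popularized by weblogs.com, by which a blog announces fresh content to an aggregator. A minimal example in Python (the endpoint shown is the historical weblogs.com one; any server implementing the weblogUpdates interface behaves the same way):

    import xmlrpc.client

    # Announce that a weblog has new content by sending its name and URL.
    server = xmlrpc.client.ServerProxy("http://rpc.weblogs.com/RPC2")
    response = server.weblogUpdates.ping("My Weblog",
                                         "http://example.org/blog")
    print(response)  # e.g. {'flerror': False, 'message': 'Thanks for the ping.'}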


Music like running water

John Naughton, writing in today's Observer, recalls a 2002 NYT article about David Bowie in which the musician speculated on the future of music:

'The absolute transformation of everything we ever thought about music will take place within 10 years,' he wrote, 'and nothing is going to be able to stop it. I see absolutely no point in pretending that it's not going to happen. I'm fully confident that copyright, for instance, will no longer exist in 10 years, and authorship and intellectual property is in for such a bashing. Music, itself, is going to become like running water or electricity...'

If you want to read the NYT piece, you can go here and pay $3.95 for the pleasure. And it's worth reading. Alternatively, you can go here and read it for free. Or here. I find it … amusing that the NYT carries an article prophesying the end of copyright as we know it, then tries to charge you for the same article — only to find itself defeated by the power of the net.

And that's the point. As Naughton's Observer article explains, what has happened so far (mp3/compression technology, Napster/etc, iPods/etc) to change the music-as-packaged-product model for the broadcasting and entertainment industries is but

… half a revolution, because it's still [my emphasis] based on the music-as-product model. For the record industry, it has been an unqualified disaster, because millions of people aren't paying for their packages. Legal download services like Apple iTunes are beginning to mitigate the disaster, but it's not clear that even iTunes can compete with illicit file-sharing.

So what's to be done? Here's where the water analogy comes in. It's as if we lived in a world where water was only made available in Perrier bottles, so that if you want the stuff you have to buy (or steal) bottles. But in fact water is also available as a public service, piped through mains and available by turning a tap. We pay for this either via a flat tax or a charge based on how much we use, and everyone is (reasonably) happy. We have access to water whenever we need it; and the companies that provide the stuff earn reasonable revenues from providing it.

As broadband internet access becomes ubiquitous - and wireless - this model suddenly becomes feasible for music. At the moment, the only way we can have the stuff we crave is to buy or steal the product. But if we could access whatever we wanted, at any time, on payment of a levy, our need to own the packages would diminish. We could just turn on the tap, as it were, and get Beethoven or So Solid Crew on demand. Not to mention the collected works of David Bowie. And then we could give him a Brit Award for being so far ahead of the game.

Bowie's vision of the future (2012!) is wilder/more radical than Naughton's, of course.


Nifty Web 2.0 definition

I enjoyed Tim O'Reilly's long piece about Web 2.0, but his summary has focused my mind:

Web 2.0 is the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others, creating network effects through an "architecture of participation," and going beyond the page metaphor of Web 1.0 to deliver rich user experiences.