
April 22, 2005

Comments

Joe Buck

Dan, I think you missed the zeroth version of the web, the version we had before the release of the first Netscape. It was also a highly participatory web, and new users were encouraged to produce their own "home pages" (and most did). In some ways, we are returning to that era.

Marc Andreessen's 1993-era "What's New on the Web" list was basically a blog, in hand-coded HTML; entries were dated, with new entries appearing at the top. A large number of the people who had access were at universities, and lots of them got their feet wet coding their own HTML. Experimentation was rampant, and the business world had not yet invaded. Web pages had grey backgrounds and simple layouts, and almost anyone with any interest could quickly make a page or two.

Jeremy Zawodny

Your link to Yahoo's APIs actually goes to Google.

I think you meant to use http://developer.yahoo.net/ in there.

Dan Gillmor

Jeremy, sorry...fixed.

Neil T.

Hey, congrats on the FT gig. Nice to know that they got someone who knows what he's talking about :)

Dan Gillmor

Joe, excellent points.

Ran Talbott

"Such pages, however, tended toward dullness and infrequent updating."

Or, to put it another way: Such pages tended to fulfill the original vision of the Web, which was to allow absolutely anyone to publish information that might otherwise be lost to the public, in a way that allows it to be searched, indexed, bookmarked, and linked to related information. And accessed by absolutely anybody who's looking for it.

As opposed to today's "dynamic" Web, where you need a broadband connection, an industrial-grade graphics workstation, and more plug-ins than a Roman orgy just to look up the atomic weight of molybdenum. Which you can't bookmark, because the URL is a dynamically generated conglomeration of the hostname, your session ID, the phase of the moon, and the bra size of the webmaster's current girlfriend, and doesn't point to a page that's actually stored on disk somewhere.

As nifty as it is that people have found new ways to make use of HTTP and HTML, we seem to be slowly losing the very concept of "publishing" as "preserving a record of today for future recall". Instead of being the equivalent of an "address" where one can "go" to retrieve information, the URL has become a "magic incantation" that instructs a distant server to perform some action that may or may not produce the same results as the last time it was used.

In some ways, that's good: it's nice to be able to use the same mechanism to say "Bring up the latest edition of Dan's blog", "Show me the current pressure and temperature readings of Injection Molder #7", and "Display page 7 from our company's 2003 annual report".

But there's some very scary Orwellian potential here, as well as the risk of exacerbating the Digital Divide by constantly ramping up the minimal platform needed to access much of the web. Those librarians Dan mentioned recently shouldn't be the only ones worried about making sure that a large percentage of online content remains "dull" and "static".

adamsj

"more plug-ins than a Roman orgy"

God, Ran, I'm jealous--that's a brilliant turn of phrase.

Ross Mayfield

It was odd to have this copy of the FT on the plane to Europe and have the IT section seem like the Merc.

jim wilde

Hi Dan,

I love this stuff: XML, web services, OSS. We took a CMS, Drupal, and added several of our own web services to create a service, Ideascape, that interacts with del.icio.us and turns knowledge and idea management, as well as open innovation, upside down.

Joshua Porter

My favorite Web 2.0 (3.0) app is the Google/Craigslist app written by Paul Rademacher. It's a great example of pulling in content from two services via their APIs and creating a new, powerful interface that couldn't have existed in earlier versions of the web.
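To make the pattern Porter describes concrete, here is a minimal sketch of a two-service mashup: pull records from one service, enrich each record through a second, and render the join. Everything in it is a hypothetical placeholder; the endpoints, response shapes, and field names are invented for illustration and are not the actual Craigslist or Google Maps interfaces of the time.

```typescript
interface Listing { title: string; price: number; address: string; }
interface GeoPoint { lat: number; lon: number; }

// Hypothetical JSON feed of apartment listings (placeholder URL).
async function fetchListings(city: string): Promise<Listing[]> {
  const res = await fetch(`https://listings.example.com/${city}/apartments.json`);
  return res.json();
}

// Hypothetical geocoder turning a street address into coordinates.
async function geocode(address: string): Promise<GeoPoint> {
  const res = await fetch(`https://geo.example.com/lookup?q=${encodeURIComponent(address)}`);
  return res.json();
}

// The mashup itself: join the two services so each listing becomes a map marker.
async function buildMarkers(city: string): Promise<Array<Listing & GeoPoint>> {
  const listings = await fetchListings(city);
  return Promise.all(listings.map(async (l) => ({ ...l, ...(await geocode(l.address)) })));
}

buildMarkers("sfbay").then((markers) =>
  markers.forEach((m) => console.log(`${m.title} ($${m.price}) at ${m.lat},${m.lon}`))
);
```

The point is the shape, not the services: neither site needs to know the other exists, and the new interface lives entirely in the glue code.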

mahlon

I'd argue that the catalyst for moving from Web 1.0 to Web 2.0 was relevant search, driven by Google's innovation. The price we pay for the participatory, read-write Web is a much lower signal-to-noise ratio than the one-way, read-only Web 1.0 had, and before Google it nearly collapsed under its own weight. Do you remember trying to find a needle in a haystack with AltaVista? But through relevant search, we now routinely filter out most of the noise.
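The published core of that innovation was PageRank (Brin and Page, 1998): a page's importance derives from the importance of the pages linking to it. A toy power-iteration version follows, purely as an illustration of the link-analysis idea; Google's production ranking used many more signals than this.

```typescript
// links[i] lists the pages that page i links to.
function pageRank(links: number[][], damping = 0.85, iterations = 50): number[] {
  const n = links.length;
  let rank: number[] = new Array(n).fill(1 / n);
  for (let it = 0; it < iterations; it++) {
    // Every page gets a baseline share; the rest flows along links.
    const next: number[] = new Array(n).fill((1 - damping) / n);
    for (let i = 0; i < n; i++) {
      const outs = links[i];
      if (outs.length === 0) {
        // Dangling page: spread its rank evenly over all pages.
        for (let j = 0; j < n; j++) next[j] += (damping * rank[i]) / n;
      } else {
        for (const j of outs) next[j] += (damping * rank[i]) / outs.length;
      }
    }
    rank = next;
  }
  return rank;
}

// Four pages: 0 and 1 link to each other and to 2; 3 links only to 2.
// Page 2 ends up ranked highest, despite having no outbound links.
console.log(pageRank([[1, 2], [0, 2], [], [2]]));
```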

On Google's first-quarter analyst call, Larry Page said, "What we've done for the Web, Google aims to do for television." If you think about it, today's read-only, one-way TV has a lot in common with Web 1.0. What will TV be like if Google can do with TV what they did for the Web? I have a few thoughts on Google TV, and would love comments.

Of course, Google's got much more competition now than they did 10 years ago, and there are many good competing ideas for TV 2.0. But who else has the scale to make this work?

Stéphane LEE

Using web services for aggregating tags: Guten tag (RSS included).

Enjoy!

Matt

I've got a different opinion on things. I see Web 2.0 as what's more advanced now: websites as services, tagging, social mixing, etc.

I see Web 3.0 as more of what's coming: rich web frameworks like Rico, Dojo, and Backbase, all providing ways for a new breed of web applications to actually deliver on the "browser as a platform" promise we heard so much about in the Web 1.0 days.
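The primitive beneath all the frameworks Matt names is the same: XMLHttpRequest, fetching data in the background and updating the page in place with no full reload. A minimal sketch, with a hypothetical endpoint and element id standing in for a real application:

```typescript
// Fetch a fragment of data and swap it into the page in place; no reload.
// "/api/quotes/latest" and "quote-panel" are invented placeholders.
function refreshQuotes(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/quotes/latest");
  xhr.onload = () => {
    const panel = document.getElementById("quote-panel");
    if (panel && xhr.status === 200) {
      panel.textContent = xhr.responseText;
    }
  };
  xhr.send();
}

// Poll every 30 seconds; the page behaves like an application, not a document.
setInterval(refreshQuotes, 30_000);
```

The frameworks wrap this primitive in widgets, effects, and data bindings, but the "browser as a platform" shift is really just this loop applied everywhere.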

nick botulism

Thanks for writing this. The term "web 3.0" makes more sense to me than "web 2.0". The term "web 2.0" irritates me and gets my hackles up, because it implies that we are on the brink of the first major change in the web. "Web 3.0" recognizes that the web has already been going through a lot of changes. More accurate, less hubristic.
