Web 2.0? More like Web 1.0 RC2!?

Now it’s 2006, Google Maps and Google Mail have become about as exciting as cold coffee, but the hype these projects started is still going on. Every day new »Web 2.0« apps pop up. None of them does anything really new; right now they are busy combining all of the early »Web 2.0« apps like flickr, del.icio.us, news.google.com and technorati, plus some online community functionality. And of course you can’t launch a new web app without animated effects like color fades or transparency fades.

But what really is Web 2.0? I know this question is a common one, but so far there seem to be more questions than answers, so I’m gonna say something about it too. First of all, if this is Web 2.0, one might guess that everything before it was Web 1.0. In my opinion, now is Web 1.0 and everything before now was some crappy alpha or beta release. When Tim Berners-Lee put the very first webpage online, that was something like the developer preview release. When Mosaic, the first widely used graphical browser, came out, it was something like an early alpha release. The »browser wars« were the beta releases, where the vendors kept adding ever more new features while the users cried for standards they could rely on. Well, now we have something like Web 1.0 Release Candidate 2. It is still not stable because IE7 will take some more time to be ready, but when it is, we can officially rely on technologies like XHTML 1.0, CSS 2.1, JavaScript 1.5, XML 1.0 and DOM Levels 1 and 2, and eventually Level 3, but that’s maybe for the 1.0.1 release.

That is of course only the technical part of the web; what about the other goals? If you consider having information available in hypertext a goal, I’d say we’ve reached version 1.5. Links were always popular: if you had no content at all on your webpage, the one thing you would have was your link collection to other websites. In the early days, though, links weren’t really helpful because there weren’t many interesting sites, so there wasn’t much to link to. For me the breakthrough came with Wikipedia. The way hyperlinks are used in Wikipedia is extremely helpful, and I think it is one of the best implementations of hypertext out there. We may reach hypertext version 2.0 with XHTML 2.0, where every element can have an href attribute.
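
A tiny sketch of what that XHTML 2.0 feature would look like (based on the XHTML 2.0 working draft; the URLs are made up and the draft syntax may still change):

    <!-- XHTML 1.0: only the a element carries a link -->
    <p>Read the <a href="/standards">standards</a>.</p>

    <!-- XHTML 2.0 draft: href becomes a global attribute -->
    <p href="/standards">Read the standards.</p>
    <li href="/archive/2006">the 2006 archive</li>

Every paragraph, list item or heading could then be a hyperlink of its own, which would make hypertext a property of the whole document instead of one special element.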

Another term you’ve probably heard already is »the semantic web«, which describes the idea of giving the content of the web a computer-understandable meaning. How? Well, a webpage has a head, a body tag, a title tag and tags for meta information. Your blog has posts; every post has a title, which is held within an h1 tag, and a body, which is encapsulated in a p tag. So a computer can »see« what the title of the webpage is, which headlines belong to which paragraphs, and so on.

In my perception the semantic web really started with the blog hype. You post an article on your blog, somebody reads it and mentions your post in his/her blog, then sends a trackback to your blog saying »Hey, I read your post and mentioned it in a post on my blog, here is the link«, which appears as a comment on your blog. What happens now is that the search bots visit your page and not only understand the plain HTML structure of your blog but also follow its semantic structure. They find your post, they follow the trackbacks, and this way they get not only what you said about a specific topic but also what others said about the same topic. This makes it possible to assign a certain relevance to posts: let’s say the Google bot finds out that 25 other blogs reference your post in some way, then it considers your post pretty relevant for the given topic. By making your content understandable for computers and programs you also make it way more accessible for other human beings to find. It becomes much easier to group content by topics or themes and even by relevance; you are able to search for quality, not only quantity, and that makes the web a better, more productive place.
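
Here is a minimal sketch of what such machine-readable structure looks like to a bot (the class names and URLs are made up for illustration; real blog systems differ in the details):

    <div class="post">
      <!-- the title: a bot can read this as »what the post is about« -->
      <h1>Why hyperlinks matter</h1>
      <!-- the body: the actual content -->
      <p>Wikipedia shows how useful dense linking can be ...</p>
      <!-- a trackback from another blog, displayed like a comment -->
      <div class="trackback">
        <a href="http://otherblog.example/2006/a-reply">A reply on Other Blog</a>
      </div>
    </div>

A bot that knows this structure does not just see text and links; it sees a post, its title and who responded to it, and it can count and weigh those responses.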

I think the semantic web actually started when Google launched its search engine, because the Google results were based on a relevance algorithm that was way ahead of AltaVista’s or Lycos’ engines. Yahoo! is and was another story, because they had actual human beings look at websites before they were put into the Yahoo! web directory. Then, I think sometime in 2002, Technorati went online. They focused on being a blog search engine, and it worked quite well: because blogs have well-structured content, they lend themselves to exactly the kind of analysis I described above. With Technorati you are able to find blog posts on certain topics, sometimes way ahead of news.google.com, and you can filter them by simple searches, tags, relevance of topics, relevance of single posts, language and of course time. I think the semantic web has definitely reached version 1.0 as well. I’m not sure how far this concept can be extended, but there is no doubt that the semantic web is a good idea.

The last thing I want to cover is participation, and here again Wikipedia cracked a lot of heads. People who said »This Wikipedia article is complete bullshit« got answers like »Well, it’s a wiki, you can change it«. Suddenly there was something that everybody could contribute to for some sort of higher cause: knowledge, free knowledge. This was just about the best cause anybody could come up with; who would have something to say against »people contributing to a free encyclopedia«? Blogs, on the other side, also cracked a lot of heads, and more people got inspired to contribute something to the web, whether it was useful or not. The blog software provided by communities and open source projects made it easy to publish your thoughts, even without any knowledge of HTML and CSS.

Okay, there were a lot of leading 1’s in this post so far, except for CSS 2.1, and you have to remember how long it took to get to this point. Again, it’s 2006. On August 6th, 1991 Tim Berners-Lee put the very first webpage online; that’s 15 years! Considering this, »Web 1.0« sounds like we haven’t really achieved much. This is why I can understand the need for the term »Web 2.0«. I think it describes the mindset of the people more than the actual technology we are using right now. See, it took a lot of time to convince people of the web, and it took a lot of time to figure out how we want the web to be and what we could use it for. In fact, we are still in the process of figuring out what we could use the web for, and it took a lot of time to get us all inspired to think about such things. Now, in 2006, the web is more alive than ever before; people are trying to reach the frontiers of what can be done with the current technology, and they are already asking for the next generation (XHTML 2.0, CSS 3, etc.). I think that is what »Web 2.0« really stands for. The Wikipedia definition of Web 2.0, to which I linked at the beginning of this post, is pretty close to my interpretation. Web 2.0 does not necessarily make use of AJAX or fading colors or anything like that. If you want to be Web 2.0, make your web app well structured, semantic, standards compliant and useful, and make it accessible.
