Thursday, December 21, 2006

Ajax is the beginning of the end for relational databases

On the web, every page is a text document. The text documents are interpreted and displayed by browsers according to a standard contract – HTML.

The classic client-server programming model works very differently – a Visual Basic application, for example, will contain all manner of binary files and libraries that directly call operating system functions. It’s like writing a new browser for each website!

The choice of model has far-reaching consequences – for example in security, in user interface design, in software distribution, and in application integration. But, surprisingly, one part of the classic model remains – the relational database. Mixing a relational database with online text documents should really not work:

  • Database tables are not documents, so they can’t be referenced by URLs or appear in hyperlinks in a standard way
  • There is no standard file format for database tables, so they can’t be transported in a standard way across HTTP
  • Caching models for databases and web servers are inconsistent
  • Relational database best practice is for data normalization, whereas HTML documents are naturally un-normalized and hierarchical
  • Security works differently in the database layer to the web server layer, leading to potential inconsistencies and gaps
  • Systems integration for databases is much more difficult, due to the lack of a common extensible approach to table structures and their transformations
As a workaround, developers borrow old client-server tools like ADO.net and JDBC, wrapping them in object-oriented code and bolting them onto the text document. This creates a complicated intermediate layer and doesn’t really address the issues above.

Since the biggest use of relational databases nowadays is to supply HTML documents, their massive take-up is something of a mystery. It’s explained by three factors – they are fast, they are stable, and they haven’t had much competition!

But now some competition is arriving at the low end – Ajax. Ajax offers the ability to look up reference data sources during runtime, which is exactly the function of a relational database. And by using URLs, HTTP, XML and javascript, Ajax overcomes each of the relational database flaws above.

Viewing Ajax as a competitor to a relational database might be unconventional, but it’s revealing. In Ajax, the XML file itself represents the database table. There is less focus on a query language like SQL - DOM javascript carries out the bare minimum, with XSLT transformations filling some gaps – but a lot more focus on getting access to the data via URLs and HTTP and XmlHttpRequest.

Apart from data access, the key advantage of Ajax is that it returns data in the same text document format as HTML. So it can be readily inserted without any intermediate layer – in fact sometimes with pretty much no code, using something like IE's XML data islands (<xml src="database.xml">).
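
To make this concrete, here is a minimal sketch of the "Ajax as database lookup" idea in plain javascript. It assumes a hypothetical database.xml on the same server (containing <row> elements with a <name> child) and an empty <div id="results"> placeholder in the page - both names are made up for illustration.

    // Fetch an XML "table" over HTTP and insert its rows into the page.
    function loadTable(url, targetId) {
      var req = window.XMLHttpRequest ? new XMLHttpRequest()
                                      : new ActiveXObject("Microsoft.XMLHTTP");
      req.open("GET", url, true);
      req.onreadystatechange = function () {
        if (req.readyState != 4 || req.status != 200) return;
        var rows = req.responseXML.getElementsByTagName("row");
        var target = document.getElementById(targetId);
        for (var i = 0; i < rows.length; i++) {
          var div = document.createElement("div");
          div.appendChild(document.createTextNode(
            rows[i].getElementsByTagName("name")[0].firstChild.nodeValue));
          target.appendChild(div);
        }
      };
      req.send(null);
    }

    loadTable("database.xml", "results");

No ADO.net, no JDBC, no object-relational mapping - just a URL, HTTP and the DOM.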

Of course, you would never use Ajax to download an XML file with a million rows. But you could do some pre-processing on the server, and return a brief XML results file via Ajax. And this pre-processing step is becoming one of the main functions of relational databases – they are being relegated to storing data and returning query results in XML via HTTP, which is not their natural position.

Recently several groups have extended this approach. On the W3C side, an ambitious attempt to replace lines of javascript code with declarative XML binding has been defined – XForms. Although of great theoretical benefit, XForms is hampered by its lack of adoption by the browser manufacturers. And as a grassroots movement, the Atom Publishing Protocol is seeking to standardize the structure of XML databases. It does not require any browser enhancements and has already attained significant adoption, most notably by Google as the foundation for Gmail, Google Calendar and Google Docs & Spreadsheets.

My contrarian prediction for 2007 is that Ajax will take over from ADO.net and JDBC as the middle ground between web pages and data sources. The Atom Publishing Protocol will become a popular way to manage data sources – taking over even more of the relational database scope.

Relational databases will get decoupled from web application development. They will be increasingly relegated to XML storage and output, plus their traditional role in the declining client-server world. After more than 20 years, this is the start of their demise as the all-conquering database technology.

Monday, December 18, 2006

The Mobile Phone

Over the years, the question "what is a mobile phone?" has prompted different answers. From a rich man's toy, it evolved into an essential device for making calls. Now it's a bit more confusing - "phone" doesn't seem right for a device that plays music and provides email access, and the British slang "mobile" seems a bit vague.

There's a revealing connection with the misnomer "Personal Computer". PCs are not personal - we share them at work, at home, and on our travels. They used to be personal, but then the internet happened, and with it the ability to log on to any website from any PC.

It’s far more appropriate to call mobile phones “Personal Computers”, because they are truly personal; we feel naked without them, wherever we are. And due to that same factor - the internet - they are becoming general purpose computers.

In the next couple of years the internet will take pride of place in the mobile phone. For example, imagine if your mobile phone was synchronized with your email account, so you could visit Hotmail from any PC and update your phone address book, view your missed calls, listen to voicemails, and phone someone up, as if from your phone. And it would work the other way round, too - you could read your Hotmail emails and instant messages from your phone. This would provide a unified communications service, with a single ‘inbox’ containing email, instant messages, SMS text messages and voicemails, accessible from anywhere.

How many of us have lost our phone address book along with our phone (or even when upgrading phones)? The unified communications service avoids all these issues by storing the details on a website and synchronizing with the phone.

And what if you could automatically access all your phone's photos and music via your email account? And if you uploaded more to your email account, you could view them too via your phone?

On its way to becoming a truly personal computer, the mobile phone will become a device for accessing your Hotmail or Skype or iTunes account while on the move.

From a business perspective, this creates a huge dilemma for the mobile industry. Does Vodafone become just a mobile ISP? Or does it build a website to compete with Hotmail, Gmail, Skype and iTunes all at once?

From a technical perspective, the vision highlights the absurdity of the .mobi domain and other attempts to create a 'walled garden' - they are betting against the internet. Once you allow phones to access the web, they are exposed to creativity that will overcome any attempt at central control. Small screen sizes will always be a limitation, but there’s a lot you can do given a limited real estate if you really try!

From the customer's perspective, the mobile internet will finally enable the new world promised (but not delivered) by 3G - 'killer' new products and services enabled via websites. Many of these take advantage of physical proximity – see below for some ideas - but all of them rely on the web.

For example, what if your phone had swipe-card functionality? This would enable you to make simple purchases without carrying your wallet around. And it would be internet-based – during the swipe, the phone would fetch a web page that asked for authentication and confirmed details of the transaction. Rather than typing a website into the address bar, the browser is activated by physical proximity. All sorts of creative informational and transactional uses will follow from sticking to web standards.

The personal nature of the phone could also be used to secure web transactions. Banks are introducing two-factor authentication to improve security, which augments the traditional password with a read-out from a physical device. Why should the phone not play this role?

Internet access will also be key to finally introducing mapping services on your phone. Why not use Google Earth on your phone? This would allow you to easily find directions, locate the nearest Chinese restaurant, or even receive warnings when your children wander.

And finally, one of the major gripes about mobile phones is their high international charges. The mobile internet would eliminate these, as there is no geography on the internet. All calls are the price of local calls.

The mobile phone is turning into the truly personal computer, and mobile web access will enable its functionality, from phone calls to emails to swipe cards.

Friday, December 08, 2006

Creative Destruction in the Media

Much has been made of the 'long tail' of the internet - the fact that on the web, niche, amateur producers of content, like YouTube film makers or unsigned MySpace bands, can collectively account for a significant proportion of the industry.

But in reality this presents no danger to existing professional producers - as disposable incomes rise, people will want more and better entertainment, and professionals will comprise the 'quality' end of a much bigger industry. There will always be demand for the Sopranos, or Beyonce, or even TV gameshows, and it's funny how those unsigned bands don't stay unsigned for long!

The real revolution will occur in content aggregation and distribution. The fundamental principle will be 'universal access' - anyone in the world using any internet connection will be able to access every piece of content ever created, whenever they want to. At home, on your phone, at any PC; films, music, live sports, chat shows, home videos & photos, the whole lot, using just a credit card!

It's happening already - witness the flurry of deals that YouTube is signing with media companies, or the success of iTunes, or TV companies rushing to post shows on their websites.

From a business perspective, this means creative destruction. It will turn the industry upside down, destroying many business models and bringing riches to those quick to change. There will be three main effects: globalisation of aggregation, pay-by-advert, and the fall of the conglomerates.

First, globalisation. National media companies will find themselves in competition - if, sitting in the UK, I can log on to a US website to watch sitcoms, why should I bother with ITV? If I can watch the Champions League on a French website, why should I view it on Sky? The only reason we currently have exclusive geographical deals (like the rights to show "Friends" in Germany) is because TV distribution has always been split geographically too. But on the internet, companies can set up as global distributors, in competition with national operators, and reap massive economies of scale (including better negotiating leverage with the producers). ITV will have to decide whether it can compete with HBO in providing US sitcoms, or whether it should stick to national content like the X Factor.

The second effect is that the producers will be paid directly per advert. On the internet, it is possible to target adverts personally to each individual watching a show. Google charges advertisers per click, but TV is less interactive and not likely to generate many 'clicks'. Advertisers could instead pay per view of each advert, and this revenue could be divvied up with the producers. In this model, episodes of 'Friends' would be posted to YouTube, advertisers would bid on a per-viewer basis, and the revenue would be shared among the parties. No exclusive deals, no geographical boundaries, just an open online marketplace.

The third effect is the fall of the conglomerates. There will still be large media companies, but they won't cover the whole value chain (production, aggregation, distribution). People already purchase music from iTunes that was produced by a band signed to EMI, with their local broadband provider as the distributor. In the same way, people will watch films on YouTube that were produced by Universal Studios, with the broadband provider as the distributor. The aggregator (like YouTube) will be interested in maximising coverage fairly for the consumer, which conflicts with promoting the output of a single producer. And broadband providers have tried and failed to push their own media - once you've got someone on the internet, you can't force them to visit your website. So the industry will split into producer (BBC / HBO), aggregator (YouTube / iTunes), and distributor (ISP / telco).

The benefits for the consumer will be huge - the goal of 'universal access' is now achievable.

But is the business world ready?

It seems incredible that this enormous industry is being shaken to its foundations by the web - a technology dreamt up by a physicist only 17 years ago (http://www.w3.org/History/1989/proposal.html). The participants are still coming to terms with the scale and direction of change, and there is plenty of active resistance by the media companies. But the world is changing around them - led by Silicon Valley.

The future of telecoms

Silicon Valley has upturned many industries in its time - now its 'creative destruction' is at work again. The telecom industry is currently undergoing a huge strategic shift:

  • Landline phone companies merging with mobile companies
  • Mobile companies merging with broadband companies
  • The rise of Voice Over IP (VOIP), using data networks to efficiently transport phone calls

Where is all this heading?

The reason landline companies are merging with broadband and mobile companies is that it's vastly less expensive for landline companies to run voice over an IP data network than over a traditional circuit. And thanks to rapid technology advances, it will quickly become even less expensive (and higher quality).

And the mobile industry recognises that 3G is a failure - a massively expensive minor enhancement - but it can gain redemption by offering high speed mobile data access and internet services.

But once you've given your customers access to the internet, you can't make them use your content! This is the lesson all ISPs have learnt - think of the failed old AOL portal abandoned by customers in favour of Google, the 'free email addresses' abandoned in favour of Hotmail, and 'walled gardens' on the mobile web that have failed to attract customers.

Current phone companies - like AT&T or BT - will just sell access to the internet (fixed or mobile). No-one will have a phone number with them, just as no-one uses the email address their ISP provides, or visits their ISP's internet portal for news. People will use Google Talk or Skype for phone calls, Hotmail or Yahoo Mail for email addresses, and cnn.com or bbc.co.uk for news.

The reason? Skype (like its competitors Google Talk and Vonage) understands internet technology, moves at internet speed, and will have more customers than a broadband provider ever can, because it is open to anyone globally on the internet. So it can take advantage of massive efficiencies of scale in providing voice services, and it can integrate them with other services like blogging or instant messaging.

The same will happen in the mobile world (albeit more slowly given lagging technology) - people will purchase mobile access to the internet. But they won't use the mobile phone number they were given; instead they will use their Skype or Google Talk account.

The industry will be split in three:

  • Device manufacturers, selling mobile devices and PCs. Three or four globally, based in the Far East.
  • Broadband providers - offering fixed and mobile broadband internet access. Three or four based in each country.
  • Internet Communications companies - websites offering email and phone services over the internet. Three or four globally, based in Silicon Valley.

Existing phone companies, landline and mobile, will become simply broadband providers. And they will remain utilities, despite all their attempts to break out of this market. Their best option is to receive commission from internet communications companies, by referring new customers. This is what happened this week in the UK, with BSkyB white-labelling Google's products for its broadband customers.

And internet communications companies will offer an integrated suite including email, phone with voicemail, instant messaging, RSS reader, blogging, buddies, community features, personal web-pages as per MySpace, etc. These will be served to any device, with any screen size, whether fixed or mobile. And they will come in two flavours - free but advert supported for consumers, and subscription-based for enterprises. It's a whole lot more than just phone calls, and the telcos can't compete!

I'm sure we'll see many attempts at vertical integration across these industries. But the cultures and business models are so different that such integration will remain an anomaly. Telcos will become utility internet access companies, and all 'content' - including phone calls - will be handled via separate communications companies.

Tuesday, November 14, 2006

Web Security is still primitive

The current focus in web security is on fixing existing issues like viruses and phishing. These are the main barriers to trust on the internet today - and without trust, internet communities, like any other, will die.

But even if we stopped nearly all of these attacks, security on the web would still be primitive. That's because most web architects interpret 'security' too narrowly.

For example, why do I have different usernames and passwords on different websites and on my PC, rather than a single sign-on? This makes it tricky for me to remember which one to type in, causing security issues like account locking and writing passwords down. And could single sign-on be done in a secure enough way that my bank would accept the single password, while I could still choose to remain anonymous on other sites?

And shouldn't I maintain a basic profile that all websites can look up (given my permission), so they all know my latest credit card number or address? And shouldn't there be a central repository where I can find out who knows my details, for example what my phone number is, and where I can accept / deny requests for access to this information?

And how do we solve the 'forgetting problem', where I tell someone a secret and have confidence that immediately afterwards, they forget it? For example, when I use a website to purchase a gift, I don't want them to keep my bank account details - they should be permanently deleted after use. How can I be sure they've actually deleted them?

And how can I use hosted applications, like Salesforce or Google Spreadsheets, while maintaining privacy of my data? Could I store the data locally, but use the application remotely? Or is there a way for me to manage exactly who has access to this data, even though it's hosted remotely?

And should I be able to demand access to all data stored about me by any organization?

Some answers to these questions have already been attempted. Microsoft Passport was supposed to be a security model that all websites could sign up to, but it dissolved. We've recently made some progress in understanding federated security - see the Liberty Alliance - although there is a distinct lack of real implementations.

These questions will become increasingly important as the web matures; data privacy and federated accounts will become a huge part of online security. But we're years from being able to address them. Web Security is primitive at best.

Wednesday, November 08, 2006

XHTML and Internet Explorer need each other

What is the future of XHTML, now that Tim Berners-Lee has recognized its lack of take-up and kicked off further development of HTML? It's clear that even XML enthusiasts are now having to re-assess XHTML and accept that it could be years, if ever, before XHTML gains wide use.

The first reason why XHTML hasn't caught on is the classic chicken-and-egg scenario. Internet Explorer doesn't support XHTML (except as a broken version of HTML) because there are so few developers using it, and developers don't use XHTML because Internet Explorer (IE) doesn't support it.

And the second reason is that, up 'til now, the main benefits of XHTML have been in computer parsing and XML data integration. But while browsers happily tolerate HTML parsing errors, and most structured data still sits in non-XML relational databases, these benefits haven't been enough to outweigh the extra effort of using a stricter language.

With the recent rise of XML standards like Atom or SVG, the benefits of XHTML are much more obvious. Trying to embed HTML content inside Atom, or use XML data sources to populate HTML pages, is currently unsatisfactory due to namespace and mime type problems. And Ajax lends itself to the kind of scripted data transformations that XHTML thrives on.

But without support from IE, this kind of use is unlikely to catch on. So what are the chances that IE will be upgraded to handle XHTML? Up till now, precisely zero - in fact there wasn't any new functionality at all in IE for five years.

Now that IE development has re-started, XHTML could be just what Microsoft needs to regain momentum on the internet. Firstly, as Kurt Cagle points out, XHTML enables them to move ahead with a new platform free of legacy issues, while maintaining support for masses of HTML code. Secondly, XHTML supports their new Live strategy of merging desktop and internet apps - I suspect MS Office developers are now lobbying for IE improvements that they previously stifled.

The current rumours from Microsoft are not encouraging. The IE team have plenty of work to do, even to fix bugs in their support for existing standards such as CSS. And their history of supporting standards they haven't developed themselves is not great.

But the fact remains: without XHTML, Internet Explorer will not reach its potential. And without Internet Explorer, XHTML will not reach its potential.

New uses for RSS / Atom

RSS, like its web standard sister Atom, was designed to support syndication across blogs. By opening up the data behind a set of blog entries, information could easily be shared across the internet without manual copy-paste or messy screen scraping.

Due to the massive growth and enthusiasm for blogs, all manner of technology to support RSS / Atom is now commonly available - downloadable client news readers, website based news readers, blogging sites, browser toolbars, even integration with the Vista operating system.

But like all great technology, RSS / Atom is useful for a lot more than initially realised. In fact, it's becoming one of the foundations of the web. Here are some examples:

  • Business Process alerting engines: use a feed reader to monitor queues and automatically escalate exceptions via email, IM, SMS or VOIP.
  • Document Management: replacing user-generated content in Windows Explorer and other document management systems with a set of RSS / Atom feeds. This allows for syndication, tagging, search, subscription, separation of presentation from content, and integration with internet technologies like CSS, HTML, the URI, and javascript.
  • Synchronization: Using RSS / Atom to manage automated synchronization, for example between Blackberry / iPod and a PC, or between a PC and internet sites
  • Email / Calendar / Contact storage: using simple extensions to RSS / Atom, it's possible to store emails, calendar entries and contact information as a native XML feed. This can replace the dreaded .pst file, bringing the same advantages as Office XML formats did over the old binary files.
The common theme is that RSS / Atom is an internet-based approach to managing sets - whether sets of files, of emails, or of blog entries, it handles them all.
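
As a small illustration of the first bullet above, here is a minimal sketch of a browser-based feed reader that polls a queue feed and flags exceptional entries. The feed URL (/queue.atom) and the 'EXCEPTION' naming convention are assumptions made up for this sketch.

    // Poll an Atom feed of queue items and flag entries needing escalation.
    function checkQueue(feedUrl) {
      var req = window.XMLHttpRequest ? new XMLHttpRequest()
                                      : new ActiveXObject("Microsoft.XMLHTTP");
      req.open("GET", feedUrl, true);
      req.onreadystatechange = function () {
        if (req.readyState != 4 || req.status != 200) return;
        var entries = req.responseXML.getElementsByTagName("entry");
        for (var i = 0; i < entries.length; i++) {
          var title = entries[i].getElementsByTagName("title")[0]
                                .firstChild.nodeValue;
          // A real alerting engine would escalate via email, IM or SMS;
          // this sketch just raises a browser alert.
          if (title.indexOf("EXCEPTION") != -1) alert("Escalate: " + title);
        }
      };
      req.send(null);
    }

    // Re-check the queue feed every five minutes.
    setInterval(function () { checkQueue("/queue.atom"); }, 5 * 60 * 1000);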

Several extensions to RSS / Atom are in active development (as extensions, these work best with Atom, which is namespace-aware): the Atom Publishing Protocol, to handle updates / deletes as well as simple views of data; SSE, to handle synchronization between feeds; and GData, to allow for native email / calendar / contact storage.
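
For example, with the Atom Publishing Protocol an edit is just an HTTP PUT of a revised entry to its edit URI, which a browser can issue directly. The URI and the stripped-down entry below are assumptions for illustration - a real APP server would expect a complete entry, with id and updated elements.

    // Sketch of an Atom Publishing Protocol update: PUT a revised entry to its edit URI.
    function updateEntry(editUri, entryXml) {
      var req = window.XMLHttpRequest ? new XMLHttpRequest()
                                      : new ActiveXObject("Microsoft.XMLHTTP");
      req.open("PUT", editUri, true);
      req.setRequestHeader("Content-Type", "application/atom+xml");
      req.send(entryXml);
    }

    updateEntry("/feeds/tasks/entries/42",
      '<entry xmlns="http://www.w3.org/2005/Atom">' +
      '<title>Order 42 - resolved</title>' +
      '</entry>');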

But even without these extensions, it's clear that over the next few years, RSS / Atom will become one of the most important foundations of the web.

Tuesday, November 07, 2006

Vector Ajax on the web

Ajax technology sat unexplored in Internet Explorer for six years before it sparked Web 2.0 and enabled a revolution in web design. Originally created to support Microsoft Outlook Web Access, Ajax is such a powerful tool that it has enabled rival web-based mail clients to supplant its creator.

Could another hidden gem from the same era create a similar revolution on the web? I think it could - vector graphics.

For years, vector graphics has been a niche product, with specialist uses in diagramming and drawing applications (of which Visio is the market leader). Scalable Vector Graphics (SVG), the XML standard for describing graphics such as circles, rectangles and lines, has been stuttering forward in web standards committees but has not gained major consumer attention. VML, an earlier, more basic standard, has sat unused in Internet Explorer since 1999.

Yet the first stirrings of take-up are appearing, in unexpected areas.

First, VML appeared in Google Maps, to draw driving routes. Second, Firefox was upgraded to include core SVG functionality - so all major browsers now support either VML or SVG.

But it's the rise of Ajax and the internet productivity suite that will drive demand for vector graphics. Google Spreadsheets, the Zoho Suite, and Writely all require drawing functionality that simple images like .jpegs cannot cater for. They use the browser in new, interactive ways - as document editors, not just document viewers - for which vector graphics is uniquely placed.

It's the combination of SVG with Ajax - "Vector Ajax" - that beats competing technologies like Flash. Imagine constructing charts or diagrams in the browser using XML and Javascript, allowing the user to interact with them, then posting the chart or diagram back to a server using a standard HTML form.
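
As a minimal sketch of that idea, the snippet below builds a small bar chart by creating SVG elements from script. It assumes an SVG-capable browser (e.g. Firefox 1.5+) and an XHTML page containing an empty <svg xmlns="http://www.w3.org/2000/svg" id="chart" width="200" height="120"/> element; the data values are invented. An IE version would generate the equivalent VML elements instead.

    // Draw a bar chart by generating SVG <rect> elements with javascript.
    var SVG_NS = "http://www.w3.org/2000/svg";

    function drawBars(svgId, values) {
      var svg = document.getElementById(svgId);
      for (var i = 0; i < values.length; i++) {
        var rect = document.createElementNS(SVG_NS, "rect");
        rect.setAttribute("x", 10 + i * 40);       // position each bar
        rect.setAttribute("y", 110 - values[i]);   // bars grow up from a baseline at y=110
        rect.setAttribute("width", 30);
        rect.setAttribute("height", values[i]);
        rect.setAttribute("fill", "navy");
        svg.appendChild(rect);
      }
    }

    drawBars("chart", [40, 75, 100, 60]);

Because the chart is just more XML in the page, it can be serialized and posted back to the server with an ordinary form or XmlHttpRequest, exactly as described above.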

Using VML in Internet Explorer 5+ and SVG in Firefox 1.5+ and other browsers, websites are springing up that offer:

  • Diagramming (as per Visio) or CAD in the browser (see Cumulate Labs)
  • Interactive Charts and Graphs in the browser (see XY Graph)
  • Graphical widgets like clocks, sliders, and searchable maps in the browser (see SVG clock)
  • Javascript libraries to ease development (see Dojo 2D)
  • Data visualization and manipulation (e.g. drag-and-drop organization charts, garden designs, architectural blueprints, programming language flow etc) in the browser
  • Improved user experience across all websites
These sites are not yet fully featured - but they are a proof of concept and open up the marketplace for a huge range of software. Expect the big beasts (Microsoft, Google) to pour into the void within the next year. For example, drawing functionality should soon become available in Google Spreadsheets. This kind of application should work even better in the browser than in a stand-alone client application, because SVG/VML are just branches of XML, opening up technologies like collaboration, wikis, caching, syndication / alerting, single sign-on, storage, etc.

Ajax has opened developers' eyes to the browser's strengths as a platform. Now Vector Ajax can do the same for the browser as a powerful presentation layer.

Conclusion: Six years since its introduction to the browser, vector graphics is about to revolutionize the web as Ajax did before it, enabling a new generation of rich interactive websites.

Web-based productivity apps - 2

Existing office suites have suffered for years from a lack of collaboration - for example, the ability to work on a document simultaneously with someone else, track versions by different authors, search for a document across a massive file system, and annotate the document with opinions.

The strategy of Microsoft, the market leader, is to introduce Sharepoint as a document management tool to overcome these hurdles, and also to make some improvements to Windows Explorer in Vista.

Yet why should these basic features require a separate tool for advanced users? In a connected world, isn't this kind of usage the rule, not the exception, even outside the enterprise?

Instead, the approach of browser-based application providers, such as Google, is to replace Windows Explorer and Sharepoint with an online file system that incorporates search and collaboration. In addition, no cumbersome upload / download is required, since everything takes place within the browser.

Over time, the C: drive and other client shares will be used solely for system files and program files. All user-generated content will migrate to the internet (or an intranet). This will produce enormous benefits, including advanced search, workflow, backup, tagging, security, client access, syndication / alerting, and browser-recognized URIs.

For the consumer or small business, this file system will be hosted on the internet, supported by adverts. In the enterprise, it will be served via an intranet web server purchased from the application provider, which will address data privacy issues.

Based on the previous post, 80% of the documents available on the file server will be browser-based. Only 20% will require another client application to view or edit, with download / upload.

On top of this file system will reside a set of applications, based on HTML in the browser:

  • Word documents
  • Spreadsheets
  • Project Management
  • Presentations
  • Drawings / Diagrams
  • Graphs / Charts
  • etc
All of these applications are possible - and have already been demonstrated - using existing browsers. The first three can be written in pure HTML, CSS and Ajax. The last three also need either SVG or VML - see a future post on the massive potential for these technologies in current browsers.
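
As one illustration of the "pure HTML, CSS and Ajax" claim, here is a minimal sketch of the editing core of a browser-based word processor, using the rich-text editing mode already built into current browsers. The element id and the save URL are assumptions for illustration, and the script should run after the page has loaded.

    // Turn an iframe into a rich-text editing surface and save its contents via Ajax.
    var frame = document.getElementById("editor");        // assumes <iframe id="editor"></iframe>
    var editorDoc = frame.contentWindow.document;
    editorDoc.designMode = "on";                           // built-in editing mode (IE, Firefox)

    function makeBold() {
      editorDoc.execCommand("bold", false, null);          // toggle bold on the current selection
    }

    function saveDocument() {
      var req = window.XMLHttpRequest ? new XMLHttpRequest()
                                      : new ActiveXObject("Microsoft.XMLHTTP");
      req.open("POST", "/documents/draft", true);          // hypothetical save URL
      req.setRequestHeader("Content-Type", "text/html");
      req.send(editorDoc.body.innerHTML);                  // post the edited HTML back
    }

Everything else - toolbars, versioning, sharing - is ordinary HTML and Ajax layered on top.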

Within the next year, browser-based applications will be good enough to cause a multi-billion dollar headache for the incumbent, Microsoft. And companies like Google will be locking the next generation of users into a new, open, integrated suite, residing solely in the browser.

Web-based productivity apps - 1

Recently there has been an explosion of activity around web-based productivity applications - Google has developed Google spreadsheets and purchased Writely; Sun and OpenOffice have demonstrated the future of office applications in XML; and start-up Zoho has developed a complete online set of applications, from project management to presentations.

Of course, this is Microsoft's natural territory - it has dominated productivity applications for more than ten years now, earning $8bn in profit from MS Office during 2006 alone. But will it cannibalize this business by producing a free online version? I suspect it will instead seek to enhance functionality in its existing Office suite, lock people into Sharepoint, and slash prices to offset gradual decline.

This area is mature, the applications are nowadays technically straightforward, and there are only three reasons why Microsoft is able to earn so much in one year:

  • File format lock-in - everyone else uses MS Office, so to open their files you need to use MS Office too (but the new "open" XML formats in MS Office 2007 will negate this reason).
  • Functionality - MS Office is still the tool to beat for stability, look and feel, and functionality (but it wouldn't take an enormous effort nowadays for a deep-pocketed competitor to overcome this).
  • Inertia - staff everywhere would need expensive re-training to move from MS Office (but they are getting more familiar with the look and feel of internet applications).
As you can see, the barriers to entry in this space are falling quickly, and Microsoft's competitors are likely to thrive in the next few years.

In future, the productivity suite will be divided into two markets, broken down by the 80/20 rule:
  • Browser-based: 80% of people will use browser-based tools. These will be supported via adverts on the internet, or (for the enterprise) by sales of specialist intranet servers that address privacy issues. Although these will not have full functionality, they will be good enough for most people and will also support collaborative and sharing features.
  • Client-based: 20% of people will use advanced client applications, rather than the browser. These are the 'Power Users' who require extra functionality; their applications will be based on the OpenOffice or OpenDocument XML standards.
The advantages of browser-based applications are many:
  • Collaboration and sharing through internet technologies like RSS / Atom, VOIP, social networks / project homepages, chat & discussions, search, wikis, etc.
  • No client downloads required, and users understand the browser interface
  • Development is made easier by taking advantage of tried and tested technologies like URIs, HTML, Javascript, CSS etc; and upgrading the suite is as simple as upgrading the website.
From this analysis, the real competitor to MS Office is not OpenOffice, but HTML-based browser applications, which could take most of the market share. It's pretty ironic that, despite Microsoft not adding any new technologies to Internet Explorer for 6 years, the browser still has enough functionality to decimate their Office business! Client applications will have to run fast to stay ahead of browser development in the next few years.

But which productivity applications will thrive in the browser? See my next post for details!

Monday, November 06, 2006

There's plenty of innovation to come using existing internet technology

The web has moved on enormously in the last five years. Social networking sites like MySpace, blogs, news feeds, search engines and innovative online applications have created "Web 2.0" and whole new industries from scratch.

Meanwhile, the dominant browser, Internet Explorer, gained no new functionality.

So how did Web 2.0 happen? We began to learn the potential of existing internet technology, which is so powerful that it consumes all competing technologies: computer networking, the phone system, and now even newspapers, radio and television. There was so much scope for exploiting existing technology that we didn't need new browser technology to move forward.

Internet technology is based on two forces. The first is the URI and HTTP - to an end user, an address that you type into the browser, but to a computer, an executable command. The URI is the foundation of the world wide web, the best way we've found to distribute applications across the world, and the source of modern collaboration technologies.

The second force is HTML and the browser - a simple, straightforward way to define and view web pages, including hyperlinks, document structure and styles. Millions of people have learnt HTML just by clicking "view source"!

In the last few years, computer scientists have focused on improving the second force, by creating even more powerful languages than HTML - for example, XHTML, SVG, RDF, and XForms. They are clamouring for browsers to adopt these standards.

These improvements will one day revolutionize the web. But not right now - because there's still plenty of room left to exploit existing internet technology.

Ajax technology sat in the browser for six years before it was fully exploited. VML, for vector graphics in Internet Explorer, is still largely unexplored, as is its cousin in Firefox, SVG. We are still learning to use CSS - witness the recent rise of the CSS-only menu. The REST approach to internet applications is not yet mainstream.

But most of all, we are still learning what the internet is for. Originally a set of static homepages, it became a layer over which traditional forms-based applications were deployed. Now we are beginning to understand how important collaboration is (witness the success of Skype and social networking sites). We are beginning to see that client applications such as the basic word processor, calendaring and document management are best done in the browser.

Where is this all heading? Visit this blog soon for some answers!