Wednesday, December 10, 2008

Mozilla futures

A great quote recently from Tristan Nitot, head of Mozilla Europe:

For years, the Mozilla goal was simple:
promote choice and innovation on the Internet
and the biggest lever we had to achieve this was Firefox. In 2003, there was a monopoly (which leads to lack of choice) and total lack of innovation (why would Microsoft invest in something that did not generate revenue and could threaten its business?). Making a modern, safer, easier to use, cross-platform and extensible browser made sense.

Fast forward 5 years or so. We're in a totally different place. The browser market is in much better shape than before. 2009 seems very promising with Safari 4, Chrome 1.0, IE8 and Firefox 3.1. There is absolutely no doubt that keeping on improving Firefox is the right thing to do, but in this new era, the old Mozilla goal sounds less relevant.

Tristan is right. Against all odds, Mozilla has achieved its initial goal. Congratulations!

So, what's next for Mozilla and Firefox? Well, I believe they have to extend the web itself:

  • Mobile: Create a full-power browser for mobile phones, even better than Apple's iPhone browser
  • Sensors: Enable websites to control device hardware such as cameras, microphones, accelerometers, etc.
  • Graphics: Enhance the browser to give it native control over video, vector graphics, plus 2D and 3D rendering.
  • Security: Work to eliminate security issues from the web
  • Performance: Make javascript as fast as C. Use hardware acceleration to speed up graphics until canvas is fast enough to enable modern video games. Take advantage of multi-core machines.
  • Process: Mozilla has proved that a radically open development model works. All their development software, bugs, internal issues, team meeting notes and working designs are on the web for everyone to see and discuss publicly. That includes working very closely with standards bodies so the web doesn't dissolve into rival models. Their challenge is to ensure this approach beats the closed proprietary model. That means they need more market share!

Perhaps the last point is the most important ... Mozilla have set an example to us all by building a huge community for good around the world. Their task will be to expand it further and use it to influence the development of the web.

Wednesday, November 26, 2008


Nokia is at a turning point. Long the dominant supplier of mobile phones worldwide, it faces new deep-pocketed competitors in Google & Apple, plus structural changes in the industry. Nokia is losing market share and needs a clear strategy.

The mobile phone industry has matured rapidly. Until very recently, companies competed by offering features - a better camera phone, more storage for music, a more colourful screen. The big manufacturers created vast portfolios of phones tailored to different market segments and geographies - camera phones, music phones, children's phones, smartphones, and economy phones.

Smartphones have now converged on a standard set of these features. The next battlefield is the software platform that ties it all together. Apple and Google have turned their phones into general purpose digital devices - in other words, personal computers. They are competing by attracting third party application developers to take advantage of the computing power in their phones. Apple's App Store, for example, is one of the key selling points of the iPhone, with more than 5,000 applications already created.

To attract this kind of ecosystem, you need to develop a strong platform. Apple have done this by porting Mac OS to the phone. Google have developed Android. Various other vendors have combined forces to develop LiMo, using Linux. Unfortunately for Nokia, their operating system Symbian is not popular and needs significant investment. Nokia have taken control and will open source the OS. But by the time it's complete, it may be too late.

Nokia have pushed into software and services. "Comes with Music" is an innovative attempt to establish a new business model for digital music, but it's losing money. The purchase of Navteq enabled Nokia to move into location-based services. Neither has yet attracted much interest from consumers. This is an uphill battle.

A new approach

Nokia is still a profitable company, and it still has marketshare dominance. It just needs to follow the principles below:
Device Convergence
Now that the platform is all-important, simplify the device portfolio. Nokia should be offering no more than 3-5 phones in each country, each based on the same platform. After all, Apple have only got one phone, but they're still the second biggest smartphone manufacturer in the world!

Also, now phones are becoming general purpose computers, why not merge with a PC company such as Acer or Dell? The platform should work across all shapes and sizes, perhaps extending upwards from mobile through the Netbook format.

The web is your platform
Fighting against Windows, Mac OS and Linux is a fool's game. Instead, Nokia should align themselves with the biggest platform of all ... the web. Nokia should become a platinum sponsor of Firefox, which is building an amazing-looking mobile browser, and make it the whole front end to their phones. Creating code for a Nokia phone would then be easy - any web designer could do it.

Nokia could create Firefox extensions to give the browser power over local features such as the microphone, camera and accelerometer. Nokia would instantly have the biggest group of developers and the most third-party innovation of any mobile platform, and they would have the future on their side.

Partner with Silicon Valley
The hotbed of software innovation is Silicon Valley. Writing applications is a totally different business to selling hardware, and Nokia is going to struggle with its services strategy. Nokia should partner, not compete, with Silicon Valley. Why create your own email application if you can just recommend GMail? Why create your own photo application if you can just recommend Flickr? Why not partner with Facebook and Twitter to provide the next level of communications?

Remember, the web is your platform, not the operating system. So please, no more do-it-yourself services after music and maps.

Each of these principles could be easily achieved in a year or so. But they would establish Nokia on the right side of the changes taking place in the mobile industry.

Saturday, November 08, 2008

Web sensors

One of the big topics nowadays in client computing is sensors - from touch screens to GPS devices, webcams, microphones, proximity sensors, accelerometers, temperature sensors, etc. Now they are coming to the browser, which could radically change the web.

Sensors provide information about the environment around the computer, and can make the user experience more natural (e.g. "pinching" a touch screen) or open up new features such as GPS navigation.

Recently Microsoft, Google and of course Apple have improved their core support for sensors.

Browsers, too, have made rapid progress on sensor support recently.

Even without an overarching sensor framework, support for geolocation, audio and video should have an incredible effect on the web:

  • Accessibility: Dictate your search to Google via a microphone
  • Games: Play human Pacman by watching on a map where your friends are in the city
  • Communication: Make phone calls using your browser
  • Market share: Yet more reasons to use the web stack, rather than a client stack
  • Security: We'll need strong security to prevent people from hacking into your microphone
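The geolocation item above is already concrete: browsers are starting to expose the W3C Geolocation API to scripts. A minimal sketch of the "where are my friends" idea, assuming only that API (the map URL format is purely illustrative):

```javascript
// Build a link to a map for a given position. The URL format is
// illustrative only - substitute whichever mapping service you use.
function mapUrlFor(latitude, longitude) {
  return "http://maps.example.com/?lat=" + latitude + "&lon=" + longitude;
}

// Ask the browser for the user's position via the W3C Geolocation API,
// then jump to a map of that location.
if (typeof navigator !== "undefined" && navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(function (position) {
    window.location.href =
      mapUrlFor(position.coords.latitude, position.coords.longitude);
  });
}
```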

Thursday, October 30, 2008

Web services

Tim O'Reilly has written another great essay explaining the three types of cloud computing: Utility Computing, Platform as a Service, and cloud-based end-user applications.

For now let's focus on just one consequence: "Web Services" finally get real. Let's go through the various services that are already emerging. Remember, these are not only services for consumers, but also a platform for developers.

Most obviously, identity. This includes authorisation, authentication, presence, and basic user data e.g. email address. OpenID and OAuth are the emerging standards in this space, with support already from Google, MySpace, Microsoft, and Yahoo.

Secondly, social networking basics: Friends & the Activity Stream. OpenSocial is the emerging standard here.

Third, basic content: Images, Videos, Audio, Blogs. The emerging standard here is Atom. However, services will likely add enhancements; for example, the ability to manipulate the underlying files, e.g. removing red-eye from photos or enhancing audio treble.
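Since Atom is plain XML over HTTP, very little code is needed to consume these content services. A rough sketch that pulls entry titles out of a feed (a real client should use a proper XML parser rather than regular expressions):

```javascript
// Extract the title of each <entry> from an Atom feed document.
// Sketch only - production code should use a real XML parser.
function entryTitles(atomXml) {
  var titles = [];
  var re = /<entry>[\s\S]*?<title>([^<]*)<\/title>/g;
  var match;
  while ((match = re.exec(atomXml)) !== null) {
    titles.push(match[1]);
  }
  return titles;
}

var feed =
  "<feed><title>My photos</title>" +
  "<entry><title>Red-eye fixed</title></entry>" +
  "<entry><title>Holiday album</title></entry></feed>";
// entryTitles(feed) → ["Red-eye fixed", "Holiday album"]
```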

Fourth, various additional services:

  • Location-based services e.g. Maps / GPS
  • Time-based services e.g. calendars / tasklists / clocks
  • Messaging (email, instant messaging, phone)
  • Financial services (payments, credit checking, etc)

Finally, professional content:

  • News
  • Sport
  • Financial

All of these services already exist on the web, but they are in silos and can't easily be accessed by developers. Finally now, web application providers are rushing to become platforms for developers, opening up their data using standard patterns like REST and OAuth.

For example, Paypal could offer to track your payments in Google Calendar. Or you could ask the BBC to enter any news items within 20 miles of your house into your Myspace Activity stream. Or you could put your phonecall history there. Or your project management system at work could put events into your personal calendar or task list.

From the consumer perspective, the web will become a lot more connected and personalised. From the developer perspective, there will be a huge number of web services making user data available securely (photos, videos, friend list). Writing a web application will involve plugging in to these standard services. Finally, the vision of web services will become real.

Web Office suites are a dead end

So, Microsoft have finally confirmed that a web-based version of Office is due soon.

That's good news. It means that Microsoft are responding to competition from Google and Zoho; hopefully in turn Google and Zoho will improve their products, which can only benefit the end consumer. It also means that the web has finally broached the biggest consumer software market in the world, the office suite. Web 2.0 has won!

However, while Web 2.0 might have won, I don't think the office suite will survive much longer. Microsoft, Google and Zoho may have faithfully reproduced the troika (word processor, spreadsheet, presentation) on the web, but its time has passed.

We've been stuck with these three applications for so long that it's difficult to see past them. But they've only survived due to network effects: everyone has them, because everyone else has them. It's time to re-examine their purpose.

First, the rise of the long tail. Because the browser is a general purpose platform, all sorts of special-purpose applications can be used instead of an office suite. Why use Microsoft Word to manage your CV, if you can use a dedicated CV site instead, one that gives you CV advice and links you to employers? Why use Excel to manage your personal finances, if you can use a personal finance service that automatically downloads, categorises and charts your bank accounts for you? Why use Powerpoint to explain your business, if you have a business website that does the same and is accessible to millions?

Second, the rise of the widget. Ever seen a video embedded in a spreadsheet, or an interactive calendar embedded in a presentation? I thought not! But because the browser is a general purpose platform, it's possible on the web. Many widgets like these that defy categorisation will spring up. Is it really a spreadsheet if you use it to post photos? Is it really a word processor if there is a table with formulas embedded? The spreadsheet, word processor and presentation will merge together into a single platform with many different widgets.

Finally, of course, the rise of collaboration. To an elder generation, something you did with the Nazis. To the younger generation, the whole point of content. If you can't see your friends, and make your content available to them, then they won't want it. That applies at work even more than outside work. The web office will be embedded inside a social network. It'll look more like Facebook than Powerpoint.

Microsoft's screenshots of the web version of Office look like they've faithfully reproduced Office in the browser. I think this approach will lead to a dead end. The all-powerful office suite is fading fast, and even the web can't save it.

Separating code from content and presentation

Most web designers nowadays know that content and presentation should be separated. There are still a few examples of HTML <table> and <br> elements used for layout, instead of CSS, but this is diminishing.

The same can't be said of separating code (i.e. javascript) and presentation. I still see lots of examples of layout being set via javascript. It's even built into most javascript frameworks - for example, jQuery's css methods allow the likes of $('#div1').css('color', 'red').

That's just wrong. You can't hope to maintain your website's style if you set CSS via javascript - it's difficult enough organising CSS files without having to wade through javascript as well to figure out why a certain style is set. Re-designing your site will require you to change all your code too!

The best solution is to remove all javascript and simply use CSS, with pseudo-classes like :hover and :active to get more control. However, if you're responding to a more complicated event, then you should use classes. Just do $('#div1').addClass('highlighted') instead. You can maintain the actual style in your CSS file, in this case div.highlighted {color: red}.
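As a concrete sketch of the pattern (the element id and class name are just examples), the script below toggles a class in response to an event, leaving the actual style rule in the stylesheet where it belongs:

```javascript
// In the stylesheet, not the script:
//   div.highlighted { color: red; }

// Pure helper: return the className string with cls added (if missing).
function withClass(className, cls) {
  var classes = className.split(/\s+/);
  for (var i = 0; i < classes.length; i++) {
    if (classes[i] === cls) return className; // already present
  }
  return (className + " " + cls).replace(/^\s+/, "");
}

// In the page, the event handler only switches classes - it never
// sets CSS properties directly.
if (typeof document !== "undefined") {
  var div = document.getElementById("div1");
  if (div) {
    div.onclick = function () {
      div.className = withClass(div.className, "highlighted");
    };
  }
}
```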

Similarly, I always try to remove all HTML from my code. It's very easy to get caught up creating and inserting whole DOM trees via javascript alone. But this is inaccessible to search engines, and creates content management nightmares. Now you have to search all your javascript to find where the image came from, as well as the HTML files!

The best solution to this is to remove all javascript and simply use HTML and CSS. However, if you're responding to a more complicated event, then you should have the additional markup already present in the HTML file, perhaps hidden using CSS display:none. You can simply turn it on in your code, without having to create it.
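A minimal sketch of the reveal-don't-create approach (ids and class names hypothetical): the extra markup ships in the HTML, a stylesheet rule hides it, and the script only removes the hiding class.

```javascript
// Already in the HTML, so search engines can index it:
//   <div id="details" class="hidden">Extra details here...</div>
// In the stylesheet:
//   .hidden { display: none; }

// Pure helper: return the className string with cls removed.
function withoutClass(className, cls) {
  var kept = [];
  var classes = className.split(/\s+/);
  for (var i = 0; i < classes.length; i++) {
    if (classes[i] !== cls && classes[i] !== "") kept.push(classes[i]);
  }
  return kept.join(" ");
}

// On some event, reveal the markup rather than building it in script.
if (typeof document !== "undefined") {
  var details = document.getElementById("details");
  if (details) {
    details.className = withoutClass(details.className, "hidden");
  }
}
```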

Wednesday, April 23, 2008

Microsoft Mesh

Details of Microsoft Mesh are finally emerging through the fog of Microsoft's PR department, and I think it's going to be absolutely massive. They've found a way to extend their C: drive monopoly to the web.

Basically, Mesh will turn your PC into an Atom Store, publishing your C: drive to the internet as a set of feeds. You can publish any local Word Documents, images, videos, or even folders.

What's more, your C: drive will obey the Atom Publishing Protocol, meaning other services will be able to post, edit or delete local files. This will be used to synchronise your local content with an online space, presumably a version of Sharepoint with developer APIs. Of course, this will turn every PC into a web server - I suspect the only connections allowed will be to Microsoft's servers.
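If the C: drive really speaks the Atom Publishing Protocol, publishing a local file would amount to POSTing an Atom entry to a collection URL. A hedged sketch - Microsoft has published no API details, so the endpoint and entry fields below are guesses:

```javascript
// Build the Atom entry an AtomPub client would POST to a collection
// (say, the "My Pictures" feed). Entirely hypothetical - Mesh's real
// API is not public.
function buildEntry(title, summary) {
  return '<entry xmlns="http://www.w3.org/2005/Atom">' +
         "<title>" + title + "</title>" +
         "<summary>" + summary + "</summary>" +
         "</entry>";
}

// The client would then issue something like:
//   POST http://mesh.example.com/feeds/my-pictures
//   Content-Type: application/atom+xml
//   ...with buildEntry("holiday.jpg", "Synced from My Pictures") as the body.
```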

This is exactly the kind of service I had in mind here!

Microsoft have finally found a way to extend their dominance of the PC to the web - via the C: drive. Now, your pictures in "My Pictures" will be automatically synched with Microsoft Live Photos whenever you turn your PC on. Why would you ever then manually upload them to Flickr? And your Office documents will be automatically synched with a personal Sharepoint that presumably enables document sharing. What's the point now of Google Apps?

Mesh finally makes the slogan "software plus services" meaningful. And they do it by using web standards - HTTP, Atom and URLs. However, the remaining web standards - HTML, CSS and Javascript - are still being avoided on the client, in favour of binary files like Office documents. That will make it difficult to synch HTML documents created on the web (e.g. lists and tables) back to the client.

This is definitely a half-way house. The only reason synchronisation is such an issue is that we're still storing data locally on clients, rather than in the cloud. But 50% web technology is much better than 0%, which is what Microsoft provide at the moment. Mesh will protect their Office monopoly for a few more years, but it will still surely crumble eventually in the face of HTML5 and CSS3. Microsoft are betting that Mesh will carry them over until they develop more competitive web applications.

Of course, if you have an Apple iPhone or Mac, or indeed use Linux, you will not synch with Microsoft. You will also have no need to synch with Microsoft if you already rely on full web technology, such as Google Apps. But if they can execute, Microsoft will once more be a force in Silicon Valley.

Sunday, April 06, 2008

Language services on the web

Applications like Microsoft Word have embedded spelling and grammar checks for years. So Google's recent release of a web-based API for language translation made me think - just how far could these automated services go?

There are huge benefits to hosting language services on the web, rather than installing them locally on each PC. The clearest is the availability of enormous data sets. For example, it turns out that Google's spell check service is totally automated - there is no manually maintained database of words, it simply searches the web for common character sequences. The top 10,000 sequences must surely be correctly spelt words!
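The frequency idea can be sketched in a few lines: treat words that occur often in a corpus as "correct", and flag anything rare. This is a toy version of the approach, not Google's actual algorithm:

```javascript
// Toy frequency-based spell check: words seen at least minCount times
// in the corpus are assumed correctly spelt; everything else is flagged.
function buildLexicon(corpus, minCount) {
  var counts = {};
  var words = corpus.toLowerCase().split(/[^a-z]+/);
  for (var i = 0; i < words.length; i++) {
    if (words[i]) counts[words[i]] = (counts[words[i]] || 0) + 1;
  }
  var lexicon = {};
  for (var w in counts) {
    if (counts[w] >= minCount) lexicon[w] = true;
  }
  return lexicon;
}

function looksMisspelt(word, lexicon) {
  return !lexicon[word.toLowerCase()];
}

var corpus = "the cat sat on the mat the cat ran";
var lexicon = buildLexicon(corpus, 2);
// looksMisspelt("cat", lexicon) → false
// looksMisspelt("catt", lexicon) → true
```

Google runs the same idea over billions of pages instead of one sentence, which is exactly why hosting the service on the web wins.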

The same brute force data attacks could surely also provide a grammar check service. For automatic translation, you just need to analyse enough Rosetta Stones, where the same text is written in multiple languages. And Google has been operating a free telephone 411 service in the US, supposedly so that it can gather enough data (through recording people's voices) to eventually deliver good speech recognition.

It's also important whether a service is descriptive (merely the result of viewing how language is used) or prescriptive (defining rules for people to follow). The writers of the Oxford English Dictionary claim their work reflects the usage patterns of different words; entries in the dictionary are not meant as prescriptive rules, though it clearly helps if you want to be understood! This is important because a descriptive service could in theory be automated simply by analysing literary data, whereas a prescriptive service can't.

Finally, services that require a semantic understanding of language are clearly some way off.

Service             Authority      Enough Data   Semantics
Speech recognition  Descriptive    Not yet       No
Thesaurus           Descriptive    Not yet       No
Translation         Descriptive    Not yet       Maybe
Encyclopedia        Prescriptive   Not yet       Yes

This table is saying that pretty much every service could be generated automatically simply by analysing huge amounts of data, without the need for understanding. The only exceptions are translations, dictionaries, and encyclopaedias - and for translations, as Google has proved, you can still get a useful part of the way there.

The main takeaway is that there's one massively important side benefit of search engines that has yet to be fully appreciated; they revolutionise linguistics. In fact, they turn it from a mainly qualitative area into a quantitative science.

We now have the tools to analyse language variations as they spread through time and geography, or to discover the common elements in every language, or to watch how language style depends on context, using as a data set the entire internet!

If there's one thing that makes us human, it's language. Computers will help us to understand ourselves!

Saturday, April 05, 2008

On Apple's CSS animation proposal

Apple recently published new proposals for CSS transitions and animations. Having spent some time reviewing approaches to animation on the web, I conclude that their animation proposal has serious shortcomings, and identify a better approach.

Animation - controlling the evolution of styles like position, colour, size, fonts, and layout - has always been a crucial gap for the open web. That's why it's been a key selling point for plug-ins such as Flash, albeit in a proprietary way that doesn't integrate well with the rest of the page. Animation is a good thing and should be brought to the web asap.

For more than ten years now, the W3C's answer to animation has been SMIL. But SMIL has a fundamental problem - it can only animate XML. On the web, you don't want to animate content - you want to animate style. Style is not stored as XML, or even as markup - it's stored as CSS. Finally, Apple has overcome the inertia with CSS animation. Now there is a chance to shape the way it works - hopefully this review will play a role!

There are two fundamentally different ways to evolve style - transitions and animations. In transitions, you don't know in advance what the before and after styles are, you just want to control how quickly the transition takes place for each style property. For example, you could say that background-color always takes two seconds to change, rather than being instantaneous as normal. In animations, you control both the before and after styles, plus the path between them.


Apple's model for transitions is clear and straightforward - you simply apply a transition rule to the relevant CSS properties - for example, perhaps there is a delay of two seconds whenever the div's opacity changes:
div {
   transition-property: opacity;
   transition-duration: 2s;
   transition-timing-function: linear;
}

Transitions enable a huge number of simple effects, from context menus that slide out on mouse over to page sections that fade out when closed.

I like this model because it's simple and orthogonal to all the other styles (you can't set the actual property values, only their timing), yet gives them even more power. Also, the new transition styles follow the proper cascading rules as they are applied through the DOM.

Apple's Animations

Apple takes a very similar approach with animations. Using the same opacity example:
div {
   animation-name: div-opacity;
   animation-duration: 2s;
   animation-iteration-count: 1;
}

@keyframes 'div-opacity' {
  from {
    opacity: 0%;
    animation-timing-function: linear;
  }
  to {
    opacity: 100%;
  }
}

Unlike transitions, animations set the exact values over time of the opacity style, using keyframes. This is where the problems arise.

The first issue is orthogonality. Keyframes provide a new way to set the div's opacity, away from its normal position (under the div selector). This adds unnecessary confusion to parsing and understanding the CSS document - there are now two ways to set a style. It also requires several new CSSOM interfaces to control keyframes via script.

As a result, keyframes have a much bigger issue - they don't cascade. Cascading is one of the most important characteristics of stylesheets - it's the C in CSS. Cascading sets a series of priorities for when to apply style rules, based on the DOM and where they are applied. Because keyframes are a totally separate part of CSS, cascading can't work its magic.

For example, what happens if opacity was set in both the div selector and the keyframe? You could set an arbitrary rule to give one location priority, but it would be just that - arbitrary. And how does opacity apply to any elements inside the div? Apple have proposed that keyframes don't cascade. But this removes much of the power of CSS.

A better approach to animation

There's a better approach to animation that respects both the orthogonality and cascading principles. I also think it's simpler - it certainly requires fewer lines of code. See an example below that does exactly the same as the animation example above:
div {
   opacity: calc(t / 2s * 100%);
}

There are two key elements to the solution:
  • The CSS3 calc function, which enables simple mathematics like multiplication and division.
  • The new standard variable t, which measures elapsed time in seconds, starting at t=0 when the style is first applied to the element
In the example above, opacity would start at t=0 with a value of (0s / 2s) * 100%, which is 0%. After exactly two seconds, opacity would have the value (2s / 2s) * 100%, which is 100%.

Notice that since t is measured in seconds, we need to divide by a time unit (in this case 2s) in order to get the units right. I've also multiplied by 100% to return a percentage unit accepted by opacity.

The benefits of this inline approach are that it maintains both orthogonality and cascading rules - in fact, animated styles cascade in exactly the same way as static ones. It's also easier (and much shorter) to read, and requires no additional CSSOM interfaces.
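Since no browser implements the t variable, the semantics can be pinned down (or prototyped) as a script that re-evaluates the expression every frame. A sketch for the opacity example above - note that clamping at the ends is my assumption, since the draft text doesn't specify end behaviour:

```javascript
// What an engine (or polyfill) would compute each frame for
//   opacity: calc(t / 2s * 100%)
// where t is seconds since the style was applied. Clamping to [0, 100]
// is an assumption, not part of the proposal text.
function opacityAt(t, durationSeconds) {
  var value = (t / durationSeconds) * 100;
  if (value < 0) value = 0;
  if (value > 100) value = 100;
  return value;
}

// A polyfill would drive the style from a timer:
//   var start = Date.now();
//   setInterval(function () {
//     var t = (Date.now() - start) / 1000;
//     element.style.opacity = opacityAt(t, 2) / 100;
//   }, 16);
```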

To provide the complete picture you need additional animation functions, plus a few optional parameters. Following Apple's approach, I recommend the following:

  • animate-ease(time, iterationcount=1, direction=normal)
  • animate-linear(time, iterationcount=1, direction=normal)
  • animate-ease-in(time, iterationcount=1, direction=normal)
  • animate-ease-out(time, iterationcount=1, direction=normal)
  • animate-ease-in-out(time, iterationcount=1, direction=normal)
In addition, the following functions provide more control
  • animate-step(time), which is the step function, returning zero when time < 0s and one otherwise
  • animate-keyframes(time0 value0, time1 value1, time2 value2, ...), which returns a curve smoothly connecting the points via a bezier function. This negates the need for a separate cubic bezier function.
For example, the following styles apply the same effect as above, but eased-in and stepped after 1s rather than linear:
div.ease-in {
   opacity: calc(animate-ease(t / 2s) * 100%);
}

div.step {
   opacity: calc(animate-step(t / 1s - 1) * 100%);
}

The following style iterates linearly every second four times in a row, alternating directions each time:
div.iterate {
   opacity: calc(animate-linear(t / 1s, 4, alternate) * 100%);
}

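The step and alternating-iteration semantics proposed above are easy to make precise in script form. These functions describe the behaviour I intend - they are part of the proposal, not any shipping CSS engine:

```javascript
// animate-step(x): 0 before x reaches zero, 1 afterwards.
function animateStep(x) {
  return x < 0 ? 0 : 1;
}

// animate-linear(x, iterationCount, direction): x is elapsed time divided
// by the period. With direction "alternate", every other cycle runs in
// reverse. After iterationCount cycles, the final value is held.
function animateLinear(x, iterationCount, direction) {
  if (x >= iterationCount) x = iterationCount;
  var cycle = Math.floor(x);
  var phase = x - cycle;
  if (x === iterationCount) { cycle = iterationCount - 1; phase = 1; }
  if (direction === "alternate" && cycle % 2 === 1) {
    phase = 1 - phase;
  }
  return phase;
}
```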
The following style illustrates more complicated animations, by moving an image downwards with uniform acceleration:
div.gravity {
   top: calc(t*t / (1s*1s) * 1px);
}


Under this model, separate animations are implicitly synchronised. For example, consider the following animation:
div.projectile {
   top: calc(t*t / (1s*1s) * 1px);
   left: calc(t / 1s * 1px);
}
The instant a div element is given the projectile class, both animations will be set to t=0, and hence will be synchronised together.

On the other hand, the following animations will not be automatically synchronised:

div.moveright {
   left: calc(t / 1s * 1px);
}

div.movedown {
   top: calc(t / 1s * 1px);
}

var div1 = document.getElementsByTagName("div")[0];
div1.className = div1.className + " moveright";
div1.className = div1.className + " movedown";
The classes have been set at slightly different times, one after the other, and therefore the animations will begin at slightly different times as well.


Animation has the potential to turbo-charge style on the web. It's important that it's done in a way that enables the full power of CSS, including the principles of orthogonality and cascading. Apple's transitions model meets these principles, but their animations model does not, so I have proposed a replacement.

Thursday, April 03, 2008

Myspace Music

Finally, an iTunes competitor that might actually be worthy of the phrase - Myspace Music.

It's got context alongside the content - user reviews, shared playlists, band information, lyrics, etc.

It's got a business model that actually stands a chance (i.e. free to the consumer, with adverts alongside streaming music). What's more, they seem to have the right setup - Myspace and the record companies will share equity stakes in the business, so they will all share in its success.

Technically, it's not restricted by crazy DRM. I like the dual options of both streaming and download - why not provide both, and see which works best?

It sits atop a site already heaving with music discussions. In fact, you could say that a social network has finally provided something for its users to talk about!

Here's hoping they can compete with iTunes, Apple's proprietary platform. I don't see anyone beating Apple for the iPod / iPhone, but it would be nice to have more competition for the content!

Sunday, March 30, 2008

iTunes: the adverts

Surprising Apple rumours appeared last week from, of all places, the Financial Times. Apparently,

Apple is in discussions with the big music companies about a radical new business model that would give customers free access to its entire iTunes music library in exchange for paying a premium for its iPod and iPhone devices.

It's surprising because Apple has been so successful with their existing business model. Why would they go through the risk of changing it, especially if competitors like Nokia already have similar models in place?

I suspect that Apple sense a new market opportunity, and it comes from iTunes. If the iTunes store becomes free to consumers, then its usage will rocket by an astronomical amount - that's the basic law of pricing. Apple could monetize that usage by turning it into a website (rather than client application), and introducing adverts.

Apple's new browser, Safari 3.1, already contains the key components to get this done - the new HTML5 <audio> and <video> elements, plus offline file storage for your music collection. They've been circulating Safari as widely as possible - even on Windows - and now we know why. Apple could make iTunes far more 'sticky' for consumers (and hence get more ad money) by adding context to the music - user reviews, lyrics, recommendation lists, and artist news. For just $20 per iPod, Steve Jobs would be guaranteed one of the most lucrative websites on the internet.

So, is iTunes finally coming to the web? We'll find out by the end of this year.

Sunday, March 23, 2008

The BBC's iPlayer

What a success the BBC's streaming iPlayer has been. It's even managed to single-handedly increase streaming internet usage in the UK by 200% in one month! Not bad for an application strung together in just a few months.

I'm still not sure the BBC quite realises what a revolution it has started. The iPlayer frees the BBC from the tyranny of the channel, which has foisted on us all prime-time game shows, padding TV to fill schedules, minority interests at midnight, and endless repeats of every programme except the one you really want to watch.

I've written down some ideas for enhancing the iPlayer, to show just what's possible with this new platform.

Increase the time limit from seven days to seventy years
The BBC has the most incredible back catalogue of any broadcaster in the world. But much of it is under historic rights agreements that prevent it from being freely available to the public. So the BBC must initiate an enormous programme of identifying and publishing content that's already free, re-negotiating contracts to free up historic material, and ensuring that new material is produced under agreements that allow for endless iPlayer availability.
Make search better
The search function is pretty poor at the moment. Ideally it would be possible to search across actors, episodes, producers, time periods, or even scenes within a show, with the same ease of use as Google.
High Definition
The ISPs might not like it, but why not publish new material in a range of formats depending on bandwidth, including high definition?
Add context
The videos themselves are not enough. As a basic next step, the BBC should embed each video in a page that also explains the credits (as per IMDB). Next, they could add trivia, photos, transcripts, editor's comments, links to related material, and space for user-generated comments. This adds enormous value to the material, making the website far more 'sticky' as users navigate around, discovering related material and forming communities around niche content.
Open up worldwide with adverts
I see no reason why the BBC shouldn't make their content available globally, especially if it pays for itself via adverts for users outside the UK. In fact, this could be a massive new revenue source for the BBC, at no expense to UK citizens.

Satellite TV is a dying industry

In the UK, BSkyB (the provider of Satellite TV) has been dominant for so long that it's difficult to imagine anything else - rival cable companies Telewest and NTL have even neared bankruptcy and been forced to merge. And yet, I expect the roles to be reversed five years from now, because of the web.

Satellite TV is not compatible with the internet, because it's a broadcast technology - satellite dishes can receive signals, but they can't transmit anything back. That makes it impossible to browse the web - how can the satellite know which webpage you want?

BSkyB has two assets - a TV content business, and a satellite distribution pipeline. Its business model has always been to ruthlessly leverage each asset against the other, purchasing football rights to encourage satellite uptake, and then promoting new content to this audience.

As content moves to the internet, BSkyB's business model will fail. It will be left with a legacy asset - the satellite distribution pipeline - that's no longer relevant. It will have to compete in the TV content business on an equal footing with its competitors. And it will have many new deep-pocketed competitors, including Apple and Google (via YouTube).

Rupert Murdoch is an incredible businessman but he will struggle against competition like this!

Friday, March 07, 2008

Apple's new SDK

Well, Apple's new SDK was quite a surprise. It's not just a better version of Safari, though there is one coming. It's a native SDK with full-blown access to iPhone features like the touch screen, video, networking, and accelerometer.

What does this mean? Firstly, it's now clear that "touch" is a new platform, not just a new phone. We'll definitely now see more Apple "touch" devices - not just phones, but perhaps tablets and surfaces. All that SDK work is creating an ecosystem that other devices will slot into nicely.

Who will develop native apps? Apple showed an array of different providers, from the enterprise (Salesforce) to messaging (AOL) and gaming (Sega and EA Games). Personally I think gamers will be the most keen - they will love the accelerometer, advanced graphics and OpenGL tooling.

The last platform

Over the years we've seen some great platforms - Windows, Mac, and Linux come to mind. Now we have the Apple Touch platform. But the Salesforce demo was very instructive; why write an iPhone app when you can just publish a website?

If the Apple Touch SDK becomes very popular, it will be because it exploits the web's weaknesses (e.g. control over sensors such as accelerometers, and video quality animations). That's the other reason why gaming is a natural fit.

This surely won't continue for much longer. The web is closing the gap (e.g. recent work on an HTML 3D canvas element, or my sensors proposal).

Could the Apple Touch be the last great platform before the web subsumes even more? We'll just have to find out!

Thursday, March 06, 2008

Firefox on the iPhone?

Now we've seen the iPhone SDK, a quick thought - why not create Firefox for the iPhone?

Since we already have Firefox on Mac, it shouldn't be too difficult to port - and it would help Firefox developers strengthen their approach for multi-touch.

Personally I would love to use Firefox extensions on the iPhone...

Wednesday, February 27, 2008

Firefox distribution

There's no doubt that Mozilla is on a roll. Their main product - Firefox 2.0 - is still increasing market share every month. Firefox 3.0 is due out very soon, and looks incredible - it absolutely blows the socks off any other browser I've used. With Weave, Prism, Mozilla 2, Mobile Firefox, and Mozilla Messaging in the pipeline, they have a string of blockbusters lined up for years to come.

It's especially impressive given that Mozilla relies totally on downloads. All of their competitors come pre-installed on the major platforms (Internet Explorer on Microsoft Windows, Safari on Apple OSX, Opera on mobile phones). Mozilla have to work uphill, persuading every single user individually they need to download Firefox and register it as their default browser.

Mozilla is an organisation with an intense focus on its mission - to improve the web for everyone. They have a unique and powerful culture that non-technologists find difficult to understand - they are passionate enough to treat their mission as a moral campaign. Firefox is just a means to deliver this mission.

To achieve their goals, Mozilla needs Firefox to have a high market share - otherwise they can't influence the industry. Can downloads be enough? I think Mozilla should be more ambitious. Firefox has gained a reputation as a secure, well designed, fast, intuitive browser.

Taking the next step: distribution strategies

Mozilla should persuade OEMs to distribute Firefox as the default browser. Everyone in Silicon Valley knows that Firefox is better than Internet Explorer. The likes of Dell, HP, Lenovo, Acer and Toshiba can surely be persuaded of this too, especially by offering a cut of Mozilla's search engine funding.

This would be money well spent. It would further the Mozilla mission by bringing the full power of the web to even more people around the world. Mozilla could target certain countries - for example China, where Firefox has only a 2% market share but Mozilla has a freshly signed revenue agreement with a local search engine.

The arrangement could also apply in the mobile space, where default applications are even more entrenched. What about an arrangement to ship Mobile Firefox with Symbian, Nokia or Sony Ericsson?

Obviously, Mozilla should maintain their downloads channel. Starting an additional channel by signing agreements with manufacturers would take Firefox to the next level, helping them influence the industry with openness, standards and the power of the web.

The Big Switch

I finished reading Nicholas Carr's new book, the Big Switch, which describes the rise of the 'cloud' (web applications like Google or Amazon) using an extended analogy from the electricity industry 100 years ago.

Carr points out that companies originally had their own electricity departments generating power, but as the technology matured and economies of scale kicked in, they instead purchased electricity from dedicated utilities. In the same way, he argues that companies nowadays have their own IT departments managing software, but as web technologies mature, organisations will subscribe to websites managed by utilities instead.

It's a powerful argument, and already conventional wisdom in Silicon Valley. Carr states it eloquently and clearly to a wider audience. Executives will love the arguments; IT departments have always been expensive and very difficult to manage, and the prospect of simply subscribing to websites instead will remove many a headache.

Carr lists Google, Yahoo, Salesforce and Amazon as being at the forefront of this change. In fact, he implies that there is only space for three or four mega-suppliers of web applications. I reckon that's not true, and it's an area where Carr could have spent more time.

For example, the financial services industry still spends far more on technology than the search engine industry. Some of that is spent on email systems and word processors, which could be procured from Google instead. But the vast majority is spent on trading, lending, sales, securitisation and investment systems; Google is not a bank and therefore can't compete with this. Banks will never give these systems away because in financial services, knowledge is power. Citi and HSBC belong on Carr's list of mega web suppliers - it's not just Silicon Valley!

The Big Switch is targeted at an executive level of readership, as you'd expect from a former editor of the Harvard Business Review. I think it hits the mark pretty well - it doesn't go into technical explanations (we're still missing that book!) but explains the likely social and organisational consequences of the web in a clear, engaging manner.

I wasn't so impressed with the other sections of the book, which debunk the techno-utopians who assume society can only benefit from the cloud, and explain how the web is becoming a form of Artificial Intelligence. It might be well put, but it's not really news - the web is hardly unique as a new technology in having winners and losers. Also I suspect Carr is overhyping the power of the web's AI.

Overall I was impressed with the style and subject matter. Carr has hit on a fundamental transformation in IT and the book will help business executives - and IT managers - understand and prepare for the changes to come.

Tuesday, February 12, 2008

Too many mobile operating systems?

Vodafone's CEO, Arun Sarin, has used his Mobile World Congress speech to call for mobile OS consolidation. He's claiming it's a pain to develop for up to 30 different incompatible systems (though, in a clear reference to Microsoft, he also confirmed he didn't want just one).

Smartphone operating systems are such a new and rapidly developing field that it's not surprising there are so many. There will naturally be consolidation as the big players invest.

It's incredible to me that he just doesn't get the answer - develop in HTML, CSS and javascript! Modern browsers like Firefox 3 and Safari 3 contain all you need - including HTML5 offline storage - to deliver compelling applications. Sarin's developers are focused on the wrong layer in the stack.

The theme of the 2008 Mobile World Congress is supposed to be internet applications. That's a start, but still not clear enough. Let's hope the 2009 theme will be web applications.

Tuesday, February 05, 2008

Tabbed Browsing

Tabbed browsing has been one of the key recent improvements to the web. It's made it far easier to work with multiple pages - many people keep dozens of tabs open for days, waiting for the opportunity to read or complete them. It was one of the main selling points behind Internet Explorer 7.

And yet, tabbed browsing is terrible. You can't resize, reshape and move tabs, like you can normal windows - they're all stuck at the size of the browser window. You can't search across every tab. And they blatantly overlap with the taskbar, for those using Windows.

Is there a better approach than tabs? Sure - think how you organise pieces of paper on a desk. They're in piles, at various angles, and at any point you can bring them to the front and work on them. But hmm - pieces of paper tend to get lost or crumpled under others.

I still don't think anyone has properly implemented a simple, powerful, and intuitive interface for working with multiple documents visually. That seems ridiculous - what on earth have we been doing for so long!

We can guess at what a solution might look like - a multi-touch screen allowing document resize and zoom, a quick search function across open documents, some way to remember default document dimensions. Some combination of Mozilla SVG photos and Jeff Han's multi-touch.

In the meantime it's worth pointing out that, for all their advantages, tabbed browsers are only a quick and dirty fix to the problem of working across multiple documents.

Friday, February 01, 2008

Microsoft bids for Yahoo!

So it happened. Microsoft finally took the plunge and made Yahoo! an offer they couldn't refuse.

Microsoft's reasoning is straightforward - they want to catch up with Google in the search and advertising business, which will require tens of billions of dollars of capital investment in the next few years. Sharing that load is a no-brainer; this is a game where scale wins.

Though Microsoft are focusing on the first two elements of Google's tagline "search, ads and apps" with their acquisition, I find their apps strategy - "Live" - far more interesting.

Live has never seemed coherent. There is Microsoft Live, Windows Live, and Office Live. There is Hotmail Live, not to be confused with Windows Live Mail. All of these products overlap in confusing ways with their traditional client software equivalents. It's an utter mess, and it still seems to be going nowhere, perhaps due to cultural problems - Microsoft still don't seem to get the web.

Similarly, Yahoo's apps seem to have no connections or synergy between them, and they have a serious "peanut butter" prioritisation issue. However, in Yahoo's case, they at least own some incredible assets (Flickr, Yahoo Mail, Yahoo Music), and some talented people that truly understand the web.

Hopefully the merger will force both companies to list their apps and place them in a simple, overarching framework. For example, a matrix with content types (text, raster images, vector images, audio, video) versus functions (CRUD, publish, collaborate, version, syndicate, search, store). That would even beat Google at their goal of 'features, not products'. And with every month they dither, Google will move even further ahead.

Wednesday, January 30, 2008

Browser as information broker

The phrase "browser as information broker" has been around for a year or so, and finally some of the vision is becoming reality.

My interpretation of that vision is that the browser will link data on a webpage to services that consume it. For example, if a date appears on a page you're browsing, you could drag it to your calendar application - or if a location appears, you can open it up in a map.

It's a powerful vision, and it's actually part of the Semantic Web mission. As Tim Berners-Lee explains, we've already gone from linking computers to linking web-pages. Now we need to take that next step - linking data.

There are three parts to solving this problem - semantics, services, and connections.


Semantics
Firstly, web developers have to mark sections of a page with meaning - this is a location, that is a date, etc. - in a standard way that computers can understand.

There are several active approaches:

  • HTML itself has existing elements (e.g. the "unordered list" element, <ul>) which indicate meaning, and more are being added in HTML5 - e.g. the <time> element.
  • HTML has several attributes, especially "class", that can be used in a semantic way. People are trying to standardise class names and HTML structure to indicate data like dates and locations; this is called Microformats.
  • Groups can register protocols for various content types. For example, there is a common "mailto:" protocol in HTML links, which commonly opens up an email application. Protocols are established via internet standards.
  • The W3C is pushing an ambitious new language, RDF, as the foundation of its Semantic Web vision.

In the wild, the first three approaches have good momentum, perhaps because they work well with existing technologies, though they seem to compete with each other. If they hit limitations, RDF will be the obvious choice!
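As a small illustration of the first approach, here's a minimal sketch of how a script might pull machine-readable dates out of HTML5 <time> elements. The regex is a simplification for brevity - a real browser or library would walk the DOM instead:

```javascript
// Minimal sketch: extract the machine-readable datetime values from
// HTML5 <time> elements in a fragment of markup. Simplified for
// illustration - assumes double-quoted attributes.
function extractDates(html) {
  var dates = [];
  var re = /<time[^>]*datetime="([^"]+)"[^>]*>/g;
  var m;
  while ((m = re.exec(html)) !== null) {
    dates.push(m[1]); // the standardised date, e.g. "2008-03-07"
  }
  return dates;
}
```

Once a browser can find dates like this, connecting them to a calendar service becomes a user interface problem rather than a parsing one.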


Services
People have to develop websites to manipulate data. Actually, a lot of this has already been done - what is Google Maps except a service to manipulate location data, or Microsoft Live Calendar except a service to manipulate dates & times?


Connections
The browser has to connect the user to relevant services when it spots data. For example, when it spots a location, it should present a nice interface that allows the user, if they desire, to view it in Google Maps.

The forthcoming Firefox 3 enables these connections for protocols: it implements the HTML5 API for registering a website as the handler for a particular protocol, which tells the browser to use (for example) Yahoo! Mail whenever it sees a "mailto:" link.
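To make this concrete, here's a rough sketch of how a webmail site might use the registration API, and what the browser roughly does with the registered template when a "mailto:" link is clicked. The site names are made up for illustration:

```javascript
// Register a (hypothetical) webmail site as the "mailto:" handler.
// registerProtocolHandler is the real Firefox 3 / HTML5 API; the guard
// lets this sketch load outside a browser too.
if (typeof navigator !== "undefined" && navigator.registerProtocolHandler) {
  navigator.registerProtocolHandler(
    "mailto",
    "https://mail.example.com/compose?to=%s",
    "Example Mail"
  );
}

// Roughly what the browser does with a registered template:
// substitute the escaped link URI for the %s placeholder.
function resolveHandler(template, uri) {
  return template.replace("%s", encodeURIComponent(uri));
}
```

So clicking a mailto: link would send the user to the registered site, with the full address passed along in the query string.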

It will be fascinating to see how this evolves. To become popular, web developers will have to be confident that high quality services exist around a protocol. What protocols will make the grade?

There are no (or very limited) automated browser connections for any of the other semantic approaches (HTML tags, microformats, and RDF). I would therefore predict that protocols will become the favoured approach to marking up HTML with extra meaning, with perhaps the exception of HTML5 <time>, which will work great with web forms.

Saturday, January 26, 2008

Browsers are slow

When Safari 3 was announced, the marketing pitch centred around its 'blazing speed'. Steve Jobs even used precious keynote time explaining a chart comparing browser execution speeds for a javascript benchmark.

That was great news, mostly because it refocused debate on the current shockingly poor state of browser performance. Take any site with a bit of Ajax code, e.g. Gmail, and it's guaranteed to be far slower than a client equivalent.

In the past, javascript speed didn't matter. If you're using a dial-up modem, then the limiting factor is bandwidth, and you don't notice javascript performance. If you're accessing a Web 1.0 site with limited javascript, there won't be a delay.

But the world has moved on. Modern client applications - such as games - ruthlessly use native processing power, including rich graphics and parallel threads. Browsers just haven't caught up.

It's encouraging to see, particularly in the Mozilla community, some serious debate about how to speed things up. Work has started on Tamarin, a project to compile javascript to native code on the fly. Firefox 3 will use the Cairo graphics software library, which can take advantage of hardware acceleration. And there's been great debate about making parallel browsers.

It'll be a few years before most of these efforts come to fruition. In the meantime, Moore's law and a few performance tweaks will help a bit, but we'll be left with great web applications unwritten due to performance concerns.

Thursday, January 24, 2008

Writing a browser in HTML

Browsers contain two basic components - a rendering engine (which displays HTML, CSS and javascript), and the chrome (the browser interface, including back button, URL bar, favourites, settings, etc).

Though web developers only worry about the rendering engine, users mostly care about the chrome. Tabbed browsing, the search bar, well organised history and favourites, and full page zoom controls are recent chrome innovations critical to improving the user experience.

But how do browser makers write the chrome? Not using HTML, CSS and javascript - it's like they don't trust their own code!

Instead, they use their own user interface frameworks - Mozilla, for example, has a markup language called XUL. If you look at XUL, it's pretty much a non-standard competitor for HTML5. It's been great for Mozilla until now, of course, but what's the point once you have HTML5 itself? Why maintain code for two separate markup languages?

Using HTML5 to program the browser chrome would make extensions, such as the popular Firefox extensions or even single-use applications like Prism, vastly easier to develop. It would also simplify browser code and reduce its footprint.
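To make the idea concrete, here's a sketch of the kind of chrome state - tabs and per-tab history - that could live in plain javascript behind an HTML interface, instead of in XUL. All of the names here are illustrative, not any browser's actual internals:

```javascript
// Hypothetical chrome state for an HTML-based browser UI:
// a list of tabs, each with its own navigation history.
function TabList() {
  this.tabs = [];
  this.active = -1; // index of the currently focused tab
}

// Open a new tab showing the given URL and focus it.
TabList.prototype.open = function (url) {
  this.tabs.push({ url: url, history: [url] });
  this.active = this.tabs.length - 1;
};

// Navigate the active tab to a new URL, recording it in history.
TabList.prototype.navigate = function (url) {
  var tab = this.tabs[this.active];
  tab.history.push(url);
  tab.url = url;
};

// The back button: step the active tab to its previous URL.
TabList.prototype.back = function () {
  var tab = this.tabs[this.active];
  if (tab.history.length > 1) {
    tab.history.pop();
    tab.url = tab.history[tab.history.length - 1];
  }
  return tab.url;
};
```

The HTML and CSS on top of this would just be ordinary web development - exactly the point.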

Finally, HTML browser chromes would be a real test of HTML5, CSS3 and javascript, overcoming the online / offline schism (the chrome appears even if you're offline) in a novel way.

As HTML5 gains stronger support in browsers, we may see a new tipping point, where HTML becomes the default user interface framework even for local client applications. We'll see an HTML browser chrome in the next few years.

Thursday, January 17, 2008

Thoughts on OpenID

Web single sign on has been the stuff of dreams for - well, for as long as the web has existed. Microsoft's much-derided Passport - placing all control in the hands of that institution - was the last serious attempt. Now, finally, we have an open, distributed standard that puts control with the user - OpenID.

Yahoo's implementation of OpenID is a massive fillip for the standard. Although Yahoo is only a provider of accounts - it won't accept accounts created elsewhere - it triples the ecosystem of OpenID accounts, making it ever more likely that the next generation of start-ups will consume these IDs.

OpenID has a key architectural advantage - usernames are URLs, not email addresses. That means you can tell someone your OpenID without getting spammed.

Trouble is, the Yahoo OpenID most users will want is the one based on their Yahoo username, of course. And if you give that out, people will be able to guess your email account pretty easily...

I have no idea how Yahoo (or anyone else) will prevent this. Perhaps the secret is to have a different email provider to your OpenID provider - though if someone asks for your email address, it feels impolite to ask them to look it up at your OpenID URL!

It's a social issue as much as a technical one. OpenID has the chance to make life on the internet so much better, let's hope it grabs its opportunity!

The TV and the Computer

The fight for the digital living room continues. Apple TV, the XBox 360, Microsoft's Home Server, and the set top box all compete to provide multimedia services to the family.

This is horribly wrong. I just can't see the value in having a whirring black box control center in the living room - it's a single point of failure, it's a closed solution (since everything else must plug into it), it's a bottleneck against content on the web, and it forces Dad to play system administrator!

As William Gibson said, "the future is already here, it's just not evenly distributed". Look at the iMac. Take away the keyboard and mouse, and what does it look like? A TV.

Now imagine it only has one application - the browser - and that it boots up in 2 seconds, like an iPod. Your Flickr photos, Amazon Music and BBC iPlayer programmes are now available, on demand, from the web. You can purchase another TV, put it in the kitchen, and access the same websites - there's no need for a central controller or set top box.

For the remote control, all you need is a wireless mouse! Instead of pressing channel numbers, you navigate between your browser favourites. You can type a new URL or search query using a simple onscreen popup keyboard (unless you really want to connect a full wireless keyboard).

All this is surely feasible today. How much extra would it really cost to include a stripped down Linux OS with Firefox on a $1500 widescreen TV?

The future is putting browsers in TVs. I really don't think that even Apple and Microsoft will be able to stop it.

Tuesday, January 15, 2008

Apple's strategy: iTunes

With their latest Apple TV product, Apple's strategy is becoming ever clearer: tie everyone to iTunes.

Want to synch your iPod or iPhone? Use iTunes. Purchase new music? Use iTunes. Rent movies? Display photos on your TV? Store your calendar, address book and notes? iTunes is Apple's answer to every question about content.

This unsettles me. Apple's customers are tying all their data into a proprietary, closed client application. You might be able now to import Flickr photos into iTunes, but what are your chances of ever exporting them back?

At a time when openness is not just a buzzphrase, but a basic principle of many in Silicon Valley, Apple are probably the only major company still seriously trying to build a walled garden. They truly do 'think different'!

I personally hope their iTunes strategy doesn't succeed. Their fabulous hardware and awesome user interfaces - in particular, the iPod Touch - are beguiling users into data hell.

Companies like Yahoo and Amazon have a massive opportunity to build a competing stack, based in the browser using open technologies such as HTML, RSS, and even the forthcoming HTML5 audio and video elements. Ironically, these very technologies have superb support in Safari, Apple's browser.

With iTunes, Apple are betting against the web. Time will tell whether this strategy works.

Saturday, January 05, 2008

Feeds, set theory, and an Atom DOM

Some time ago I wrote a brief study comparing internet computer science with fundamental maths. I argued that they should coincide, because fundamental maths represents thousands of years of experience about modelling concepts - which after all, is what computer science is all about too.

Missing from the internet (I wrote) was one maths subject, probably the most fundamental of all - set theory, concerning unordered collections of objects. Basic set operations include cardinality (i.e. number of members), union, and intersect - remember those Venn diagrams!

I've belatedly realised, of course, that there is a very important use indeed of set theory on the web - RSS. Feeds are sets! Originally designed to be a blogging platform, RSS (or equivalently, Atom, its better formed sibling) is showing up in all sorts of other places (tagging, email / calendar apps, photo sharing, ...) because it executes perfectly such a simple and powerful concept. The members of a feed set are URLs, which can represent anything - that's why RSS is so powerful.

Indeed, libraries and services such as Yahoo! Pipes have emerged to offer many of the concepts of set theory, including the basic functions of cardinality, union, and intersect, plus slightly more advanced ones.

An Atom DOM

The one thing that maths teaches about sets is that they're critical to pretty much everything else. I would expect web developers to discover the same thing; I wouldn't be surprised if a native browser 'Atom DOM', offering the basic set functions, sprung up. After all, we already have a DOM for XML and HTML, the other two web formats!

What would an Atom DOM look like?

At its most basic level, you'd just need an object to represent the feed, exposing its properties and the elements in the feed, alongside perhaps the feed's cardinality. This alone would save lots of effort for Ajax developers!

For me, the methods of the feed object would be more interesting. Membership, subsets (perhaps created via user-defined filters), union, intersect, cartesian products, power sets, sorts - each would provide a wealth of opportunities for developers.
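Here's a sketch of the shape such a feed object might take, with entries represented simply as an array of URLs. None of these names are real APIs - they're guesses at what an Atom DOM could offer:

```javascript
// Hypothetical Atom DOM feed object: a set of entry URLs with the
// basic set operations. All method names are illustrative.
function Feed(entries) {
  this.entries = entries;
}

// Cardinality: the number of members in the set.
Feed.prototype.cardinality = function () {
  return this.entries.length;
};

// Membership test.
Feed.prototype.contains = function (url) {
  return this.entries.indexOf(url) !== -1;
};

// Subset via a user-defined filter.
Feed.prototype.filter = function (pred) {
  return new Feed(this.entries.filter(pred));
};

// Union: all entries from both feeds, without duplicates.
Feed.prototype.union = function (other) {
  var merged = this.entries.slice();
  other.entries.forEach(function (url) {
    if (merged.indexOf(url) === -1) merged.push(url);
  });
  return new Feed(merged);
};

// Intersect: entries common to both feeds.
Feed.prototype.intersect = function (other) {
  var theirs = other.entries;
  return new Feed(this.entries.filter(function (url) {
    return theirs.indexOf(url) !== -1;
  }));
};
```

Merging two feeds, or finding the stories two friends have both bookmarked, would then be one-liners for Ajax developers.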

I doubt that an Atom DOM will exist for several years; the Atom working group disbanded a few years ago, having successfully published its version 1.0 recommendation.

But if an Atom DOM were implemented, it would be tremendously powerful for web developers. Thousands of years of fundamental maths can't be wrong!