Monday, December 31, 2007

The rise of Webkit

Webkit, the browser engine behind Apple's Safari, has had an incredible six months.

  • First, the iPhone - the first mobile device with proper internet access that people like to use, based on Webkit
  • Safari was released for Windows, and bundled with iTunes, ensuring a huge distribution
  • Webkit is the foundation for Google's new mobile platform, Android, which will surely be massive next year
  • Nokia uses Webkit for its Series 60 browser in its flagship smartphones
  • The KHTML open source Linux developers announced they are merging their work back into Webkit

From a standing start of 3% of browser market share at the start of the year, Safari is already up to more than 5%. I wouldn't be surprised if it reached 15% by the end of 2008, if its momentum (and Apple's) continues.

This is great for the internet: a popular open source project, a keen proponent of existing and forthcoming standards such as HTML5 and CSS3, and an engine able to innovate rapidly (witness the recent addition of CSS animations and transformations!)

Good luck to Webkit for next year.

2008: the mobile web

My prediction for 2008: it'll be the year of the mobile internet, the year when it suddenly becomes obvious that mobiles need the same access to the web as any other device.

All the major device manufacturers will launch businesses based on this idea:

  • Apple, with the iPhone and perhaps a new slimline tablet Mac using Webkit, the open source browser engine
  • Google, with the rollout of Android, their mobile platform using Webkit
  • Nokia, with their Symbian smartphones such as the N95 becoming a major part of their business, using Webkit
  • Mozilla, with their new mobile version of Firefox

In addition, there's the eagerly awaited spectrum auction in the US, the continued rollout of HSDPA mobile network speeds across the world, and moves around the world towards open network access.

Even on the internet, businesses have to be planned several quarters in advance, and the four players above have spent 2007 scrambling to be ready to execute during 2008. That will widen the gap with those still working on their current businesses, chief among them Microsoft.

2008 will be the year that companies stop spending time developing Java-based mobile applications. They'll just post them on the web, and attract mobile users to browse to them instead. During 2008, we'll see the first mainstream mobile internet sites for:

  • Address Book, SMS and Calendar
  • Music and Video players
  • Social Networking
  • Email

In other words, we'll see Silicon Valley catch up with the new opportunities presented by fast network access and sophisticated, easy to use devices.

Sunday, December 23, 2007

Microsoft's Internet Strategy

Microsoft is in a bind. It simply isn't competitive with internet companies in search or advertising. It's having to compete against free and ever-improving open source applications like Firefox and Apache. Due to issues such as its lack of standards compliance (e.g. with CSS), the company has lost its good reputation with web developers. It seems to be fighting too many battles, and hasn't convinced with Live, its new suite of online applications - which could in any case cannibalize some of its most profitable products.

Most of all, Microsoft hasn't figured out how to capture the internet wave. It's being totally left behind on the web.

I'd like to propose a solution. It's a web solution, and one that plays to values that Microsoft has always understood - "developers, developers, developers".

Why not offer a web-based version of Visual Studio - call it Developer Live - that hosts web applications? Go the whole hog - different language choices, bug databases, software configuration management, test environments on the fly, GUI design tools. Charge by memory & processor usage, with reduced rates for students.

This strategy takes advantage of Microsoft's incredible expertise in development tools and server systems.

The market is enormous - every application in the world! It avoids open source, which can never pay for massive data centers. It builds a whole new developer ecosystem, and no-one else (except maybe Amazon) seems to be thinking about it.

Microsoft would then offer three things - apps (e.g. Office), development (Developer Live), and clients (e.g. XBox / Vista). That's much cleaner than today's sprawl. And most importantly, they would regain clear leadership of the most important platform in the world - the web.

Thursday, December 20, 2007

Open source's limitation

The last five years have seen the triumphant emergence of open source software. Linux, Apache, MySQL, and Firefox in particular have been huge forces for good across the industry.

By providing free reference code for key applications, open source has lowered costs for everyone else, encouraged innovation, and countered the vendor lock-in that seems so prevalent in IT.

Open source is also part of a fascinating cultural movement, spreading the wonderful idea that ideas - including code - should be free from control by any organisation or individual.

There is one limitation to this model - it only works for code (or system designs in general). If I want to actually run the application, I need to purchase hardware.

This might seem a trivial limitation, but it's already becoming important. Can you imagine an open source version of Google? No, because you would need to invest tens of billions of dollars to even run it! How about an open source Expedia, or Flickr, or Facebook? In fact, have you ever seen an open source website? Not even Wikipedia - if someone copied their code and created a clone website, they'd be pretty angry!

So far as I know, all these websites run on open source technology - Linux, Apache, PHP, etc. But they aren't open source themselves.

Some code only needs to be run once, for example, websites - the whole point of the URI is that you don't need two of them that do exactly the same thing! Other code can be re-used in many different places - I call this system software - operating systems and other applications 'under the hood'.

And that's the fundamental limitation of open source - in a world of web applications, it's only used as system software. When all applications are web applications, you won't have an open source office suite, an open source graphics package, or indeed any open source consumer applications except the browser / operating system which is local to every client.

It's not a bad thing - system software is what drives the entire internet - it just needs to be recognised that, on the web, open source's role will be restricted to under the hood.

Future of IT: data centres and web-based development tools

Continuing my series on the future of IT, I originally planned to write two more posts, one on the future of data centers, and the other on the future of application development. But they quickly became the same article, because I predict that the two will come together.

Current application development tools, like Eclipse or Visual Studio, cover only a small part of the end-to-end process. What about business requirements, analysis, design, configuration management, testing, collaboration, release management, bug tracking, and hosting? It's spread across a huge range of systems that don't integrate effectively.

It's been pointed out before, but probably the least automated process I can think of is application development. Imagine a website that stored your code and allowed you to edit it, compile it, manage software configurations and releases, maintain bug databases and create test environments on the fly, design user interfaces using WYSIWYG, and hosted the resulting application. Would you really go back to Visual Studio?

Following the Amazon web services model, developers won't need to know anything about the underlying hardware - they would just see their memory, CPU, network bandwidth and storage usage, and be charged appropriately for each. The data center is totally behind the scenes and provided by the development tool vendor.

Which brings me to the data center. In the last 18 months, the data center of the future has become very clear, and all the major vendors are rushing to deliver what can be described in one word: virtualisation. Instead of building separate storage, database, and processing environments for each application, why not just build a farm for each, to be used by any application as it needs it? Capacity is simply added every month by plugging in a few more servers, based on demand.

Call it data center 2.0 - the hardware has become totally commoditised, and the value has shifted to the management tools that plug everything together, which is why vendors like HP (with OpenView) and IBM (Tivoli) have been investing so heavily.

Data center 3.0 is what happens when the management tools become commoditised too, as they surely will very quickly. Only Amazon, with its web services, is really positioning for this. Data center 3.0 is when the value shifts to the only place left - the development tools. Data center 3.0 is when developers outsource their data centers.

Nicholas Carr is describing it in his book, The Big Switch. There will only be three or four major companies with public data centers globally - HP & IBM, plus a few. Each will invest tens of billions of dollars in computing equipment, and between them, they'll host most applications in the world.

In addition, a small number of massive corporations will maintain their own private data centers, in an effort to maintain a competitive edge. That will include the major investment banks, plus internet companies like Google. No one else will be able to compete with the sheer investment and scale required.

The benefits for developers will be huge. Logging in and selecting your language of choice, having compilation and debugging reports done for you, test environments created on the fly via a web interface, a bug database linked to an automatically populated software configuration management tool - all without even having to worry about the data center. What an advantage.

For corporations, the model offers a way to avoid having to manage complex and expensive data centers, and avoid capital expense in favour of monthly bills that scale with use.

Traditionally, data centers have always existed for application hosting. In the future, they will be for application management - not just hosting, but development, problem, configuration & capacity management. That's where the value is. Development tools will become the developer's front end to data center 3.0.

Tuesday, December 18, 2007

Driverless cars

Driverless cars have caught the imagination for years; mostly the conversation turns to advanced technology (satellite navigation, laser collision detection, automatic parking), but for me it's even more interesting to guess how technology might turn business models upside down.

Once cars can drive themselves, taxis get far cheaper. That's because you're not paying a taxi driver's wage. Far more people will take taxis, which will mean there are more taxis on the roads, which in turn will shorten taxi waiting times, which will attract even more people to take taxis - a virtuous circle.

I don't mean to imply that no one will drive cars or own them - there's satisfaction and freedom in driving on open roads that can't be beaten. But perhaps not for the weekly slog to the shops, or the commute, or the evening out, or the occasional traveler.

What happens when a significant proportion of cars are driverless taxis? The rise of the taxi company - massive customers for Motown with the power to negotiate prices down significantly. What's the natural response? Motown buys the taxi companies. They will no longer sell machines - they'll sell the service of transporting people; it'll be Ford taxis versus Toyota taxis.

Of course, for the next decade at least, we'll see only "assisted driving", not "no driving" - better aids to navigation, steering, braking, and cruising to help the driver concentrate on reacting to the road.

But the auto industry will surely learn the meaning of the phrase "disruptive innovation".

Saturday, December 15, 2007

HR and corporate identity management

I've followed the recent surge of debate on identity management with interest. Massive government data loss, new social network APIs, and gains for OpenID have shown the importance of identity management, and fundamentally changed its landscape.

Technology nowadays seems generally led by consumer markets, not enterprise ones. However, in the enterprise, many of the pieces are already in place, if unevenly distributed. For example, many corporations have purchased single sign on applications from the likes of IBM and CA. Similarly, HR suites from SAP and Oracle compete to automate the management of personal information, and most firms have a web-based corporate directory with people's contact details.

What's missing are the connections between these tools. The HR suite - Peoplesoft, for example - should contain the corporate directory. Employees should be able to go in and update details via the directory page about them. And this page should also define their Single Sign On identity, via OpenID.

In this picture, HR is identity management. A breath of Web 2.0 should hit monolithic HR applications!

Future of IT: the IT view

Continuing an occasional series, I'm taking a look at the IT department of 5-10 years from now.

Most IT departments break down into four components - client support, development, operations, and IT management. Each will be radically changed by new technology and business models, focused around the web.

Client Support

If all applications are web applications, then all you need is a browser with a broadband connection. Client computing will therefore become much more straightforward: no more configuring hundreds of registry settings and data on local C: drives. It also may not really matter which computer anyone has, since they all have browsers.

Similarly, with the focus on Software as a Service, application support will be given by third parties. I can only see a small role for client support within the IT department in future.

Software development

I already predicted that all applications will be web-based, and 80% of them hosted by a third party via Software as a Service - including the office suite, communications and collaboration.

Therefore, hardly any software development will take place inside the IT department. Instead, business analysts and architects will be responsible for configuring and integrating third party web applications, using the vendor-provided tools. Most of this will not involve code, except perhaps a smattering of regular expressions to manage data in the two dominant formats: the relational database and the Atom feed.

Any remaining software development will be enabled by an online tool. Imagine a website that stored your code, compiled it, helped you manage software configurations and releases, offered integrated bug databases and test environments, provided WYSIWYG user interface design, and hosted the resulting application. All client-based tools, like Eclipse and Visual Studio, would be replaced by web applications, with a full data centre to host your application behind them.

Software developers wouldn't need to know anything about the underlying hardware - they would just see their memory, CPU, network and storage usage, and be charged appropriately for each. The data center is hosted by the software development tool vendor and totally behind the scenes.

Operations

Most organisations will have zero servers to maintain, so won't need to spend much effort on operations. As for network operations, many devices will be connected to the internet via 3G or 4G wireless networks operated by third parties. Desktop devices will still be connected to a LAN, for the sake of speed, but that will plug directly into the internet. Hence network operations costs are likely to be low too.

IT management

IT management won't have to focus so much on operations - however, they will have to improve their vendor relationship management processes, because of their reliance on Software as a Service. IT managers will decide which vendors to use, and build a framework of best of breed web applications tailored to their business.

Because of this, IT managers will have less capital equipment to procure, and more subscription services to manage. They will have to learn the business more closely and take a more proactive role in determining the future of business operations.


The IT department, it has to be said, will be smaller in future. CIOs will rely on a network of partners and suppliers to achieve their goals, and will move much closer to the business as they become business process strategists.

Monday, November 26, 2007

Personal Data

Last week, the UK government revealed that it had lost personal information on 25 million people - all 7.5 million families in the country. The information included names, dates of birth, addresses, national insurance numbers (like US social security numbers), and bank account details.

This catastrophe could hardly have been worse. It's potentially the worst gift to terrorists and criminals since the July 7th bombings, and a worse blow to financial stability than Northern Rock. It betrays the trust of nearly 50% of the country.

What's more, the institutional failings revealed are breathtaking. Why on earth was an obviously untrained 23 year old even allowed access to the data? Why didn't anyone - not HMRC, not the NAO, nor the IT company involved - seem to care that it wasn't encrypted? How could it possibly have been sent by unregistered post? And why weren't the lessons learned from many similar, if smaller scale, recent incidents in the same government department?

Once the dust has settled, and hopefully managers have been fired for rank incompetence, there are two basic lessons to learn - how to manage personal data, and how to secure identities.

Controlling personal data

I already blogged that citizens should own and manage their own data, not the government. Citizens should also be able to see which government departments are using their data, and how, and when. Any time the government wants more access to my data, they should have to personally ask me - because it's mine!

Proving your identity

In the age of MySpace and Google, who still thinks that their date of birth is a secret? Or their home phone number and address? Yet that's all you need to log into my phone banking service.

This security model is absurd: we can't trust our identities to supposed 'secrets' that anyone in the world can discover in 30 seconds flat. We must find something more trustworthy - whether physical (e.g. fingerprints) or mental (e.g. a password) - or ideally, both.

It's outrageous that the government should abuse our trust by losing personal information about every family in the country. But it's also outrageous that this simple personal information is enough for serious identity fraud to take place.

Some good can still come from the HMRC catastrophe, provided we learn the lessons and build a new security model for the 21st century - a model that places data control where it belongs (i.e. with each individual), and that provides a safer way to prove identity.

Sunday, November 25, 2007

Convergence

One of the universally accepted themes of IT in the last few years is "convergence". Markets are merging, technology is standardising, and gadgets are getting common components, all driven by the internet.

So what does "convergence" mean? Actually there are three different convergences taking place:

Device convergence

Firstly, devices. Phones, laptops, TVs, PCs, and even cameras and satellite navigation systems are increasingly sporting the same features (operating system, internet access, camera, touchscreen) and can access the same applications. Apple's use of Mac OS X across its complete product range - iPhone to 24" iMac - shows the way forward.

Device convergence means that the consumer electronics industry is becoming something more cohesive - call it the "internet device" industry. The industry-spanning strategies of Sony and Apple will win - expect mobile phone companies (e.g. Motorola) to merge with PC companies (e.g. Lenovo) to compete.

Despite this standardisation, hardware is actually getting more important, not less - note the success of innovative products like the Wii and iPhone!

Network convergence

The second type of convergence is for networks - the IP protocol has won. Broadband, phone line, cable, mobile have just become different ports into the internet. That explains the focus on the triple play (mobile, broadband, landline) - it's all becoming the same thing.

Networking is a utility industry, albeit one experiencing rapid technological evolution. The players - e.g. Vodafone, AT&T - are terrified of becoming utilities, but there's nothing they can do to stop it.

Application convergence

Silicon Valley has won this battle - applications are moving to the web. Simple old HTML and javascript have somehow beaten massive corporate creations like Java and .NET (not that many technologists outside Silicon Valley have realised it yet).

There are only two successful business models in Silicon Valley - adverts for consumers, and subscription (i.e. software as a service) for the enterprise.

Different industries

These three convergences are delivering three massive new industries - the internet device manufacturers, the networkers, and the application providers. They're very different industries, with different business models, regulations, capital investment patterns, and rates of change.

So there's one type of convergence that I really can't see working - convergence across industries. That includes Nokia (Ovi applications will never compete against Adwords), Google (what do they know about operating a network?), and Vodafone (spare me the "Vodafone live!" walled garden). Even Apple, so great at hardware design and OS software, haven't yet built a hugely successful web application.

Next time you hear the "convergence" word, watch out for which industry is being mentioned. Though the internet might drive all three, they're very different!

Thursday, November 22, 2007

CSS Color Gradients

1. Introduction

This document proposes a new feature in CSS for creating color gradients. These gradients define an additional <color> value type for the CSS3 Color specification; as such, they can be used in place of a traditional color anywhere in CSS, e.g. as a background color gradient, a text color gradient, or a border color gradient.

The proposals are based on the SVG specification. However, CSS color gradients could style any markup element, including HTML.

2. color-gradient-linear

Linear color gradients are created using the color-gradient-linear color value, which takes several parameters:
color-gradient-linear(angle, offset1 color1, offset2 color2, ... , spreadmethod)

angle indicates the angle of the gradient from the horizontal, moving clockwise, so that an angle of 0deg creates a gradient from left to right, and 90deg creates a gradient from top to bottom.

At least one offset & color pair must then follow. offset indicates where to start the gradient, and can take any length unit. Draw a line through the containing element, following the gradient angle; offset runs from 0% to 100% between the points where that line crosses the element's inside border.

For example, if angle=0deg, then an offset of 0% indicates the gradient starts at the left hand border. If angle=90deg, then an offset of 100% indicates the bottom border.

color can be any valid CSS3 color value, including rgba and hsla values that enable opacity.

The spreadmethod parameter, which is optional, indicates what happens if the gradient starts or ends inside the bounds of the target element. It can take three values:

  • pad - use the terminal colors of the gradient to fill the remainder of the target region
  • reflect - reflect the gradient pattern start-to-end, end-to-start, start-to-end, etc. continuously until the target region is filled
  • repeat - repeat the gradient pattern start-to-end, start-to-end, start-to-end, etc. continuously until the target region is filled

The default value is "pad".


    /*linear gradient from green to blue, moving left to right*/
    em { color: color-gradient-linear(0deg, 5% green, 95% blue) }
    /*linear gradient from a semi-transparent red to solid purple, top to bottom*/
    em { color: color-gradient-linear(90deg, 5px rgba(255,0,0,0.5), 200px purple) }
    /*linear gradient from green to blue to yellow, right to left*/
    em { color: color-gradient-linear(180deg, 3px green, 50% blue, 100% yellow) }
    /*linear gradient from green to blue to green to blue ..., left to right*/
    em { color: color-gradient-linear(0deg, 0px green, 10px blue, reflect) }
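To make spreadmethod concrete, here is a sketch (using the syntax proposed above; the offsets and colors are purely illustrative) of the same two-stop gradient under each value:

```css
/* the gradient itself occupies only the first 20px of the element;
   spreadmethod decides how the remaining width is filled */
em { color: color-gradient-linear(0deg, 0px green, 20px blue, pad) }     /* solid blue after 20px */
em { color: color-gradient-linear(0deg, 0px green, 20px blue, reflect) } /* blue back to green, green to blue, ... */
em { color: color-gradient-linear(0deg, 0px green, 20px blue, repeat) }  /* restarts at green every 20px */
```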

3. color-gradient-radial

Radial color gradients are created using the color-gradient-radial color value, which takes several parameters:
color-gradient-radial(center-x center-y, radius1 color1, radius2 color2, ... , spreadmethod)

center-x and center-y indicate the center of the radial gradient, offset from the top left corner of the element.

At least one radius & color pair must then follow. radius indicates where to start the gradient; it must be a positive length value, measured from the center of the circle. A value of 100% for radius indicates that the gradient is equal in length to the element's width.

The spreadmethod parameter works exactly as for color-gradient-linear.

    /*radial gradient from green to blue, center outwards, starting at the element's center*/
    em { background-color: color-gradient-radial(50% 50%, 0px green, 50px blue) }
    /*radial gradient from a semi-transparent red to solid purple, center outwards, starting top left*/
    em { background-color: color-gradient-radial(0px 0px, 5px rgba(255,0,0,0.5), 200px purple) }
    /*radial gradient from green to blue five times, center outwards, starting at the element's center*/
    em { background-color: color-gradient-radial(50% 50%, 0px green, 20% blue, reflect) }
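Since these gradients are ordinary <color> values under this proposal, they should work in any property that accepts a color, not just background-color. A sketch, again using the proposed syntax:

```css
/* gradient text color, left to right */
h1 { color: color-gradient-linear(0deg, 0% green, 100% blue) }
/* gradient border color, top to bottom */
div { border-color: color-gradient-linear(90deg, 0% black, 100% silver) }
```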

Social Graphs and Unsocial Graphs

Tim Berners-Lee just wrote a wonderful note on the social graph. He points out that just as the "III" (net) connected computers, and the "WWW" (web) connected documents, the "GGG" (giant global graph) will connect real objects - including people.

So let's return to OpenSocial, Google's new social networking API. I already commented that it might (just) be an open API, but it certainly doesn't open the data - connections between people - which is the important thing. It shouldn't be only Myspace widgets that can securely access this information, but any website.

Tim's blog reveals a further issue - it's not just about connecting people, but connecting anything! OpenSocial doesn't seem to handle my CV, my possessions, my train tickets or what I ate for breakfast this morning. OpenSocial might be a small step forward, but it's not nearly the full answer - it only handles basic information about people.

Tim has clearly found a huge and important issue to resolve, and more people in the industry should be paying attention. Unfortunately, I just don't think they are.

Part of the issue is that he's way ahead of his time - everyone is still focused on the document (HTML). Another issue is that his proposed solution, RDF, doesn't carry enough incentives for people to use it - I can't put adverts in RDF, I can't directly create compelling content with RDF, and no applications take advantage of it yet. A third issue is that it's not clear how HTML (e.g. a Wikipedia entry on breakfasts) would sit alongside RDF (some XML technical markup describing the same thing).

There is a way out. Create a new open data format standard that describes a person (Google have made a start on this already with GData), and add it to OpenSocial. Make this data format compatible with RDF, e.g. FOAF. And finally, hand the data over to the user - it's theirs to manage.

That way, at least one usage of RDF will take off - information about people and their connections. OpenSocial will gain new momentum, since the data will be free, not just the API. And finally we'll have a springboard for the graph - not just social ones, but any graph - to take off.

Monday, November 12, 2007

The new iPhone SDK will be Safari

Apple's recent announcement of a native iPhone SDK seemed a massive U-turn; first Steve Jobs promised pure web development, then he relented. But I suspect their native SDK will in fact be an upgraded version of Safari - web development on steroids!

There are several reasons why web applications failed the iPhone:

  • Connectivity - endless waits for page load over slow and uncertain EDGE networks
  • Presentation - browsers can't handle coverflow, smooth animation or rotations
  • Awkward audio and video - the lack of Flash plugins leads to messy javascript solutions
  • Memory access - you can't access the phone memory using javascript
  • Sensor access - you can't control the camera, microphone, touch sensor or proximity sensor using javascript
  • Cultural - people still expect the mobile web to be crap

Within the last month, a series of announcements have led me to believe that Apple will chip away at all of these issues.

Safari have already announced local SQL database support, which is part of the draft HTML 5 spec. Combined with other HTML 5 sections like caching, this would solve the connectivity and memory issues above, allowing offline access via local memory.

David Hyatt, the lead Safari developer, has also hinted (see comments) that perspective transformations (enabling coverflow) will shortly become available in Safari. CSS animations and affine transformations, including rotations, have also been added to the beta version of Safari.
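As a sketch of what this enables - using the vendor-prefixed property names from the Safari betas, so treat the exact syntax as provisional:

```css
/* tilt an element and animate its opacity on hover,
   in the style of the new Safari beta features */
div.cover {
  -webkit-transform: rotate(15deg);
  -webkit-transition: opacity 0.5s ease-in;
}
div.cover:hover { opacity: 0.5; }
```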

Now, Safari have announced support for HTML 5 media, bringing first-class audio and video to the iPhone's browser.

Putting two and two together

I just can't see why people aren't putting two and two together! The only remaining technical issue with using Safari as a client development platform is access to the iPhone's sensors - so I fully expect Steve Jobs to announce a javascript API in January's Macworld.

It also provides another reason why Apple released Safari for Windows; they're building a browser competitor to Windows, and they need maximum distribution to persuade developers to use the new web SDK.

For Apple it makes good sense to convert the browser into an OS; they sell hardware, and they get to ride the internet wave. Safari as the client development platform is the classic disruptive innovation!

Wednesday, November 07, 2007

OpenSocial isn't open enough

Google's OpenSocial has attracted enormous publicity over the last week - it's seen as the entry of the big beast into social networking, fighting Facebook and Microsoft by transforming the rules of the game.

But I honestly don't think it has changed the rules of the game. The API is only 'open' in the sense that lots of companies have signed up to using it - it does pretty much the same as Facebook's API, albeit using HTML rather than Facebook's weird proprietary markup.

It's the basic premise that I disagree with - that social networks are 'container' applications, within which every other application is hosted, as a 'widget'.

The problem with apps hosted INSIDE social networks is that you get data silos - not everyone I know is in Myspace, and they never will be.

Instead, we need apps that work ACROSS social networks, gathering the relevant friends and details from each to provide the complete picture - e.g. a complete address book.

Writing one of these apps is not about data storage, it's about data aggregation from all across the web. For example, a photo editing website should allow you to import your friends and colleagues from ANY social network, to allow you to collaborate on a picture. That's not possible with OpenSocial.

Of course, I'm sure OpenSocial will be extended in future. Since Brad Fitzpatrick is reportedly behind both OpenID and OpenSocial, I wouldn't be surprised if we see some immediate progress on this front from his employers Google. If my URI for OpenID is the same as my URI for OpenSocial, then whenever I log in someplace, it gets automatic controlled access to my details.

So OpenSocial gets us some of the way there, but the vision is still not complete - it may be a relatively open API, but it currently enforces a closed data model.

Web Communication

Technology is there to enable people to do things. So, if you want to write a successful application, it's worth asking what things people like to do! And there's one answer that has proved itself time and time again - people like to communicate.

The killer apps of the internet have always been communication tools - starting with email, moving to webmail, instant messaging, blogging, Twitter streaming, and most recently social networking.

So how far has the web come in enabling communications? I put a chart together to work this out.

Communication types
Timing         View     Add      Method
Asynchronous   Private  Private  Notebook
Asynchronous   Private  Public   Email / Voicemail
Asynchronous   Public   Private  Blog
Asynchronous   Public   Public   Wiki
Realtime       Private  Private  Dictaphone, CCTV
Realtime       Private  Public   Instant Message
Realtime       Public   Private  Twitter / TV / Radio / Webcam
Realtime       Public   Public   Phone / video conference

The most obvious thing to note is that every cell in the table has something in it, so there are no obvious gaps - but many are filled with very recent technology, and it's still changing very quickly.

Secondly, the table drives home just how important and useful the Atom Publishing Protocol is - starting from blogging, it is spreading to all the asynchronous types of communication. That shows some foresight from Google, who have based their architecture around the standard.

Thirdly, realtime communications technology seems less mature. That's not a surprise - HTTP was designed for the asynchronous request / response pattern. So it will be interesting to see how this piece develops.

Monday, October 29, 2007

Future of IT: the business view

IT is getting more and more important to every business. It not only automates their operations, but also defines the face of the company to its customers, via its website. Whether executives want to squeeze more productivity out of the workforce, or move into new product areas and geographies, IT is the basic enabler. Many businesses are now in fact IT companies; think of your basic current account bank, which sells an online application for financial management.

So what will corporate IT look like in ten years' time? Here are some predictions:

  • There will be three shapes of computer: mobile, portable, and office; but all will consist of a browser and fast internet connection.
  • Everything will be web-based. That means you will be able to connect to anything (your office phone, documents, email, business applications, etc) from anywhere in the world, by simply typing in the relevant URL.
  • Everyone will procure an online communications, collaboration and office suite from a third party. That includes voice, email, wiki, document editing, and social networking. Google Apps will be used to create more documents than Microsoft Office and to access more voicemails than AT&T.
  • 80% of software (e.g. HR, CRM, ERP) will be online services procured from third parties. Salesforce's revenues will go up by 50% annually for ten years.
  • The other 20% will be online applications developed in-house, to provide a competitive advantage.

Sunday, October 28, 2007

SVG effects in CSS: Webkit transformations

Amazing news from Webkit - they've implemented CSS transformations in their nightly builds!

The web's inability to do transformations - especially rotations - has been an embarrassing failure for years now. I believe the fact that vertical text is not possible has been a major driver behind proprietary "rich internet applications" like Flash.

What's more, the Safari folk seem to have done it properly. They're not relying on SVG foreignObject support, like Firefox; they haven't restricted themselves to simple rotations, instead allowing general affine transformations; and they've started with a simple approach that doesn't affect layout.

Cherry-picking SVG

No doubt they were able to implement this quickly because they've already done it once, in SVG. SVG is great as an image format, and for complicated shapes and paths; but there are surely parts of it that can be brought to the web, via CSS.

For example, how about colour gradients - why not extend the CSS3 color spec to enable the following:

background-color: gradient(red 5%, green 55%, yellow 95%)
which would color the relevant HTML elements with a gradient starting (left to right) with red at an offset of 5%, green at an offset of 55%, etc.

All this would take is to define a new color type in the CSS3 spec, which takes other colors (including translucent ones via hsla and rgba) as parameters. It could then be used for background colors, border colors, and wherever color is used elsewhere in CSS.
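The proposed value would also be easy to parse mechanically, which suggests the spec burden is small. Here's a sketch in javascript (remember, the gradient() syntax is my proposal above, not an existing CSS feature) that parses such a value into ordered colour stops:

```javascript
// Parse the proposed gradient() value into a list of colour stops,
// e.g. "gradient(red 5%, green 55%, yellow 95%)".
function parseGradient(value) {
  var match = /^gradient\((.*)\)$/.exec(value.trim());
  if (!match) return null;                 // not a gradient value
  return match[1].split(",").map(function (stop) {
    var parts = stop.trim().split(/\s+/);  // e.g. ["red", "5%"]
    return { color: parts[0], offset: parseFloat(parts[1]) / 100 };
  });
}

var stops = parseGradient("gradient(red 5%, green 55%, yellow 95%)");
// stops[0] is {color: "red", offset: 0.05}, and so on left to right
```

A browser would then interpolate between those stops when painting the background.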

Cherry-picking SVG and putting it in CSS is a great approach. First, it adds style to the web, in a way that properly separates content from presentation. Second, it's tried and tested, so there should be less arguing about the specs. Third, it builds skills and knowledge of key SVG features, building momentum behind the full SVG spec.

Well done, Safari - may the other browser makers learn from your approach.

Thursday, October 25, 2007

Blogs and online word processors

What's the difference between Google's Blogger and Google's Docs?

After all, they're both online text editors. They're also both document publishers, though by default Docs doesn't display your document to the world. They both have general document management, including tagging, though Docs also has version control. They both allow collaboration, though Blogger does it through comments and Docs through 'sharing'. They both allow insertion of images and hyperlinks, though in addition Blogger allows videos and Docs allows tables.

None of these differences are major - in fact, I can see most of them being eliminated at some point through general upgrades.

So why do Google have two different applications?

Different uses, same app

They're different because of their history - they're from different cultures. Word processing comes from years of office work, with deeply embedded notions like folders, separate files, and the A4/letter paper size. Blogging comes from an online free-flowing diary format, technically minimalist, constantly added to, and aligned to the computer screen.

That doesn't excuse the basic user interface confusion between the two systems, for example the different ways to edit the underlying HTML. At some point, surely, the solutions must converge.

I can't see that the functionality requirements are different - it's just that there are two different uses. Perhaps it's fine that they're branded differently, like Procter & Gamble owning several major washing powder lines to segment the market.

While blogging tools start to take on advanced word processing features, document tools like Google Docs will improve in "content management" - versioning, publishing, collaboration, tagging, etc.

Surely one tool that can handle both blogging and word processing will emerge soon.

Monday, October 22, 2007

SVG in browsers

A call to arms today from Mark Pilgrim: improve the level of SVG support in Firefox 3, the forthcoming browser. The features in focus include the use of SVG in img elements.

If taken up, this would be excellent news. SVG still has a chance of becoming the most popular vector graphics language, which would be remarkable given the level of investment in Flash, Silverlight and JavaFX.

It would be truly wonderful if SVG became a first class web citizen. There are some important, but technical debates about exactly how it could fit in, especially in the non-XML HTML 5 world. I feel that, much as CSS can be placed inline with HTML but is best positioned in a separate document, the same applies to SVG. That's because of the principle of separation of content from presentation - SVG is fundamentally about presentation (in fact much of SVG, for example rotation, gradients and filters, should be incorporated into CSS so it can be applied to any element).

Regardless of the technical details, it is great to see the defenders of open standards rise to the challenge of proprietary competition, and enable the next cycle of web innovation.

Thursday, October 18, 2007

Social address books

Over the last few months, Silicon Valley has finally cottoned on to the power of the address book. They've figured out that social networking sites are really glorified address books.

Mark Zuckerberg of Facebook was the first to really get it - his term is the social graph, an online resource of people and the relationships between them. He turned that data into an online platform, and allowing third parties to access it has turned out to be an incredible success.

The logic of Google's purchase of GrandCentral and Jaiku, not to mention Myspace's deal with Skype, points firmly in the same direction.

After all, the most important thing in our lives is our relationships with other people and with the communities we belong to. These relationships are what drives us, what makes us laugh or cry, what makes our lives rewarding or successful. Any tool that helps us maintain and develop these relationships is incredibly valuable - the address book was the start, and social networking is another advance.

Currently, everything is tailored around the technology: we have a paper address book for house addresses, an email contacts list for email addresses, a phone address book for phone numbers, an IM address book for IM addresses, etc.

Let's focus on people instead; it's far more natural! So there should be just one address book, integrated with all of these technologies. I could find my sister's homepage, and click to call her, text her, or email her, all within the same application.

That's just the start, because the details for each contact should be maintained by them, rather than me. So I could also look up the friends of my friends, or where they are today. I could post messages, or pictures to my address book entry, to keep everyone up to date. So could anyone else!

And I could create an organization home page, linking all the people into it and giving them immediate access to common photos, documents, and chat.

There are several themes here

  • Extra data: friends of friends, communities
  • Convergence of communications methods into a single web application
  • Convergence of collaboration and communication - the social network IS my photo editing application

Initially, I thought social networks had a touch of "fad". In fact, they're the next great phase in software: applications that enrich our relationships. And that can only be a good thing!

Sunday, October 07, 2007

URL syntax and folders

The directory, or folder, is a central feature of every operating system. Windows Explorer is one of the most used applications on the planet, showing the contents of each folder and allowing document navigation.

And yet, on the web there is no such thing as a "folder". Click on a Yahoo finance URL and you don't see a list of all the documents in a "finance" folder - you see a webpage. If you're looking for a particular item, you don't navigate a hierarchy - you type a search.

The only enforced hierarchy is in the domain name - "www" is part of the "yahoo" domain inside the "com" top level domain. This hierarchy is part of DNS, and is used so that the correct server responds to your browser.

"/" is just another character

Check out the URL of Jon's post. Do you think that on Jon's server there is a "2007" directory, under which there is a "05" directory? No - the complete path "2007/05/24/restful-web-services/" is just a string parsed by the server, so that the relevant resource can be displayed programmatically.

"/" is just another character in the URL path. Jon could have used "." instead, or even removed it altogether: "20070524restful-web-services". Everything is flat on the web.

The only reason why Jon added the "/" is convention - when people see the URL, it helps them understand what they might get if they click on it. "/" is a common convention to indicate hierarchy.

So the "/" does NOT represent directory structure.
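To make this concrete, here's a sketch of how a server like Jon's might resolve such a path programmatically (the routing pattern is purely illustrative - I have no idea what his actual code looks like):

```javascript
// Treat the URL path as a flat, opaque string and parse it with a pattern:
// there is no directory tree behind "2007/05/24/restful-web-services/".
function route(path) {
  var m = /^(\d{4})\/(\d{2})\/(\d{2})\/([^/]+)\/?$/.exec(path);
  if (!m) return null;  // unknown path: hand off to some other handler
  return { year: m[1], month: m[2], day: m[3], slug: m[4] };
}

var post = route("2007/05/24/restful-web-services/");
// post.slug is "restful-web-services" - the "directories" were never real
```

The server could equally have used "." as the separator, or none at all; only the pattern would change.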

What if I want web folders?

The main thing that folders provide is the ability to view the set of documents in a collection (e.g. all the pictures in an "images" folder).

On the web, there's a technology that solves just this problem: RSS. And it's far more flexible than a one-dimensional directory hierarchy; you can have RSS feeds for

  • all photos of green flowers
  • all photos of any green objects (including flowers)
  • all photos of flowers of any colour
Try doing this with folders - it's impossible!

In that respect, the URL and RSS feeds can support every navigation method: search, tagging, and hierarchy.
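The three example feeds above can all be served from one flat, tagged collection. A quick sketch in javascript, using hypothetical photo data:

```javascript
// One flat collection of tagged photos - no folder hierarchy at all.
var photos = [
  { url: "/p/1", tags: ["flower", "green"] },
  { url: "/p/2", tags: ["flower", "red"] },
  { url: "/p/3", tags: ["car", "green"] }
];

// Each "feed" is just a filter over the same flat collection.
function feed(requiredTags) {
  return photos.filter(function (photo) {
    return requiredTags.every(function (tag) {
      return photo.tags.indexOf(tag) !== -1;
    });
  });
}

var greenFlowers = feed(["flower", "green"]); // photo 1 only
var greenThings  = feed(["green"]);           // photos 1 and 3
var allFlowers   = feed(["flower"]);          // photos 1 and 2
```

Notice that photo 1 appears in all three "folders" at once - exactly what a directory hierarchy cannot do.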

No more folders?

There are two major issues with folders. The first has already been mentioned: they are one-dimensional, and therefore don't support modern techniques such as search and tagging. The second issue is that they don't scale; if you've got a million (or even a billion) documents, navigating up and down the hierarchy to find your file takes far too long.

So it's a great thing that web was designed without folders. Instead, a combination of URL syntax conventions and RSS provides far better flexibility and power.

Friday, October 05, 2007

Adobe revisited

Six months ago, I said Adobe had better put together an internet strategy quickly, or risk going under. They delivered.

Adobe have a new internet strategy, and impressively executed it is too - they're driving pretty aggressively to use their new AIR / Flex platform to build an array of online services. Just this last month we've had announcements about online Photoshop Express, online visual programming, online word processing, online file sharing, and online voice, messaging and presence.

Apart from the sheer breadth and depth of these developments, they're impressive for being cleverly tailored to expose specific browser weaknesses: animation, pagination, and realtime communications.

The browser within the browser

Most people see Microsoft's Silverlight and Sun's JavaFX as Adobe's chief competition. But they'll have a hell of a job catching up with Adobe's 93% market share, not to mention their best in class development and design tools.

No, the real competition for Flash is the browser itself. Adobe have two to three years until most browsers come with native video and audio players, and increasingly powerful layout and programming engines. They're rushing to fill this gap with their own proprietary technology, hoping to sell design and programming tools, not to mention Flash-based web applications.

Flash is the browser within the browser - displaying text and images, in addition to diagrams, sound and videos. Some websites - e.g. Sony Ericsson - leave little room for HTML.

So Adobe look to be winning the "Rich Internet Application" market. The question is, how will RIAs fare against straight HTML / javascript?

Betting against the internet?

Adobe claims to be betting that the internet will beat client applications. Technically, they're correct - they're relying on TCP/IP, HTTP and the URI. But they're not betting on the web, because the web is about HTML and CSS, not proprietary Flash plugins.

HTML/CSS vs Flash will be a fascinating race. It'll take about two years to know the winner: will Ogg/Theora beat Flash video? Can SVG ever compete for vector graphics? Can browsers handle animation natively? Will Flash text and layout ever displace HTML?

My heart says HTML/CSS will win; my head says it will come down to the wire.

Thursday, September 27, 2007

Holdout client applications

If the web is so great, why are there still client application hold-outs?

Look at the popular PC applications of recent years: Skype, IM, Office 2007, iTunes, games. Why have they followed the client approach?

Application: Skype, IM
Why client? Skype and IM require real-time communication between two parties - something the HTTP request-response model of the web will never solve.
In two years: Browser plug-ins are incorporating real-time technologies like XMPP; this will remove the need for separate client applications.

Application: Office 2007
Why client? Office 2007 has exploited its massive installed base, a far better user interface than the previous version, and certain browser weaknesses in page layout and editing.
In two years: Google Docs, Zoho and wikis are already encroaching on Office's territory. These businesses will expand massively in the next couple of years, forcing a less dominant Microsoft to lower prices.

Application: iTunes
Why client? iTunes relies on an offline model, using downloads to iPods. It also relies on DRM, which browsers are not equipped to handle.
In two years: Both of these dependencies are crumbling (iPods are getting Wifi access, and DRM is fading) - I wouldn't be surprised if iTunes was replaced soon.

Application: Games
Why client? Most games rely on advanced graphics and animations, which browsers are not designed to support.
In two years: I can't see browsers competing in the next few years, except perhaps for simple games like Tetris.

Conclusion: except for games requiring powerful graphics engines, the web will continue to replace today's common client applications.

Saturday, September 22, 2007

Hardware and Software

Throughout their histories, Microsoft and Apple have had different strategies. Microsoft sells only software (Windows, Office), but Apple sells hardware too (iMac, iPod, iPhone).

For a long time, Microsoft had the better strategy. It could focus on one thing - software - and build a dominating position in the space where most of the technology innovation happened. It could then marshal an ecosystem of hardware partners to grow the PC market at double-digit rates.

Times have changed, leaving Apple's approach looking better (at least on the client side):

  • New form factors arose - e.g. the smartphone - that require different types of software, focused on battery power and communications
  • Hardware innovation became as important as software innovation, for example the iPhone touch screen, camera phone, satellite navigation, and Wii controller, leaving Microsoft struggling to catch up
  • The establishment of common standards (e.g. HTML, ODF, USB) and common user interface paradigms removed barriers to change for the end user
  • Open source competed with much proprietary software, but not with proprietary hardware
  • PC operating systems became commoditised; what you really need is a good browser, and a few basic productivity apps

The biggest change is surely the internet - the client OS is becoming the browser. That means that on the client side, hardware is just as important as software.

News just in: Microsoft reconsiders its software-only approach.

An integrated business model of making both device and software could make sense, executive tells investors at tech conference. Microsoft said on Tuesday that it is "not unreasonable" for the company to introduce a mobile phone combined with features of its Zune digital music player to compete with Apple's iPhone....

If that's true, Microsoft will have to turn the company on its head (again). And what about making their own laptops and PCs? Somehow I can't see it...

Friday, September 14, 2007

gPhone based on Google Gears

The only way I can understand current gPhone rumours is if the gPhone is based on Google Gears.

The gPhone is Google's allegedly forthcoming mobile software platform. It's very unclear what it'll actually do, but it sounds like an API for Google and third party providers, presumably for applications such as maps, emails, photos, and videos.

But isn't Google an internet company?

Trouble is, this doesn't sound much like Google. Didn't Eric Schmidt say "don't bet against the internet"? What are they doing building an API for the client? Can't they continue their strategy of developing stripped-down HTML websites for small mobile browsers?

Two major problems with mobile browsers are low bandwidth and intermittent connections. That's where Google Gears comes in - I think it was designed with phones specifically in mind.

That 8Gb phone memory is just a cache!

For example, your phone's photo application could simply be a link to a website. If you're in a Wifi or HSDPA zone, the site will be pulled up over the net (and synched with your phone). If the connection speed is slower, then the local cache from your phone memory will display instead, using Google Gears.

Similarly for email, maps, and even your phonebook contacts - to access them, you visit an internet site, but if the connection speed is too slow, the local cache is displayed instead via Google Gears.
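The fallback logic is simple to express. Here's a sketch of the idea in javascript - a generic "network first, cache second" pattern in the spirit of Gears, not the actual Gears LocalServer API:

```javascript
// A generic "try the network, fall back to the local cache" fetch.
// fetchFn is whatever performs the real request; cache is local storage.
function cachedFetch(url, fetchFn, cache) {
  try {
    var fresh = fetchFn(url);   // try the network first
    cache[url] = fresh;         // sync the local cache on success
    return { from: "network", data: fresh };
  } catch (e) {
    if (url in cache) {         // offline or slow: serve the cached copy
      return { from: "cache", data: cache[url] };
    }
    throw e;                    // never seen online or offline
  }
}

var cache = {};
var online  = cachedFetch("/photos", function () { return "album"; }, cache);
var offline = cachedFetch("/photos", function () { throw new Error("no signal"); }, cache);
// online came from the network; offline served the same album from cache
```

Seen this way, the phone's memory really is just a cache in front of the URL.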

Now it makes sense

Using Google Gears, gPhone application development uses just URI, HTML, and javascript. All gPhone apps sit on the internet, and data is stored locally when your mobile browser accesses the site.

Google is still betting on the internet - in fact it's making it work in an environment where network speed is uncertain.

Wednesday, September 12, 2007

Web Art

Walking through Greenwich market in London recently, I passed lots of stalls selling jewellery, cards, fabrics, and various pieces of art.

It occurred to me that "web art" - digital multimedia produced by an artist and published online - is still in its infancy (with the exception, perhaps, of photography).

Where is the art exchange selling beautiful SVG clockfaces for my desktop or wristwatch, or abstract paintings for my digital photoframe? Where can I purchase thoughtful and arty online cards, or "moody" background videos for plasma screens?

This must be partly down to artists preferring traditional physical materials - and to be honest, given current display and design technology, I don't blame them. But in a couple of years, the market will be there.

Webifying art

Art could be the next area to be turned upside down by the internet. If every piece of virtual art has a URI, this means perfect copying - if someone creates the next Mona Lisa online, everyone can see the "original" perfectly in their browser, and download it.

It's similar to the music industry - business models will have to change. What will it mean to "own" virtual art? Will anyone pay for it? How will artists' rights be protected? In music, some suggest the web will eliminate the record company - but art has never had the equivalent of record companies, yet it may still face serious issues.

Of course, there will still be offline art. But it's already clear that the internet will change the world of art, just as it has for music.

Saturday, September 08, 2007

Browser-based wristwatches

Digital watches have never been the most fashionable accessory. But I think that will soon change, as display technology has developed in leaps and bounds.

Imagine if your watch contained a screen like the new iPhone, showing an arty clockface that could be touched to reveal more (e.g. date/time, news headlines, weather, your latest calendar appointments).

You could set this all up by registering at a website on your PC and customizing the look and feel - for example, setting the background to be a family photo.

The technology is already all there - touch screen, 3G data, browser displaying HTML / SVG. Finally, the digital watch can overcome its geekiness.

Sunday, September 02, 2007

Does Apple get the internet?

It might seem a silly question, but does Apple really understand the internet?

After all, they're best known for producing hardware and personal operating systems (iPhone, iMac, iPod, Mac OS) and media applications (QuickTime and iTunes). These are all client-side; their website is just an old-fashioned download site, and their one attempt at a modern web application languishes as the 800th most popular site.

Apple has had no success in converting iTunes into a community site, where people could share recommendations, music gossip and events, or post their own music. And with broadband now prevalent, why not store your music online rather than on your C: drive? That way, you could access it from any computer, or directly from your iPhone.

Finally, if Apple is not careful, sites like Photobucket, Picasa and forthcoming rivals will steal its tradition in graphics.

Proving they understand the web

Even Apple isn't immune to Silicon Valley start-ups, especially those competing with their key music and graphics applications.

Apple have shown enormous flexibility in the last few years, transitioning to Intel processors and moving to touch screens from their famous clickwheel. They'll have to demonstrate it again by moving to web applications.

I've written some success measures for Apple, to demonstrate how far they've got to go in making use of the web:

  • Replace iTunes with a browser-based social music application
  • Enable direct iPhone access to that application
  • Create the best photo editing site

Are Apple ready to create the next great online applications for graphics and music? If not, they'll be limited to selling internet devices.

Saturday, September 01, 2007

Projects are social networks

Project management is natural for the web. That's because it's all about continually creating, maintaining and communicating project information - i.e. "collaboration" - so that the project remains on track. No one's found a better collaboration system than the web.

Project management systems have been web-based for several years now - I've had experience with Planview and several internally developed applications. But they suffer common flaws - data is hard to find, hard to update, and too complicated (especially around time management, approvals and project workflow).

What's missing is Web 2.0 - i.e. the use of simple, social web technologies. This will bring less focus on formal workflow, and more focus on straightforward collaboration. What could be easier than a Wiki-based project homepage, or a social network containing the project team?

Below are some ideas for improvement that I haven't seen in any existing project management system.

Requirement                                             Web technology
communicate project information                         Wiki
maintain project plan                                   Wiki (using SVG / VML for the Gantt view, hyperlinks for dependencies)
track status changes                                    Wiki versioning
staff notifications                                     RSS feeds
time management                                         Microformat integration with calendars
system integration (e.g. with a financial or CAD tool)  RSS mashups or open web APIs

One unexplored opportunity is microformats. A project management system that expressed project plans using microformats would express the who, when and where in a machine-readable format, so all sorts of possibilities open up - linking to mapping software, people's calendars, or a corporate directory, for example.

Another relatively unexplored opportunity for project management software is RSS (or Atom). Staff would subscribe to receive notifications when key pieces of information change (for example, project risks or milestone changes).
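Generating such a notification is trivial. Here's a sketch that builds a minimal Atom entry for a milestone change (the element names follow the Atom spec; the project and milestone details are made up, and a real feed would also need author and escaping):

```javascript
// Build a minimal Atom entry announcing a project change,
// ready to be appended to the project's notification feed.
function atomEntry(change) {
  return '<entry xmlns="http://www.w3.org/2005/Atom">' +
    "<title>" + change.title + "</title>" +
    "<id>" + change.id + "</id>" +
    "<updated>" + change.updated + "</updated>" +
    "<summary>" + change.summary + "</summary>" +
    "</entry>";
}

var entry = atomEntry({
  title: "Milestone moved: UAT complete",
  id: "urn:example:project42:milestone:7",
  updated: "2007-10-22T09:00:00Z",
  summary: "UAT completion moved from 12 Nov to 19 Nov"
});
```

Any standard feed reader could then subscribe a team member to exactly the changes they care about.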

And the work breakdown structure (a.k.a. plan) is just a widget that uses microformats to integrate with people's calendars and maps.

Projects are social networks

What I like about these ideas is that they are simple ways to directly support project managers using today's technology.

Enterprise Project Management systems started out as monolithic client-server applications. What they're turning into is social networks, because that's what a project team is!

Friday, August 24, 2007

Bill Gates on Personal Computing

You can tell Microsoft still don't understand the web, because they make comments like this:

IBM is no longer at the center of the computer industry, he asserted, for two reasons. First, the industry is now centered on personal computing. "As much as IBM created the IBM PC, it was never their culture, their excellence," he said. "Their skill sets were never about personal computing."

Actually, personal computing dates from the 1990s. We're now entering a different era - call it "web computing" or "social computing", where the focus is on collaboration and networks of people.

Facebook and Flickr are not personal computing - the world doesn't sit on your C: drive any more.

Times have changed, and you have to ask whether Microsoft's "culture", "excellence" and "skill sets" are keeping pace.

Wednesday, August 22, 2007

Multi-touch and DOM events

The iPhone has highlighted a weakness with the HTML DOM - it has keyboard events and mouse events, but nothing for touch screens.

You can try to kludge touch screens using the mouse interface, or if you're simply looking to code drag-and-drop or resize, you can wait until HTML 5 comes along and use its new attributes "draggable" and "resize".

But if this doesn't work, you're stuck. And for other user interfaces, such as accelerometers in the Wii, the situation is even worse.


So it's time to resuscitate my sensors proposal, which provides a framework for any sensor, not just a touch screen.

What it takes is a single new HTML DOM javascript function - document.sensors() - which returns an XML document of all sensors that the page has access to, e.g.:

<sensors xmlns="">
<keyboard shift="" ctrl="" alt="" ins="" value="ab"/>
<mouse x="20px" y="30px" left="down" right="none" middle="none"/>
<touch pressure="30" x="150px" y="50px"/>
<temperature value="23C"/>
<video src="file://c/program%20files/webcam/"/>
<accel x="2" y="0" z="0"/>
<location latitude="37.386013" longitude="-122.082932"/>
</sensors>

This document is constantly being updated with the latest values - here, you can see there is a keyboard, mouse, touch screen, thermometer, webcam, accelerometer, and GPS. The namespace would define a standard basic set of sensors, but I'm sure that extra ones could evolve over time.

So, any web designer can access the latest status of the environment, in an extensible, standard way, using javascript.

In the case of a touch screen, you could place an onchange() listener on the element, to be notified of any movement. You could also watch out for multiple elements, if several fingers hit the screen.
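With a mocked-up sensors document, page code could detect multi-touch like this (remember, document.sensors() is my proposal, not something any browser implements - the mock below stands in for the live XML document):

```javascript
// Mock of the proposed document.sensors() result, as a plain object tree
// (a real implementation would return a live, constantly updated XML doc).
var sensorsDoc = {
  touch: [{ pressure: 30, x: 150, y: 50 }],
  temperature: { value: 23 }
};
function sensors() { return sensorsDoc; }

// Multi-touch detection: several <touch> elements means several fingers.
function fingerCount() {
  return sensors().touch.length;
}

var before = fingerCount();                            // one finger down
sensorsDoc.touch.push({ pressure: 12, x: 40, y: 60 }); // a second finger lands
var after = fingerCount();                             // now two
```

With an onchange() listener on the touch elements, the page would be notified of each update rather than having to poll.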


Some of the sensory information is private, so various security measures should be in place to prevent prying eyes from getting access to it. I've listed a few ideas:

  • Only pages with focus can see the sensory data
  • Browsers allow site-specific permission settings for each sensor
  • Webcam and background sound files can't be posted back to web servers


The simple suggestion above provides a much more comprehensive approach to human computer interaction than today's HTML DOM events. It handles not just keyboards, mice and touch screens, but all types of sensors - from thermometers to accelerometers to webcams - in a standard, extensible way. And it brings the web into many new areas - from security cameras to production line control to satellite navigation.

Wednesday, August 15, 2007

CSS and XPath selectors

Many people have noticed the similarities between CSS selectors and XPath - and it's fair to say that XPath is far more powerful.

In fact, XPath can do pretty much anything that CSS can, plus much more.

select the parents of all paragraphs:
//p/..
select alternate list items:
//ul/li[position() mod 2 = 0]
select table cells with a value less than 10:
//td[number(.) < 10]

Yet another CSS wish - XPath stylesheet selectors

No existing browsers have XPath enabled inside stylesheets. But wouldn't it be great if they did?

All it takes is a new CSS selector - XPath(string), where string is an XPath expression.

For example, selecting paragraphs containing the word 'Chris':
XPath("//p[contains(., 'Chris')]") {border: 1px solid black}

Of course, XPath doesn't itself handle pseudo-elements or pseudo-classes like :hover - but we can mix and match the XPath function with other CSS (e.g. colouring any hovered paragraphs containing two hyperlinks):
XPath("//p[count(.//a) = 2]") :hover {background-color:red}
And it can work with other selectors too, e.g. finding list items in <ul>s directly underneath any <p> element, and shading them alternate colours:
p XPath("ul/li[position() mod 2 = 0]") {background-color: white}
p XPath("ul/li[position() mod 2 = 1]") {background-color: silver}

Following the usual CSS rules, if the XPath function contained malformed XPath, the style would be ignored. It would also be ignored if the expression returned anything other than element nodes (i.e. no attribute nodes or strings). And the default starting node of the XPath query is the document element, unless clarified by preceding CSS selectors.

Why not implement it?

As you can see, just by adding a single new selector, the CSS language is extended in so many ways.

XPath has already been agreed as a W3C recommendation, and has already been implemented in the major browsers.

I therefore see no good reason, other than inertia, why such a powerful new feature can't be added as soon as possible!

Browser User Interfaces

You can tell we've really begun to understand the web in the last five years, because while browsers have got more powerful, their user interfaces are now simpler and clearer.

This isn't always the case; think back to the office suite wars in the 1990s, when Microsoft and others added endless cluttered options, menus and functions to every release of their word processors and spreadsheets.

See the screenshots below of IE4 versus IE7 - the later release is much more streamlined.

The next version of Firefox, v3.0, is simplifying even further by merging History and Bookmarks into a much more powerful, unified interface: Places.

How far can this go?

Even further! I've listed some ideas below which would simplify and extend whole areas of browser design:

  • Show the current time in the browser bar. When clicked, it opens your pre-defined calendar site (e.g. Google Calendar), and you can also drag text onto the clock from a webpage (e.g. "meet in Canary Wharf tonight at 7pm") to add events to your calendar.
  • For devices with GPS: show the current location in the browser bar (e.g. "Canary Wharf"). When clicked, it opens your pre-defined map site (e.g. Google Maps), and you can also drag text onto it from a webpage (e.g. "Canary Wharf") to view that place on a map.
  • Subscribe to an external CSS stylesheet to control default and override settings like font name, font size, link underline, and audio volume. The stylesheet is cached locally and can be edited in a user-friendly manner via the subscribed website. Mozilla could set up their own stylesheet site as a start.
  • Page Analysis: display page file size, security information, "view source", "view HTTP headers", error messages, and spelling or grammar checks. This could be done by posting the page to a subscribed analysis website, if security allows.
  • Tabs and Windows: drag hyperlinks onto the tab bar to open them in a new tab, or drag them outside the browser window to open them in a new window.

The goal is to reduce the browser interface down to a very few, powerful functions.

There's a common theme with the ideas above: functionality that used to be part of the browser - e.g. default fonts, page information - is now provided by a website. You enter the website details in the browser, and the browser hands over the appropriate information.

Websites are likely to be much better than the browser at certain tasks, because of the speed of application development and deployment on the web, the power of HTML and mashups, and the funding of Silicon Valley. It also allows browser makers to concentrate on page rendering, their core competency.

Security Considerations

Of course, the problem with handing features over to websites is security.

The most obvious example is browsing history - this would be much more powerful if it was integrated with your search engine. The problem is, uploading personal information to a web server is arguably even more insecure than storing it on your local computer.

Until privacy is improved on the internet, browsers will have to retain certain functions, like browsing history.


Experience has taught me that a few, simple, deep principles are always far better than many shallow ones. When I see something very complicated, I know it's just as likely to be a weakness in the design as in my understanding.

So it's great that browser makers are able to simplify their products, while extending their features.

Thursday, August 02, 2007

Folders and File Categorisation

Microsoft's Vista, like XP, comes pre-built with folders called "My Documents", "My Photos", "My Music", "My Videos", and even "My Scans".

They're useful for about ten minutes, and then you run into issues - the scheme stops you from categorising your files any other way, for example by project or by customer. Something is very wrong.

The best solution is multi-dimensional categorisation - being able to view your files by document type, project, customer, or any other cut.

However, since WinFS was removed from Vista's feature list, it isn't possible to do this. But for online storage, like Google Apps, these features should be straightforward - right?

Google gets it wrong too

Wrong. There's Google Docs & Spreadsheets, Google Photos (a.k.a. Picasa), Google Videos (a.k.a. YouTube), Blogger, and many other tools.

But there's still no way to group your files together into a "project folder".

They've made a few stabs in that direction - for example, you can attach Picasa images to Blogger entries, and Google Search in the US now works across file types. But in general, Google is even further behind than Microsoft.

How it should work

The obvious technology for combining all these file types is HTML.

Imagine creating an online project homepage, with links to the appropriate photos, videos, blogs, emails, spreadsheets, or even calendars. You could either upload new files, or link to ones previously uploaded to sites like YouTube.

The key point is that every perspective - project, file type, author, etc - should have its own homepage, allowing you to view or edit appropriate files. Many of these files will appear in several different places - the author's homepage, plus the project homepage, plus YouTube - but that's ok, because it reflects life!

Why WinFS didn't work

Looked at this way, it's obvious why WinFS failed, and Vista still has those pre-built folders like "My Photos". You simply can't categorise files without web technology - the URL, the hyperlink, the homepage, mashups, even the Wiki.

It's also a reminder that we're still at the foundation stages of computing - categorising files is a basic requirement that no one has truly accomplished yet.

Data mismatches

Over the years, each of the traditional system tiers - database, web server, browser - has grown more powerful, reliable and manageable.

The problem is, they still don't work with each other well!

I still haven't seen an elegant way to create object oriented code from queries of relational tables (although LINQ comes closest).

And using objects to manage semi-structured, hierarchical HTML doesn't work nicely either - that's why the DOM is so ugly.

Finally, placing (X)HTML neatly in a relational database is very awkward, although that hasn't stopped vendors from attempting it.

Different forms of data

That's because they all use different approaches to model data - table, object, and document.

Each approach has many advantages, each requires a different technical skill and personality type to use, and each works best in different circumstances. Unfortunately, they don't work together particularly well.

Can this last?

At the moment there are a few creaks, but no cracks. The creaks are
  • The success of scripting languages like PHP in managing documents, rather than formal object oriented code
  • Buzz around REST, which uses URLs and HTTP to store (and even edit) data, hiding the underlying relational database
  • Relational databases increasingly outputting XML, rather than proprietary data

Another approach - REST, XQuery

It is now (just) possible to use the document approach throughout every tier. It makes code enormously easy to write and maintain, and it fits perfectly into the web.

You wouldn't want to do this for data-intensive applications, such as handling financial market data. But for document-intensive web applications, such as social networking, blogging and photo-sharing, it's perfect.

The idea is to follow the REST approach:

  • carefully construct a URL for every resource important to your site
  • decide which resources require create, read, update, and delete (CRUD) permissions
  • enable HTTP PUT, GET, POST, and DELETE commands against these URLs
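The steps above can be sketched in javascript. This is only an illustration, not a real framework: the `dispatch` function, the in-memory `store`, and the URLs are all invented here, and a real server would route actual HTTP requests to the same four cases.

```javascript
// Sketch: mapping the four HTTP verbs to CRUD operations on
// URL-addressed resources, backed by a simple in-memory store.
const store = {};

function dispatch(method, url, body) {
  switch (method) {
    case "PUT":    // create (or replace) the resource at this URL
      store[url] = body;
      return { status: 201, body };
    case "GET":    // read
      return url in store
        ? { status: 200, body: store[url] }
        : { status: 404, body: null };
    case "POST":   // update - here, merge into the existing resource
      store[url] = Object.assign({}, store[url], body);
      return { status: 200, body: store[url] };
    case "DELETE": // delete
      delete store[url];
      return { status: 204, body: null };
    default:
      return { status: 405, body: null };
  }
}

dispatch("PUT", "/photos/42", { title: "Canary Wharf" });
console.log(dispatch("GET", "/photos/42").status);  // 200
dispatch("DELETE", "/photos/42");
console.log(dispatch("GET", "/photos/42").status);  // 404
```

Every resource gets its own URL, and the verbs carry all the intent - which is exactly what lets the document viewpoint hide the database underneath.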

Even if these resources are eventually stored in a relational database, this approach totally shields the relational viewpoint in favour of the document.

You can even write most of the server-side code in XQuery. The advantage of doing this is that it fits perfectly with (X)HTML and REST - you can GET documents, extract the relevant parts using XPath, and insert them into page markup using straightforward inline code. No object orientation in sight!
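A sketch of what that server-side XQuery might look like - the file name `entries.xml` and its `<entry>` elements are assumed for illustration:

```xquery
(: extract the relevant parts of a stored document with XPath,
   and insert them straight into page markup :)
let $doc := doc("entries.xml")
return
  <ul>{
    for $entry in $doc//entry
    order by $entry/date descending
    return <li>{ $entry/title/text() }</li>
  }</ul>
```

The output is itself markup, so it drops directly into an (X)HTML page - no object/relational mapping layer in between.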

After all, server-side code does four things:

  • read / update a database: REST, HTTP
  • get / set HTTP headers: XQuery functions
  • manage sessions: XQuery functions
  • construct HTML output: XQuery, HTML

It's very useful if you've got a huge number of URIs (as per REST) - you just have a central application that parses the URI and returns the appropriate mashed-up resources. XQuery is good at this parsing and returning.

One for the future

Unfortunately, the technologies behind REST and XQuery are still very immature and there isn't much support from libraries, documentation, or tools.

And given the immense installed base of relational databases and object-oriented code, and their use in so many different areas, I can't see their dominance diminishing soon.

That's ok - the point is that new ideas are still bubbling forward for improving developer productivity. SQL and OOP both pre-date the web; they have survived well, but it's always worth taking a step back and asking if there's a better approach.

Harry Potter technology: animated paintings

There's a great scene in the film Harry Potter and the Order of the Phoenix when Hogwarts' caretaker, Argus Filch, is taking down an old Tudor oil painting. As he twists the painting to remove it, the men in ruffs get angry, are shaken from side to side, and eventually fall off the bottom.

It's visually stunning, but also emotionally engaging - it gives the viewer a real connection with the men in the painting. Imagine owning a photo frame that did this!

In fact, it must already be possible to create this effect for real, perhaps using the iPhone (since it has tilt sensors).

Surely picture animation is the next huge area for art - a way to break out of the static image and into lifelike, arresting motion. Why shouldn't the next Lucian Freud create animated paintings?

It's also one of the first digital art forms that isn't a direct copy of an analog one - unlike photography or film cartoons, you simply can't do it using paper. And it seems an even better idea than that other Harry Potter gem, the whereabouts clock.

All we need now is an open standard that describes such animation - I don't think SMIL really cuts the mustard...

From financial services to financial management

I don't think banks really help me manage my money. They might store it securely, give me interest, enable transactions, and offer statements, but that's a short list of services - it's not the whole picture.

For example: what about allowing tagging and categorizing of each statement entry, so I can see how much I spend in the supermarkets or on the gas bill each month?

What about helping me with my tax? Or offering reporting and trending? Or providing financial planning tools? Or customized email / SMS alerts (or even customized financial transfers) when I reach various thresholds?

Drill-down and comparison

I've got online accounts with various different utilities and retailers, but I've forgotten many of the passwords and don't get around to visiting them very often.

So, imagine if I could click an entry in my statement, and be taken to the relevant website - e.g. click on an entry from the water company, to see my online account there. It just requires my bank to know the URL for common providers (or even store my usernames and passwords).

That way, my bank is truly helping me organize my finances and track my expenditure. It's moving from offering basic banking services to helping me manage my bills.

I could even pick a utility or shop, and view a timeline with all the money I'd spent there. It could link to a shopping comparison site, to help me understand the competition.

Financial Management is more than Financial Services

I don't think banks understand this, but they're a long way from truly enabling people to manage their finances. They're stuck in a decades-long cycle of improving efficiencies, and it's time to open up new capabilities for their customers.

Sunday, July 29, 2007

The mobile web: Fast Pipe, Always On, Get Out the Way

There have always been tensions between mobile operators and handset manufacturers, because both want control over phone services like email, news and sports readers.

Now, a third sector has entered the mix - Silicon Valley. Google, Yahoo and Microsoft are all racing to get their own services and applications - such as search engines or mapping tools - installed on phones.

Consequently, there's confusion. Amid much publicity, Vodafone recently announced they were automatically reformatting web pages to fit mobile screens - and it quickly became clear that they were removing every advert and breaking many sites.

Fast Pipe, Always On, Get Out the Way

There's only one clear approach to solving this, best summarised by Tim Bray, and it relates to mobile operators: Fast pipe. Always on. Get out the way.

In other words, mobile operators must resign themselves to being pure utilities, providing wireless data at ever-increasing speeds. They're wasting time and money creating their own software services for consumers, because Silicon Valley understands better what to provide, can innovate faster, and can scale better.

Back to the future...

The mobile industry will go the same way as the PC industry. Initially, the only applications available were installed locally on the client. Then, basic web browsers became available. As network speeds increased, web applications became ever more popular. Eventually, people will live in the browser, wondering why any other applications even exist.

Even Silicon Valley seems to be forgetting some of these lessons. Yahoo! Go is a Java download for mobiles that displays email, internet search, and maps. Why aren't they just re-formatting their websites instead, especially as mobile browsers like Safari 3 and Opera 9 support features like Ajax?

Mobile operators ARE broadband providers

The recent rush of mobile operators into providing home broadband is no surprise. It's a chance for them to learn the ropes before the mobile industry becomes fully IP-based too.

Unfortunately, they've failed to learn a key lesson of the web: IP is stupid, and intentionally so. All the power lies at the edge, with the content. Providing good content and services is a very different business to managing an IP network, and I think they'll struggle to do both.

Thursday, July 26, 2007

New types of computer: display and desk

For years, there have been three basic form factors for computers - the desktop, the laptop, and the PDA / smartphone. Now, finally, Moore's law and new display technology are changing things.

I'm not referring here to the iMac, Tablet PC or iPhone - these follow the traditional form factors, albeit in new ways. I'm also not talking about the underlying functionality, which is converging for all form factors towards internet access, phone, and camera.

Instead, I'm saying computers will start to flourish in new environments. There are two in particular I've got in mind - the display, and the desk.


We'll soon see many display computers for the home. These are computers used for displaying information or art - usually wall-mounted, or sat on a table or mantelpiece.

The first examples are digital photo frames. Already, some are adding Wifi, to display photos from a home PC or from a photo sharing website like Flickr.

Within a few years, they'll have touch screen web browsers too, enabling them to display any website.

Imagine a display in your kitchen showing a clock, your up to date calendar, the latest weather information, and the news headlines.

Or imagine a display in the living room connected to the British National Gallery website, rotating through their art collection.

Or even imagine a display on the bedside table, showing a clock during the night, and tuning in to a TV to wake you up with the morning news.

Although they'll be standard web browsers, people won't use them much to browse - mostly, they'll be connected to a single website, refreshing regularly to show the latest content.
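Much of this needs no new technology at all - a page can already ask the browser to reload it regularly, so the display stays current. A minimal sketch (the page title is illustrative):

```html
<head>
  <!-- reload every five minutes so the display shows the latest content -->
  <meta http-equiv="refresh" content="300">
  <title>Kitchen display</title>
</head>
```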


When you're in the office, you want more than a computer desktop - you want a computer desk. Watch the Microsoft Surface demonstration to get the idea, and imagine if your entire office desk was a touch screen computer. This replicates, in the virtual world, the physical papers and files scattered around your desk.

Personally, I can imagine a computer desk being easier to use at an angle rather than horizontal, so you could reach documents further away.

I'm sure the software and processing power for this is here today. We might have to wait a while for the display technology, though - after all, it requires massive, durable, high resolution touch screens. But the first prototypes are coming out now.

Fitting our lifestyles better

People have been talking about the digital home for a long time. Finally, the vision is becoming clearer - touch screen web browsers, connected to personalized services in the cloud (the home server was a red herring).

Now we have this vision, and most of the technology required to achieve it. It's a question of fitting it to our lifestyles - whether it's in the office, the kitchen, or the living room.

Wednesday, July 25, 2007

SVG as image format

There are two common methods for adding SVG to a page - inline, via <svg>, and externally, via <object>.

The <object> element is bad. It's not semantic - it may as well be called <other> or <miscellaneous>. Although useful in the short term for displaying SVG, I would hope that this use will diminish.

The inline <svg> element is also bad, for the same reason - it's not semantic. It's the equivalent of having a <jpg> element, rather than using <img> - it's named after the format, rather than the purpose.

From the semantic perspective, there are three potential uses of SVG.

  • Foreground images: use <img src="x.svg"> to point to an SVG file
  • Background images: use CSS "background-image" to point to an SVG file
  • Inline with connected DOM: use <iframe> to point to an SVG file.

These are much better because they re-use existing semantic elements.
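Side by side, the three uses look like this - the SVG file names are hypothetical:

```html
<!-- foreground image: re-uses the semantic <img> element -->
<img src="chart.svg" alt="Sales chart">

<!-- inline with connected DOM: the SVG keeps its own document and scripts -->
<iframe src="diagram.svg"></iframe>

<!-- background image: plain CSS, no new markup at all -->
<style>
  body { background-image: url("theme.svg"); }
</style>
```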

Unlike foregrounds or inline images, backgrounds should not enable any user interaction - events (e.g. mousedown), hyperlinks, pseudo-classes (e.g. :hover), etc. Some people say javascript should be turned off - this might be a rough and ready first implementation, but some javascript might be appropriate (e.g. random placement of shapes, or animation), so long as the "no user interaction" rule is followed.

The other advantage of <img> and CSS background-image over <svg> is that you don't need to use XHTML. Standard HTML gets round a whole series of issues with mime types, browser control, and backwards compatibility.

Advantages of SVG as image format

SVG images fill a lot of gaps with HTML styling:

  • rounded rectangles, circles, and any polygon
  • fancy borders (arcs, swirls, etc)
  • opacity, color gradients and filters
  • shape hyperlinks and :hover, rather than pixel maps
  • interaction via the DOM (for foreground images)
  • scaling of background images
  • multiple backgrounds (in one SVG)
  • background text (e.g. graffiti, murals, etc)
  • intricate website 'themes' for each page

The possibilities for graphical designers are huge.

Browser support

I'm very pleased to see that the next version of Opera will support SVG images via <img> and background-image. Unfortunately, it's not on the schedule for either Firefox 3 or Safari 3, although it's an aspiration for both teams.

There are four possible methods of using SVG in a webpage - <svg>, <object>, <img>, and CSS backgrounds.

The SVG implementation status for Firefox and Safari is marked at around 55%. Personally, while they only support two of these four methods, I'd halve that figure.

Microsoft still don't get it

A quote this week from their CEO:

Ballmer explained that Microsoft already is rearchitecting its core platform to be more of a Web-centric one. As he told the Partner Conference audience, “the programming model stays .Net and Windows.” But beyond that, Microsoft is redoing its products and business models from scratch.

How do they expect to be Web-centric if they're using client-server programming tools? What's wrong with HTML, javascript, RSS, and dynamic server-side languages like PHP? Google must be laughing all the way to the bank.

Tuesday, July 10, 2007

Free your data: use HTML tables

Recently I've concluded that spreadsheets are not an optimal way to manage tabular data.

That's quite a claim - Excel earns billions for Microsoft every year, and the recent upsurge of OpenOffice is about standards and open source, not about form and functionality. Even Google has faithfully reproduced the spreadsheet as an online application.

So, let's review the problems with spreadsheets:

  1. Page layout, text, and multimedia. You can't simply relegate this to MS Word - people want to annotate, explain and display their data professionally. Spreadsheets are awful at this.
  2. WYSIWYG. Spreadsheets display endless rows and columns, no matter how much data there is.
  3. Semantics. You can't distinguish headers, footers, or captions, except through styling. That prevents spreadsheets from being properly computer readable.
  4. Storing data. It's hidden in a binary or zip file, among heaps of formatting, configuration data, etc.
  5. Linking to external data. Especially on the internet - if you can only analyse your own data, you're missing out on a lot, e.g. mashups.
  6. Publishing data. If you give someone a spreadsheet, they can edit every cell - unless you rely on hopelessly insecure password protection!
  7. Collaboration. Online discussions, versioning, and synchronous editing.

The first three issues are fundamental and inherent with any spreadsheet. The final three are inherent with client-based spreadsheets, but could be partially solved using online tools like Google Spreadsheets.

Using web standards for tabular data

Let's take a step back and categorize everything that spreadsheets do with tabular data, and whether there are any internet technologies with equivalent functionality:

Spreadsheet capabilities, and the web technologies that match them:

  • Store - visually in cells, semantically in a data format: HTML tables
  • Transform - sort, group, filter, pivot, consolidate, chart: DOM / XSL
  • Style - borders, shading, text formatting: CSS
  • Model - functions, calculated values, goal seek: Javascript, XPath

All these use cases - with the possible exception of conditional formatting, where CSS falls short - can be simply achieved using HTML, CSS, XSL, and a bit of javascript. The browser can beat both MS Excel and OpenOffice - we can free our data.
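As a taste of the "Transform" and "Model" rows above, here is plain javascript doing the work of a spreadsheet over tabular data - the rows and column names are invented for illustration, and in a real page they would be read from an HTML table via the DOM:

```javascript
// Rows of a hypothetical expenses table
const rows = [
  { item: "Rent",      amount: 950 },
  { item: "Groceries", amount: 240 },
  { item: "Travel",    amount: 120 },
];

// Transform: sort and filter, as a spreadsheet would
const sorted  = [...rows].sort((a, b) => b.amount - a.amount);
const over200 = rows.filter(r => r.amount > 200);

// Model: a calculated value, like a SUM() cell
const total = rows.reduce((sum, r) => sum + r.amount, 0);

console.log(sorted[0].item);  // "Rent"
console.log(over200.length);  // 2
console.log(total);           // 1310
```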

So what does it look like?

The solution is an editable HTML table inside a web page. The table contains all the spreadsheet functionality you need - sorting, grouping, functions, etc - but rather than taking up the whole page, it's just part of the page, and only contains the amount of cells you need. This allows analysts to surround their data with website text, images, or video (solving problem 1).

Only the appropriate number of rows and columns are displayed in the table - if you want more, you can add them (solving problem 2). This makes the page much more natural and avoids existing problems with people getting lost at the 64,000th row.

HTML tables are the best semantic way to store tabular data (solving problem 3), since there is a range of elements - rows, columns, headers, footers and captions - to label the contents. And because it's a web page, all sorts of collaboration, linking, and publishing techniques are immediately available (solving problems 4-7).
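Those semantic elements in full - the figures here are just sample data:

```html
<table>
  <caption>Monthly spending</caption>
  <thead>
    <tr><th>Item</th><th>Amount</th></tr>
  </thead>
  <tbody>
    <tr><td>Rent</td><td>950</td></tr>
    <tr><td>Groceries</td><td>240</td></tr>
  </tbody>
  <tfoot>
    <tr><td>Total</td><td>1190</td></tr>
  </tfoot>
</table>
```

Every part of the table - header, body, footer, caption - is labelled by purpose, which is exactly what a spreadsheet cell grid can't express.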

New use cases

There are plenty of new opportunities once you use the web to manage tabular data. None are particularly feasible using spreadsheets:

  • mashups
  • new widgets, e.g. maps
  • embedded microformats (e.g. addresses, calendars, etc)
  • version management (rather than endless versions on corporate C: drives)
  • publishing and read-only tables
  • extensibility - new functions & transformations
  • using the web as a database, e.g. DabbleDB

Web Data Management

I don't think this even requires a new online application. You can easily imagine it being part of a blogging website - when you insert a table into your blog entry, the spreadsheet functionality immediately becomes available.

It's just a matter of imagination and time before this happens - and personally, I can't wait!

Friday, July 06, 2007

Semantic text editors and Wikis

Most text editors nowadays (including Microsoft Word) provide a huge range of different styles for the user to present their documents - bold, alignment, indent, border, background-color, spacing, font color, and many more.

As a result, their range of semantic elements is poor. As Ian Hixie says, "People think visually. Trying to ask a Web designer to think in terms of (e.g.) headers instead of font sizes is just something that WYSIWYG implementers and UI researchers simply haven't solved yet."

Too much style, too little semantics

This doesn't matter too much in a stand-alone Word document. But in a collaborative environment - like a corporate portal, or a web community - semantic markup is much more important. It enforces a consistent look and feel, it allows re-styling of the entire site if required, it reduces storage size, and it aids search engines.

The solution may be the Wiki. The Wiki designer selects site-wide CSS styles for each HTML element, with some options where necessary. For example, they may select:

  • standard paragraph font type, color, size
  • three variants of the <em> element, styled bold, italic and underline
  • standard header elements
  • standard list types
  • various different table options, e.g. header rows and cells, data rows and cells

End users must then choose the relevant elements, rather than applying arbitrary styling. By removing unnecessary options - for example, enforcing the Arial font - Wikis actually become easier to use.
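A sketch of such a site-wide stylesheet - the class names for the emphasis variants are invented here, but the idea is that the designer fixes the look of each element once:

```css
/* hypothetical site-wide Wiki stylesheet: users pick elements,
   the designer decides how each element looks */
p          { font-family: Arial, sans-serif; font-size: 13px; }
h1, h2, h3 { font-family: Arial, sans-serif; }

em         { font-style: italic; }                       /* first variant  */
em.key     { font-style: normal; font-weight: bold; }    /* second variant */
em.note    { font-style: normal; text-decoration: underline; } /* third    */

th         { background-color: silver; }   /* header cells */
```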

The semantic text editor is an old vision. But new ideas about content management and user collaboration - specifically, Wikis, which will grow massively in usage - offer a solution.