Thursday, May 24, 2007

Software Development = Process Improvement

The point of enterprise applications is to enable business processes.

Great developers understand their business process from end to end - the different departments and partners involved, the missteps and hand-offs, the data flows and human tasks at each stage. In fact, they often understand it better than the business managers!

And the main role of many CIOs now is to own the process for improving business processes. This requires strong business skills and understanding of best practice in the marketplace.

Automating Process Improvement

So it's pretty ironic that the least automated process I can think of is process improvement itself - it spans business requirements, analysis, design, coding, configuring, testing, communications, and release. Developers are meant to be the experts at automating processes, yet they can't even automate their own!

As a result, you need a translation layer ("business analysts") between the business and software developers. The fact that you need a translation layer at all really bugs me - it should be transparent.

Developers focus on programming languages and coding techniques (Java versus .Net, dynamic versus static, REST versus WS-*), but this misses so much of the end-to-end process. For example, what if there were a website that

  • hosted your application
  • provided page design and mock-up functionality
  • provided workflow design ability between different pages
  • supported version management and code trunks
  • created test environments on the fly
  • produced automatic test scripts
  • provided a security and permissions module
  • handled auto-backups and disaster recovery
  • handled Atom data sources

Viewing Process Flow

Developers want to create master process engines to direct workflows (e.g. Business Process Management Systems, or complex BPEL XML), but this misses the central lessons of the web: the URL, the stateless web page, the hyperlink, and the browser "back" button.

I'm envisaging something a bit different: a 'page sorter view' for web pages, similar to Microsoft PowerPoint's slide sorter for presentations. Site designers could use this to:

  • check completeness of the site
  • ensure a consistent look and feel
  • check which pages hyperlink to others (perhaps represented by arrows)
  • check which pages view / edit data sources
  • check page permissions
  • view workflow from page to page

This way, much of the site design can be done before development even starts, and in time for the business to validate it - a much better process for process improvement.

You could even enable a REST style by guiding designers to select appropriate URLs up front, and promoting Atom for data sources.

Plenty of Room for Improvement

Code writing is only a fraction of the end-to-end process for process improvement, which remains largely immature and unautomated.

Just as Software as a Service (SAAS) providers are shaking up industries like account management, HR and Finance, I can see SAAS versions of Eclipse and Visual Studio taking over the market - not just for developers, but for designers and the business too.

Only when developers have automated their own processes will they be able to turn their full power to enabling the business.


The internet is eroding many traditional application categories, and enabling many new ones.

So it's worth revisiting what types of applications there are - which is another way of saying, "what is software for?".

1. Content Management

"Content" could be text, images, diagrams, videos, or sounds, whether produced by professionals or otherwise. Think of emails, phone calls, spreadsheets, organisation charts, books, newspaper articles, TV news, blogs, and shopping lists.

"Management" includes

  • creation (e.g. camera)
  • editing (e.g. photo red-eye elimination)
  • collaboration (e.g. discussing which bits to airbrush)
  • versioning (e.g. knowing who, when and how it was edited)
  • distribution (e.g. emailing photos, or publishing them to a website)
  • consumption (e.g. viewing in a browser or digital photo frame)
  • search (e.g. Google Images)
  • storage (e.g. Photobucket online storage)

Microsoft Office ruled Content Management for over a decade. Now, the industry is moving towards the internet for distribution, web browsers for consumption, Google for search, and a host of online services - "Web 2.0" - for editing, collaboration, versioning, and storage. Although the revolution started in the consumer world, content management is moving to the enterprise too.

2. Consumer Services

Many applications provide services (above and beyond content) for the general population. Examples include banking, shopping, dating, gaming, and travel websites.

Consumer services was the focus of Web 1.0 and the first dot-com boom, with the awful moniker "disintermediation". Companies built websites on top of their back-end Process Management systems, to provide a cheap new way to reach their customers.

3. Process Management

Process software is what keeps organizations ticking. "Process" includes HR, Finance, Sales, Supply Chain Management, and a host of industry-specific processes. "Management" involves driving workflow, enforcing business rules, managing data, providing reports and charts to track success, and offering tools to improve the process further.

SAP and Oracle are the major providers of Process Management software, but many Software as a Service (SAAS) suppliers are challenging their business models using the internet as a distribution channel. Major corporations employ armies of software developers instead, to ensure a competitive advantage.

4. Environment Management

Software is also used to sense and control the physical environment. Examples include thermostats, ABS braking in cars, production line robots, talking children's toys, oil rig drill systems, and vending machines.

This is the area that the web hasn't yet reached - although TCP/IP is often used for device communications, and HTML displays are used as dashboards.

5. System Software

System software is used internally in the IT industry to power the four application types above. Examples include operating systems, anti-virus tools, databases, and storage management systems.

Wednesday, May 16, 2007

Business Intelligence

Two years ago, the business intelligence market was pretty static. At the bottom end, tools like Microsoft Excel, ChartFX graphs and Crystal Reports were suitable for most data analysis. At the top end, Business Objects and Cognos ruled, incorporating data warehouses, OLAP cubes and sophisticated data querying functionality.

Changes at the top end...
Recently, the top end has completely changed as vendors realise they need to understand the business processes being analyzed. So it's merging with Business Process Management software - note the recent purchases by Tibco of Spotfire, and by Oracle of Hyperion. That's because there are common industry metrics and reports - like the click-through rate in the advertisement industry - which analysts want to see out of the box. Vendors also get a chance to move up the food chain and offer companies not just reports, but business advice too.

Opportunities at the bottom end
Meanwhile, the bottom end of the market has remained static. I think there is a massive opportunity here for innovative new tools based in the browser.

That's because of two recent trends:

  • Most data is now available on the web, and if it isn't, many tools (e.g. blogging platforms) can easily get it there.
  • Browsers now support either SVG or VML, vector graphics formats that can be manipulated using javascript, in addition to Flash.

Internet analysis
Imagine a website where you could log in, type a reference URL for your data, and immediately see all manner of customizable graphs and charts based on it. I can see it being part of an online suite of content management applications, alongside word processing, spreadsheets, and presentations.

There are plenty of advantages compared to (for example) Microsoft Excel charts

  • You can easily analyse data stored elsewhere - in realtime - simply by typing in a URL
  • No need to download a client application, it can be done online
  • You can support a community of analysts, sharing chart types, techniques and advice
  • You can build up a library of chart types and styles, allow people to create favourites and apply them to many different data sources
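As a sketch of the idea, here's how such a tool might render a fetched data series as an SVG bar chart with plain JavaScript. The svgBarChart helper is invented for illustration; a real service would first fetch the data from the user-supplied URL.

```javascript
// Build an SVG bar chart as a markup string from an array of numbers.
// In a real tool the data would come from the reference URL via
// XMLHttpRequest; here it is passed in directly.
function svgBarChart(values, width, height) {
  var max = Math.max.apply(null, values);
  var barWidth = width / values.length;
  var svg = '<svg xmlns="http://www.w3.org/2000/svg" width="' + width +
            '" height="' + height + '">';
  for (var i = 0; i < values.length; i++) {
    var barHeight = (values[i] / max) * height;
    svg += '<rect x="' + (i * barWidth) + '" y="' + (height - barHeight) +
           '" width="' + (barWidth - 2) + '" height="' + barHeight + '"/>';
  }
  return svg + '</svg>';
}

var markup = svgBarChart([3, 7, 5], 300, 100);
// markup can then be injected into the page in an SVG-capable browser.
```

The same function could just as easily emit VML for Internet Explorer - the point is that the chart is generated client-side, from live data, with no client application to install.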

Moving business intelligence online
Many client applications are starting to move online - think of Google Docs, or MySpace blogs, or HP Snapfish - but they always take on a new twist. Often, this is in collaboration, storage and search.

Low-end business intelligence - graphs, charts, and reports - is no exception. Expect a flurry of tools as Silicon Valley realises the power of web graphics.

Monday, May 14, 2007

Paper versus computers

The paperless office has been an IT dream for decades. But despite huge leaps in technology, it's still permanently a few years out, and most people prefer to use both, depending on the scenario.

In order to understand why, and to figure out whether and when this will ever change, I've listed the pros and cons of computers and paper.

Feature | Best medium | Automated
------- | ----------- | ---------
Distribution costs | Computer | 1990s
Marginal costs | Computer | 1990s
Document copy | Computer | 1990s
Validation (spellcheck, form values) | Computer | 1990s
Document editing (delete, move sections around, etc) | Computer | 1990s
Store & search | Computer | 2000s
Reading quality | Paper | 2010s

You can convert formats from computer to paper by printing, and vice versa by scanning. This helps you gain the benefits of that medium, but the conversion process is not perfect.

Recent changes

  • Accessible - search engines can now crawl documents and automatically extract important data, because of open formats such as HTML. Browsers can display data according to the user profile (e.g. large fonts).
  • Store & Search - search engines have made a massive difference to the ability to find documents, and online services such as Photobucket and Google Documents enable online storage of information.

Likely to change in the next five years

  • Doodles - via pen / touch interface and standard vector graphics (Flash, Silverlight or SVG formats)
  • Handwriting - via pen / touch interface, with OS support
  • Collaboration - office suites and content management applications will be integrated with new collaboration features such as Wikis, Blogs, and Voice over IP.

So, soon you will scribble far fewer notes and diagrams on bits of paper as it becomes more natural to use computers for these scenarios. However, you'll still need to print documents out to read them in high quality, since I can't see display technology getting as good as basic paper.

Paper is still better than monitors in many ways. When was the last time you quickly doodled a diagram on your computer? Or scrunched up your monitor to fit in your pocket? But videos, hyperlinks, storage, and search are all much better on a computer. With the advent of wikis, blogs, instant messaging, and other technologies, it's getting easier to work together on computers too.

Paper will only be eliminated when computers have the edge for every feature and every person. This is not likely to happen soon, and in any case most people are very happy working in a world that combines the two.

Maths and web technology

Like many in the IT industry, I studied maths and science at university. I got used to dealing with concepts fundamental to understanding the world, such as sets, functions, and logic - the same concepts that John von Neumann and Alan Turing used to define the first computers.

You might think these concepts would carry over to modern computer science - but actually, it's surprising how often the IT industry forgets them, to its own massive detriment.

This might be down to Silicon Valley's eternal optimism that it can rip up the rule book and invent new ways of doing things. It might also be down to the random-walk way in which innovation occurs, or the legacy of many quick patches and minor tweaks.

Either way, I've put together a map of internet technology against the fundamental areas in maths; the gaps show there are many opportunities for improving IT, especially in the areas of equality, geometry, and dynamics.

Sets, Lists, and Trees
A set is probably the most fundamental mathematical concept - an unordered collection of objects. Basic set operations include order (i.e. number of members), union, and intersect.

Sets are only rarely used on the web. Instead, special cases are used: ordered sets (i.e. lists), hierarchical sets (i.e. trees), and linked sets (i.e. graphs) - most obviously in javascript arrays, HTML structure, and search engines respectively.

That's because these extra properties give them more power, and cover most of the use cases. So in this area, web technology maps to maths pretty well.
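For illustration, the basic set operations mentioned above can be emulated over plain JavaScript arrays, treating them as duplicate-free, unordered collections. These helpers are invented for the sketch, not a built-in API:

```javascript
// Treat arrays as sets: assume no duplicate members.
function union(a, b) {
  var result = a.slice();
  for (var i = 0; i < b.length; i++) {
    if (result.indexOf(b[i]) === -1) { result.push(b[i]); }
  }
  return result;
}

function intersect(a, b) {
  var result = [];
  for (var i = 0; i < a.length; i++) {
    if (b.indexOf(a[i]) !== -1) { result.push(a[i]); }
  }
  return result;
}

// "Order" in the set-theoretic sense is just the member count.
function order(a) { return a.length; }
```

Note that the language gives you the ordered case (arrays) for free, while the plain unordered set has to be built by hand - which rather proves the point that lists cover most of the web's use cases.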

Equality, Functions and Logic
The equals sign is used so often in HTML and javascript that you might think equality is pretty much covered. But it's not - the equals sign is used to temporarily assign a value to a variable, rather than provide a definition that applies over time.

For example, there is no direct way to say "keep the width of this HTML table at double the value typed in the input box", so that whenever the value changes, the table automatically re-jigs itself. You can do this in spreadsheet formulas without programming events, so why not in javascript?
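Here's a minimal sketch of what spreadsheet-style equality could look like if built by hand in JavaScript - the cell and formula helpers are hypothetical, not an existing API:

```javascript
// A cell holds a value and notifies dependents when it changes.
function cell(value) {
  var listeners = [];
  return {
    get: function () { return value; },
    set: function (v) {
      value = v;
      for (var i = 0; i < listeners.length; i++) { listeners[i](); }
    },
    watch: function (fn) { listeners.push(fn); }
  };
}

// A formula cell re-computes whenever any input cell changes,
// so "tableWidth = 2 * inputValue" holds over time.
function formula(inputs, compute) {
  var result = cell(compute());
  for (var i = 0; i < inputs.length; i++) {
    inputs[i].watch(function () { result.set(compute()); });
  }
  return result;
}

var inputValue = cell(100);
var tableWidth = formula([inputValue], function () {
  return 2 * inputValue.get();
});

inputValue.set(150);
// tableWidth.get() is now 300 - the equality was maintained automatically.
```

The point is that all this plumbing should be unnecessary: a declarative equality, written once, would replace the listeners and the many assignment statements.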

What's needed is a functional approach to web programming - see my earlier blog entry for more details.

Without a decent foundation for equality, it becomes needlessly complex and awkward to program events and animation. You also end up with many statements controlling variable values, when really only one will do.

One basic area of logic that the web doesn't cater for is automatically re-arranging equations. In the example above, if you manually stretched the HTML table, then the value in the input box should change in order to maintain the equality.

Fields
Integers and real numbers are both examples of fields in maths - sets of numbers with two standard operators defined (addition and multiplication) and identities for each operator (0 and 1 respectively).

I haven't seen any general approach to fields in a programming language - but by covering common special cases (such as integers and real numbers) using data types, I think they're ok.

The obvious other special cases are complex numbers and multi-dimensional fields, e.g. three dimensional vectors. The first is pretty rare except in physics, the second can already be accomplished by manually creating a new data type and overloading the addition, multiplication and equality symbols.
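JavaScript doesn't allow operator overloading, but the same idea can be sketched with methods on a hand-rolled complex-number type:

```javascript
// A minimal complex-number type with the two field operators.
function Complex(re, im) { this.re = re; this.im = im; }

Complex.prototype.add = function (other) {
  return new Complex(this.re + other.re, this.im + other.im);
};

Complex.prototype.mul = function (other) {
  return new Complex(this.re * other.re - this.im * other.im,
                     this.re * other.im + this.im * other.re);
};

Complex.prototype.equals = function (other) {
  return this.re === other.re && this.im === other.im;
};

// The field identities: 0 for addition, 1 for multiplication.
var ZERO = new Complex(0, 0);
var ONE = new Complex(1, 0);

// i * i = -1
var i = new Complex(0, 1);
var minusOne = i.mul(i);
```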

Geometry
The web is pretty poor at basic geometry. Even though it's only two dimensional, HTML restricts itself to static vertical and horizontal coordinates - you can't specify angles, even though the concept is two and a half thousand years old!

SVG is a bit better, in that it defines basic linear transformations (rotations, scaling, and shears). But there's no way to naturally apply these transformations to HTML elements, even in a compound SVG + HTML document. And even in SVG, you can't specify paths using a function like sin(x) - you have to produce a list of pre-calculated points, and rely on the renderer to join them up using a Bezier curve.

Finally, SVG is really missing a trick in not allowing curvilinear coordinates. These allow pages and page elements to be stretched and squished in arbitrary ways. It's just the thing for graphic designers!

Calculus and Dynamics
Calculus is obviously not possible on the web, except by manually creating complicated javascript libraries. There is only one area where it creeps in - in SMIL, you can edit the speed of an audio or video element, and do basic animations.

This is probably because calculus has a reputation as a very technical subject, and the business value is not immediately clear.

But there is one area where calculus' business value is immense - animations. You simply can't use speed and acceleration variables without some understanding of how they relate - which is governed by calculus.

So I recommend a couple of advanced XPath functions

  • speed(node_value) - sets / returns the rate of change of the node value
  • accel(node_value) - sets / returns the acceleration of the node value

These functions really aren't that complicated, but they allow you to implement vastly more powerful dynamics than SMIL - as an earlier blog entry discusses.

They're also reliant on functional programming, where an equation holds true over time. For example, imagine
accel(//div1/@css:left) = //gas_pedal/@value
which would accelerate div1 to the right by the amount held in the gas_pedal node, which could alter with user input.

You simply can't do this using SMIL!
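To see how acceleration, speed and position relate, here's a rough stand-in for what a browser's animation engine would have to do under the covers - simple Euler integration, sketched in JavaScript:

```javascript
// Step a 1-D motion forward in time: each tick, acceleration updates
// speed, and speed updates position (basic Euler integration).
function step(state, accel, dt) {
  return {
    speed: state.speed + accel * dt,
    left: state.left + state.speed * dt
  };
}

// Simulate a div accelerating right while the "gas pedal" is held.
var state = { left: 0, speed: 0 };
var gasPedal = 2;                       // units per second squared
for (var t = 0; t < 100; t++) {
  state = step(state, gasPedal, 0.01);  // 100 steps of 10ms = 1 second
}
// After one second, speed is close to 2 and position is close to 1.
```

With a declarative accel() function, the browser would run this loop for you; today you'd have to write it yourself with timers.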

Probability
In mathematics and computer simulations, you often model situations with several possible outcomes. In javascript, the Math.random() function returns an unbiased random number between 0 and 1, which can be used in the modelling.

This single feature is already enough for very powerful models. For example, imagine writing this:
<div width="100 * Math.pow(Math.random(), 2)">Hello World</div>
which assigns a probability distribution to the width, biased towards values near 0px.
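The bias is easy to verify by sampling: since the expected value of r² for uniform r is 1/3, the average width comes out near 33 rather than 50, with most widths clustered towards 0.

```javascript
// Sample the width distribution 100 * r^2, where r is uniform on [0, 1).
var total = 0;
var samples = 10000;
for (var n = 0; n < samples; n++) {
  var r = Math.random();
  total += 100 * r * r;
}
var average = total / samples;
// E[r^2] = 1/3, so the average lands near 33 - most widths are small.
```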

Web Technology can learn from mathematics
Even the simple mapping above shows that there are some big opportunities for further development of the web. Because they're based on fundamental mathematics, they're guaranteed to stand the test of time.

Authors of current web specs (e.g. SVG and SMIL) should look at geometry, dynamics, and equality, and integrate centuries of learning into their approach.

Friday, May 04, 2007


This week's Economist contains some predictions about machine-to-machine wireless communications. Most of the devices mentioned seemed to be sensors - whether used by the military, civil engineers, security guards, doctors, or retailers.

This got me thinking about how sensors can be handled on the web. We're all used to dealing with mice and keyboards - what about location or pressure sensors, or thermometers, accelerometers and gyroscopes, which are already being integrated into phones?

Providing sensory data to the web
My idea is that browsers should pull together all this information and make it available to web pages in a standard way. Sensory information is becoming more and more important, especially on the mobile web, where knowledge of location, direction, and acceleration is vital to displaying great web pages.

For example, imagine if the following XML fragment was accessible via a javascript sensors() function:

<sensors xmlns="">
<keyboard shift="" ctrl="" alt="" ins="" value="a"/>
<mouse x="20px" y="30px" left="down" right="none" middle="none"/>
<touch pressure="30" x="150px" y="50px"/>
<temperature value="23C"/>
<video src="file://c/program%20files/webcam/"/>
<accel x="2" y="0" z="0"/>
<location latitude="37.386013" longitude="-122.082932"/>
</sensors>

Here, the browser is presenting all the information it can find about its environment from connected sensors - the A button is down on a keyboard, the mouse is being clicked, the screen is being touched, the temperature is being read, there is a connected webcam, the device is being accelerated, and it knows its position. All defined in a (fictitious) standard XML data format.

Using sensory data
Different devices have different sensors - the Wii has an accelerometer, the Nokia N95 has GPS, my phone has a camera - so the sensory data will be different in each case. And there may be privacy implications - you might configure your browser to grant location data only to the emergency services and your favourite map website.

So the web developer's first step will be to parse the data to find out which sensors are available. They could do this using XPath - for example sensors('//accel/@x') only returns a value if there is an accelerometer.

Imagine using the following javascript:

window.setInterval(function () {
  document.getElementById('div1').innerHTML = sensors('//location/@latitude');
}, 10);

which updates the div1 tag with up-to-date latitude information every 10 milliseconds.

The possibilities are endless

  • satellite navigation in the browser
  • scroll web pages using acceleration
  • pen doodling on the web, using a touch screen and SVG / VML
  • website games using local web cams.

Personalising your pages
Sensory data is the ultimate way to personalize web pages: they can react in realtime to the local environment that their visitors are experiencing.

There is currently no standard framework for accessing this data - but the simple ideas above would bring the web to the next level.

Online receipts and my bank account

I always lose my receipts - they're small bits of paper that get trapped in bags and thrown out, or lost in drawer clutter, or blown out of my wallet.

Apart from costing me money when I need to make a claim, this puts me at risk of identity theft, since receipts often contain my full bank account details.

So, why not move to using web receipts? That way, I can confidently shred my paper receipts immediately, in the knowledge that the data is secured online. And it brings plenty of benefits, such as accessibility, search and storage, and hyperlinks from my online bank statement.

Online Receipts and my Bank Account
The idea is simple. I already have online logins with plenty of retailers, for example my local supermarket (Tescos). Whenever I make a purchase at Tescos, either online or in the store using card details it recognizes as mine, it should create a receipt web page.

I can imagine logging in to Tescos and being able to view details for every purchase I've made, each at a unique URL. I can treat each URL as a receipt by printing it off.

And what's more, Tescos could pass this URL to my bank, so it appears in my online bank statement.

Then I could browse my bank statement online, find an entry I didn't quite understand at Tescos, and click to be taken directly to the receipt for more information (via Tesco's login page).

No new technology
The beauty of this plan is that it doesn't rely on any new technology.

Retailers already have websites, and already store details of every purchase in their databases. All they have to do is put these details on their website so that only the purchaser can access them.

And banks already have online statements. There is often even a rarely used field for each transaction that could be used to store a URL.

And there's an easy migration path, with clear incentives. The old way of doing things still works, but people would prefer using retailers if their receipts were online, and would also prefer using banks that provided hyperlinks from their statement to their receipts.