Monday, February 19, 2007

Touch-screen displays on the web

Recently I saw a fabulous demo of touch-screen displays in action.

In the demo, the user is shown manipulating shapes on the screen with both hands - squeezing images, grabbing multiple shapes simultaneously and pushing them together, and simultaneous drag-and-drop.

This reminded me of Steve Jobs's iPhone demo, and my comments at the time that HTML can probably handle multi-touch UIs, but JavaScript might struggle.

Experience tells us that developers need several different methods to handle user interaction. There should be a simple method with default behaviours, and a more detailed method giving fine control; and there should be a declarative approach for XML developers, and a procedural approach for those who prefer scripting.

It's clear that several things will have to change before websites cater appropriately for touch screen displays.

Firstly, we need more support in CSS for simple effects like drag and drop (our simple, declarative method).
Secondly, we need more declarative support for animation (fine-grained, declarative method).
Thirdly, if there is no mouse, there is no right mouse button - so we'll need to rethink the approach for context-sensitive menus. Microsoft have innovated here with the new 'ribbon' interface in Office 2007 - I'll save this piece for another post.

CSS user interaction

CSS styles fit the bill perfectly for a simple, declarative approach. If we add a series of user interface CSS styles, the user gets a consistent experience, and the developer doesn't have to worry about endless code:

  • draggable = "no | yes" - elements with this style can be moved across the page via user interaction.
  • resizable = "none | x | y | preserveAspectRatio | all" - elements with this style can be re-sized via user interaction, along either or both axes.
  • zoomable = "no | yes" - elements with this style are containers (e.g. <html> or <div> tags) and zooming commands are available on the contents of the container.
  • pannable = "no | x | y | all" - elements with this style are containers (e.g. <html> or <div> tags) and panning commands are available on the contents of the container (e.g. panning around Google Maps). This could be scrollbars, or some other user interface method, depending on the browser.

For each of these styles, the exact user interaction method doesn't matter to the web developer - it could be a mouse, a touch screen, voice commands, or something else, as set by the browser or the operating system. In some cases (e.g. touch screen) there could be multiple user interactions at the same time; that's all handled by the browser. All the web developer need care about is setting the appropriate styles.
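To make this concrete, here's a sketch of how the proposed styles might look in a stylesheet. None of these properties exist in CSS today - the names and values are exactly the ones proposed above, and the selectors are invented for illustration:

```css
/* Hypothetical properties from the proposal above - not real CSS. */
img.photo {
  draggable: yes;                   /* user can move the image around */
  resizable: preserveAspectRatio;   /* user can re-size, aspect locked */
}

div#map {
  pannable: all;    /* panning commands available on the contents */
  zoomable: yes;    /* zooming commands available on the contents */
}
```

Whether panning appears as scrollbars, a two-finger drag, or something else is left entirely to the browser.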

Declarative animation

Anyone who's tried to program drag and drop knows that the DOM is painfully awkward at tracking certain user interactions - but imagine dragging two objects on a touch screen simultaneously! Which event object would you use?

The real pain here is for events like mousemove. These are "continuous events", a contradiction in terms which reveals the flaw in the underlying approach. For continuously evolving features, languages should use Functional Animation instead (see my previous post).

Imagine if the browser maintained user interaction state (mouse position, touch screen location, etc) in a read-only XML file directly accessible to developers. For example:

<pointers>
<pointer status="active" screenX="100" screenY="100" elementref="div0" relativeX="5" relativeY="5"/>
</pointers>

For the mouse, there would only be one <pointer/>, with "active" status when the mouse was down, and "inactive" when up. For touch screens, there would be any number of <pointer/> elements (including zero), each representing a finger or stylus touching the screen. The elementref attribute stores a reference to the element that the pointer is currently over, the relativeX and relativeY attributes store the location relative to this element, and the screenX and screenY attributes store the location relative to the screen.
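A procedural reading of that state file: since the <pointers> document is hypothetical, the sketch below mirrors its attributes as plain objects, and shows how a developer might query for the active pointers over a given element - remembering that on a touch screen there can be several at once:

```javascript
// Sketch only: the <pointers> document is hypothetical, so pointer state
// is mirrored here as plain objects with the same attribute names.
function activePointersOver(pointers, elementref) {
  // Return every active pointer currently over the named element -
  // on a touch screen there may be more than one.
  return pointers.filter(function (p) {
    return p.status === "active" && p.elementref === elementref;
  });
}

// Example state: two fingers down, one pointer lifted.
var pointers = [
  { status: "active",   screenX: 100, screenY: 100, elementref: "div0", relativeX: 5,  relativeY: 5 },
  { status: "active",   screenX: 240, screenY: 180, elementref: "img1", relativeX: 12, relativeY: 8 },
  { status: "inactive", screenX: 300, screenY: 300, elementref: "div0", relativeX: 0,  relativeY: 0 }
];
```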

Once you have this file, you can do functional animation based on it. For example, using the XForms <bind> tag:

<bind infoset="id('img1')" calculatewhen="//pointer[@elementref = 'img1' and @status = 'active']">
  ./@css:left = "//pointer[@elementref = 'img1']/@screenX";
  ./@css:top = "//pointer[@elementref = 'img1']/@screenY";
</bind>

which, once activated, binds img1 to the evolving location of the mouse or stylus.
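For readers who think procedurally, the bind above amounts to recomputing the element's position from the pointer state on every update. A JavaScript sketch of the same logic (using the same hypothetical pointer attributes as before) - this is exactly the code the declarative version lets you avoid writing:

```javascript
// Procedural equivalent of the <bind> above, over hypothetical pointer
// objects mirroring the attributes of the proposed <pointer/> elements.
function bindPosition(pointers, elementref, current) {
  for (var i = 0; i < pointers.length; i++) {
    var p = pointers[i];
    // An active pointer over the element: track it, as the
    // css:left / css:top rules in the bind do.
    if (p.elementref === elementref && p.status === "active") {
      return { left: p.screenX, top: p.screenY };
    }
  }
  // No active pointer: leave the element where it is.
  return current;
}
```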

As you can see, this approach avoids the need to use JavaScript at all - CSS user-interaction styles and declarative functional animation are enough.

Touch screens are the future

Computer mice have been around for so long that it's tempting to see them as a permanent fixture in computing. But actually they're pretty unintuitive - remember seeing someone using a mouse for the first time?

As touch screens spread, web developers will be faced with an interesting set of challenges, which are best overcome using a few simple CSS tags, and a declarative approach.

1 comment:

stelt said...

on (dis)continuous events:

Skipping a pixel or a few is usually not a problem at all for mousemove.
Totally skipping passing an object (mouseover+mouseout) can be a problem, which does happen (or events are not fired at the border, but somewhere else).
Can't faster hardware or smarter software fix this?
You don't want to bother the application developer with how non-continuous the cursor is, I think.