Friday, October 22, 2010

My views on usability of the touch interface devices for blind users

 

In my last post I talked about the usability of button interface devices such as a DVD player, and I promised to discuss touch interface devices for blind users. But before we talk about touch interfaces, I need to cover a few usability principles.

 

The first usability principle is that users should be able to figure out how to work with an interface with a minimal amount of documentation. In some scenarios a manual cannot be avoided, but designers should try to reduce the need for one. For example, some assistive devices suggest the commands that can be used with the interface element in focus.

 

The second principle is to provide feedback about the state of the system. Have you ever used a water tap with two inputs, one for cold water and one for hot? It is easy to mix the two for the correct temperature because you get immediate feedback. Now try the same with a shower. In a shower the feedback is slightly delayed, and that delay often makes it difficult to mix the correct temperature. So you can imagine what would happen if we got no feedback at all.

 

The third principle is to keep things simple, i.e. one interface element should not be used for more than one function. I gave an example of such usage in my last post, but many systems use the same button for multiple functions.

 

The fourth principle is to use conventions that are obvious to most users. For example, the up arrow is often used to move up or increase a value, and the down arrow to move down or decrease it. Can you think of an exception? On one of my air conditioners, the up arrow was used to decrease the temperature and the down arrow to increase it. On top of that, the two arrows were placed side by side, confusing us further. Right and up are often used to increase, and left and down to decrease, but it is best to use up for increase and down for decrease.

 

Given the principles above, we can now focus on the touch screen. First of all, it is very difficult for a blind user to know what is on a screen. A touch screen that instantly activates whatever is touched can therefore create havoc.

 

So for a blind user, a simple touch should not activate any item on the touch screen. A touch should only speak the item in focus, making it easy for the blind person to explore the screen. Besides speaking the item in focus, the system should also tell the user how to work with it: for example, how to activate the current item, or how to change the touch mode so that advanced touch commands can be issued. It is very important that the default mode is very simple and understandable by people who are not tech savvy.

 

For example, if the user touches the screen or navigates to a button, the system may say, “Next button. Double tap to activate, or triple tap to switch touch modes.”
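The speak-first, activate-on-double-tap behaviour could be sketched roughly as below. This is a minimal illustration, not any real platform's API; the class, the callback, and the 0.4-second double-tap window are all my own assumptions.

```python
import time

DOUBLE_TAP_WINDOW = 0.4  # seconds; an assumed threshold, not a standard value


class AccessibleTouchScreen:
    """Hypothetical touch handler: touching an item only announces it."""

    def __init__(self, speak):
        self.speak = speak      # callback that voices a string aloud
        self.focused = None     # item the user last touched
        self._last_tap = 0.0

    def on_touch(self, item):
        now = time.monotonic()
        if item is self.focused and now - self._last_tap < DOUBLE_TAP_WINDOW:
            # A quick second tap on the already-focused item activates it.
            item.activate()
        else:
            # A single touch never activates; it only speaks the item
            # and explains how to work with it.
            self.focused = item
            self.speak(f"{item.label}. Double tap to activate, "
                       "or triple tap to switch touch modes.")
            # (Triple-tap mode switching is omitted here for brevity.)
        self._last_tap = now
```

The key design point is that activation always requires a deliberate second gesture, so exploring the screen by touch is safe.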

 

There is a trade-off between making an interface simple and making it useful. A very simple interface may be easy to use, but almost all modern systems have so many functions that making them totally simple is not possible. So a system may be designed to be easy for the majority of users while providing more options for advanced users. Those extra options, however, should not confuse beginners.

 

We can take a clue from the design of some remote controls. Some provide the half dozen most frequently used buttons up front, while the buttons for advanced options are hidden in a separate compartment covered with a lid.

 

For our touch interface, such an option can be created by adding multiple modes that users can switch between. These modes should be simple and, where possible, context dependent. For example, all commands related to reading and editing should be available when the focus is in an edit box.
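One way to think about context-dependent modes is a table that maps the kind of control in focus to the commands it offers, on top of a small set of commands that are always available. The context names and command lists below are purely illustrative assumptions.

```python
# Commands specific to each kind of focused control (illustrative only).
COMMANDS_BY_CONTEXT = {
    "edit_box": ["read line", "read character", "select", "copy", "paste"],
    "button":   ["activate"],
    "list":     ["next item", "previous item", "activate"],
}

# A small set of commands that is always available, so beginners are
# never stranded in an unfamiliar mode.
BASIC_COMMANDS = ["explore", "switch mode", "help"]


def available_commands(context):
    """Return the always-available commands plus those for this context."""
    return BASIC_COMMANDS + COMMANDS_BY_CONTEXT.get(context, [])
```

Keeping the basic commands constant across all modes is what stops the extra options from confusing beginners.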

 

One challenge is that a touch interface depends on the available hardware features. For example, Windows Mobile devices could not use any gesture that required multiple touch points. Another such hardware-dependent option is found on the iPhone and Android, where one can simply shake the device to perform a cancel command.

 

We can also use some conventions. Users generally understand basic gestures such as sliding up to move up, sliding down to move down, sliding right to move forward, and sliding left to move back. As mentioned in the example above, whenever such gestures are available, the system should let the user know about them.

 

Another convention is to present complex information, such as that found in web pages, in a document format. This convention has evolved because speech is sequential: when working with speech, one cannot focus on multiple items at once, so information should be laid out sequentially. Due to this limitation, a blind user is most productive when information is available line by line and character by character. It also helps if, while navigating line by line or character by character, the user learns about the structure of the document. But the information has to be sequential. Another reason for such a line-by-line, character-by-character layout is certainty: the user knows that the commands required to find something are limited, and that those simple commands can be used to scan the entire layout. Because a blind user cannot just glance at the screen, he or she may have to scan the entire screen before finding the item of interest.
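The line-by-line, character-by-character navigation described above could be sketched as a small cursor over plain text. This is a toy illustration under my own assumptions, not any screen reader's actual implementation.

```python
class SequentialReader:
    """Toy cursor for reading a document line by line and character by
    character, in the sequential style speech output requires."""

    def __init__(self, text):
        self.lines = text.splitlines()
        self.line = 0   # index of the current line
        self.col = 0    # index of the current character on that line

    def current_line(self):
        return self.lines[self.line]

    def next_line(self):
        """Move to the next line (if any) and speak it from the start."""
        if self.line + 1 < len(self.lines):
            self.line += 1
            self.col = 0
        return self.current_line()

    def next_char(self):
        """Move to the next character on the current line and speak it."""
        text = self.current_line()
        if self.col + 1 < len(text):
            self.col += 1
        return text[self.col] if text else ""
```

Because there are only two movement commands per axis, the user can scan any document with the same handful of gestures, which is the certainty argument made above.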

 

One very powerful tool is to provide a help mode in the touch gesture interface. In help mode, the user may perform a gesture, and the system tells the user what command that gesture issues. In another mode (a training mode), the system may ask the user to perform a specific command and then tell the user whether the gesture was performed correctly.
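The two modes could work roughly as follows. The gesture names and response strings are hypothetical; the point is only that help mode describes a gesture without executing it, while training mode checks the user's attempt against a requested gesture.

```python
# Hypothetical gesture-to-command table for help mode.
GESTURE_HELP = {
    "slide up": "move to the previous item",
    "slide down": "move to the next item",
    "double tap": "activate the focused item",
}


def help_mode(gesture):
    """Describe what a gesture would do, without performing it."""
    action = GESTURE_HELP.get(gesture)
    if action is None:
        return "This gesture is not assigned to any command."
    return f"{gesture}: {action}."


def training_mode(requested_gesture, performed_gesture):
    """Ask the user to perform a gesture, then report the result."""
    if performed_gesture == requested_gesture:
        return "Correct."
    return (f"Not quite. You performed {performed_gesture}; "
            f"try {requested_gesture}.")
```

Help mode lets a user explore the gesture vocabulary safely, and training mode gives the immediate feedback that the second usability principle calls for.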

 

As always, I would be interested to know your views on these issues. The usability of touch interfaces has become very important because, whether we like it or not, touch interfaces are here to stay.

 

PS: My son is now able to use the button on the DVD player that I mentioned in my last post.

 

 

Monday, October 11, 2010

Usability from a toddler's perspective

 

I must clarify that I am not a toddler, but I am writing from a toddler's perspective (whatever I know by observing my son). Alright, let's get to the matter that I want to talk about. The following discussion is about the usability of a DVD player.

 

My son, Namish, is now 3 years old (I should say 3 years young), and he has been operating a DVD player since the age of 2. But he is unable to use the new DVD player that we got a couple of months back. So why is a child who could use a DVD player at 2 unable to do so at 3? Anyone who has observed a child closely knows that a 3-year-old's understanding is much better than a 2-year-old's. One argument could be that the new DVD player has a new interface, so he is taking time to learn it. But it should not take him more than 2 months. Another argument could be that we are blind, so we can't point out the signs for play/stop or next/previous. But we certainly showed him by pressing those buttons.

 

In my view it is due to somewhat bad design. There are two concepts behind the word 'design': one, design equals aesthetics, and two, design equals usability. In terms of aesthetics, the new DVD player is pretty good looking. It has a single button that combines all four functions in one. In other words, it is a four-directional joystick button.

 

The top of the button is play, and the bottom is pause. The left of the button is back, and the right is forward. This is very simple for people who are familiar with modern digital devices, but for those who are new to these gadgets, it is quite confusing.

 

Our earlier DVD player was designed so that buttons were paired by functionality. There were two buttons: one very long button for play/pause, and a second long button for back/next. That design was also slightly confusing, but a line in the middle marked the difference, so he could easily learn to use those buttons.

 

If device manufacturers followed the rule of one button per function, life would be simpler for many of us, including the little ones among us. But device manufacturers are now moving away from buttons to touch interfaces.

 

That brings me to touch screen based devices. The simple rule is that users touch the item of interest to activate it. This rule is very simple for people who can see, but very complicated for people who are blind: they get no feedback and do not know what to do with those items. I will try to cover this in my next post.

 

And if, in the meanwhile, you have some ideas, please share them with me.