My views on the usability of touch interface devices for blind users
In the last post I talked about the usability of button interface devices such as a DVD player, and I promised that I would discuss touch interface devices for blind users. But before we talk about touch interfaces, I need to talk about a few usability principles.
The first usability principle is that users should be able to figure out how to work with the interface with the least amount of documentation. In certain scenarios avoiding manuals is not possible, but designers should try to reduce the need for them. For example, some assistive devices provide suggestions about the commands that can be used with the interface element in focus.
The second principle is to provide feedback about the state of the system. Have you ever used a water tap with two inputs, one for cold water and one for hot? It is easy to mix water to the correct temperature because you get immediate feedback. Now try the same with a shower: the feedback is slightly delayed, and that delay often makes it difficult to get the temperature right. You can imagine what would happen if we got no feedback at all.
The third principle is to keep things simple, i.e. one interface element should not be used for more than one function. I gave an example of such usage in my last post, and there are many systems that use the same button for multiple functions.
The fourth principle is to use conventions that are obvious to most users. For example, an up arrow is often used to move up or increase, and a down arrow to move down or decrease. Can you think of an exception to this? On one of my air conditioners, the up arrow was used to decrease the temperature and the down arrow to increase it. On top of that, the two arrows were placed side by side, confusing us further. Often right and up are used to increase, and left and down to decrease, but it is best to use up for increase and down for decrease.
Given the principles above, we can now focus on the touch screen. First of all, it is very difficult for a blind user to know what is on the screen. A touch screen that might instantly activate whatever is touched can then create havoc.
So for the blind user, a simple touch should not activate any item on the touch screen. It should only speak the item at the focus, making it easy for the blind person to explore the screen. Apart from speaking the item at the focus, the system should also inform the user how to work with it: for example, how to activate the current item, or how to change the touch mode so that advanced touch commands can be issued. It is very important that the default mode is very simple and understandable by people who are not tech savvy.
For example, if the user touches the screen or navigates to a button, the system may say “Next button, double tap to activate, or triple tap to switch touch modes”.
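To make this concrete, here is a minimal sketch of the idea in Python. It is not any real platform's API; names like ScreenItem, speak and on_touch are placeholders I made up for illustration.

from dataclasses import dataclass

@dataclass
class ScreenItem:
    label: str  # e.g. "Next"
    role: str   # e.g. "button"

def speak(text: str) -> None:
    print(f"[speech] {text}")  # stand-in for a real text-to-speech call

def on_touch(item: ScreenItem) -> None:
    # Touching only announces the item and how to work with it;
    # activation would require an explicit double tap.
    speak(f"{item.label} {item.role}, double tap to activate, or triple tap to switch touch modes")

on_touch(ScreenItem("Next", "button"))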
There must be a trade-off between making it simple and making it useful. A very simple interface may be easy to use, but almost all modern systems have so many functions that making them totally simple is not possible. So the system may be designed to be easy for the majority of users while providing more options for advanced users. But those extra options should not confuse beginners.
We can take a clue from the design of some remote controls. Some provide the half a dozen buttons that are used most frequently, while the buttons for advanced options are hidden in a separate compartment covered with a lid.
For our touch interface, such an option can be created by adding multiple modes that users can switch between. These modes should be simple and, if possible, context dependent. For example, all the commands related to reading and editing should be available when the focus is in an edit box, as in the sketch below.
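Here is one way such context-dependent command sets might look, again as a rough Python sketch; the context names and commands are only examples, not taken from any real screen reader.

COMMANDS_BY_CONTEXT = {
    "edit box": ["read line", "read character", "select text", "delete word"],
    "button": ["activate"],
    "default": ["explore", "activate"],
}

def available_commands(focus_context: str) -> list[str]:
    # Only the commands that make sense for the focused element are offered,
    # which keeps the default mode small and simple.
    return COMMANDS_BY_CONTEXT.get(focus_context, COMMANDS_BY_CONTEXT["default"])

print(available_commands("edit box"))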
One challenge is that a touch interface depends on the availability of hardware features. For example, Windows Mobile devices could not use any gesture that required multiple touches. Another hardware-dependent option is found on iPhone and Android, where one can simply shake the device to perform a cancel command.
We can also use some conventions. Users often understand basic gestures such as slide up to move up, slide down to move down, slide right to move forward and slide left to move back. As mentioned in the example above, whenever such gestures can work, the system should let the user know about them, as sketched below.
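For instance, assuming each element declares which of the conventional slide gestures it supports (an assumption of mine, not a feature of any particular system), the announcement could include them:

def announce_with_gestures(label: str, supported_gestures: list[str]) -> str:
    # Tell the user which conventional gestures will work on the focused element.
    hint = ", ".join(supported_gestures) if supported_gestures else "no slide gestures available"
    return f"{label}. Available gestures: {hint}"

print(announce_with_gestures("Chapter list", ["slide up", "slide down"]))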
Another convention is to present complex information, such as that found in web pages, in a document format. This convention has evolved because speech is sequential: when working with speech, one cannot focus on multiple items at once, so information should be laid out sequentially. Due to this limitation, a blind user is most productive when information is available line by line and character by character. It is also useful if, while navigating line by line or character by character, users get to know about the structure of the document, but the information has to be sequential. Another reason for such a line / character arrangement is certainty: the user knows that the commands required to find something are limited, and that a few simple commands can be used to scan the entire layout. Because a blind user can't just glance at the screen, he or she may have to scan the entire screen before finding the item of interest.
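As a rough illustration (all class and method names here are my own, not from any real screen reader), sequential review of a document might look like this:

class DocumentReviewer:
    def __init__(self, text: str):
        self.lines = text.splitlines() or [""]
        self.line = 0  # index of the current line
        self.char = 0  # index of the current character within the line

    def next_line(self) -> str:
        # Move the review cursor down one line and return it for speaking.
        if self.line < len(self.lines) - 1:
            self.line += 1
            self.char = 0
        return self.lines[self.line]

    def next_char(self) -> str:
        # Move one character forward within the current line.
        current = self.lines[self.line]
        if self.char < len(current) - 1:
            self.char += 1
        return current[self.char] if current else ""

reviewer = DocumentReviewer("First heading\nSome body text")
print(reviewer.next_line())  # "Some body text"
print(reviewer.next_char())  # "o"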
One very powerful tool is to provide a help mode for the touch gesture interface. In help mode, the user may perform a gesture and the system tells the user what command that gesture would issue. In another mode (a training mode), the system may ask the user to perform a specific gesture and then tell the user whether it was performed correctly.
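A help mode could be as simple as a flag that switches the gesture handler from executing commands to describing them. The gesture and command names below are made up for illustration.

GESTURE_TO_COMMAND = {
    "double tap": "activate item",
    "two-finger swipe down": "read all",
}

def on_gesture(gesture: str, help_mode: bool) -> str:
    command = GESTURE_TO_COMMAND.get(gesture, "not assigned to any command")
    if help_mode:
        # In help mode nothing is executed; the gesture is only explained.
        return f"{gesture}: {command}"
    return f"executing: {command}"

print(on_gesture("double tap", help_mode=True))   # "double tap: activate item"
print(on_gesture("double tap", help_mode=False))  # "executing: activate item"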
As always, I would be interested to know your views on these issues. The usability of touch interfaces has become very important because, whether we like it or not, touch interfaces are here to stay.
PS: My son is now able to use the button on the DVD player that I mentioned in my last post.