A reflection on the experience of designing a NUI

For the NUI class project, we decided to use the MYO armband as our mid-air input technology for symbolic input. We imagined a scenario of symbolic input for wearable technologies where a physical keyboard cannot be used, screen space is limited, and voice commands might be socially awkward.

The MYO seemed to be a good fit for input in such a scenario because it is minimally intrusive and can be used anywhere: since it is worn on the forearm and reads muscle activity, its sensing ability is not confined to a physical workspace.

It came with its own set of issues, as listed below.
There are only a few simple poses that the MYO can recognize.
Like other mid-air input technologies, the MYO is not exempt from the live-mic problem: it is always sensing, so we needed to design a reserved gesture to work around it (see the sketch after this list).
When using the MYO, there is also no physical frame of reference.
The MYO also seems to be very sensitive to the initial calibration.
For a subset of the poses (which happen to stress the muscles less than the others), the MYO produces a lot of false negatives.
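
To make the reserved-gesture idea concrete, here is a minimal sketch of the clutch pattern we used to work around the live mic: nothing counts as input until a dedicated engage pose is seen. The pose names and the event format are illustrative assumptions, not the exact MYO SDK API.

    ENGAGE_POSE = "double_tap"   # reserved: toggles whether input is live

    class Clutch:
        def __init__(self):
            self.engaged = False

        def on_pose(self, pose):
            """Forward a pose only while input is engaged; the reserved pose
            itself is consumed and never forwarded as input."""
            if pose == ENGAGE_POSE:
                self.engaged = not self.engaged
                return None
            return pose if self.engaged else None

    # Example: only the poses between the two double taps reach the recognizer.
    clutch = Clutch()
    stream = ["fist", "double_tap", "wave_in", "fist", "double_tap", "wave_out"]
    print([p for p in (clutch.on_pose(p) for p in stream) if p])  # ['wave_in', 'fist']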

We had to overcome or compensate for all of these issues to make the interface more usable. For instance, we combined the accelerometer and gyroscope data with the pose information to build a larger vocabulary of gestures and support more interactions.
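
As a rough illustration of that combination, the sketch below multiplies a small pose set by coarse IMU motion, so the same held pose becomes a different gesture depending on the accompanying arm motion. The pose names, gyroscope units, and threshold are assumptions for illustration, not the values we actually tuned.

    def motion_label(gyro, threshold=40.0):
        """Label the dominant rotation axis from gyroscope rates (deg/s)."""
        axes = {"roll": gyro[0], "pitch": gyro[1], "yaw": gyro[2]}
        name, rate = max(axes.items(), key=lambda kv: abs(kv[1]))
        if abs(rate) < threshold:
            return "hold"
        return name + ("+" if rate > 0 else "-")

    def compound_gesture(pose, gyro):
        """A handful of poses x seven motion labels yields a much richer set."""
        return f"{pose}/{motion_label(gyro)}"

    print(compound_gesture("fist", (5.0, 2.0, -1.0)))      # fist/hold
    print(compound_gesture("fist", (3.0, 90.0, 10.0)))     # fist/pitch+
    print(compound_gesture("wave_in", (0.0, 4.0, -75.0)))  # wave_in/yaw-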

In one of our initial designs we considered using just one pose: registering that pose, and then the movement until the pose was lost, was interpreted as input. But we realized this design would not allow for continuation, so in our subsequent designs we included rotation to increase the expressive power of the gesture and reduce ambiguity.
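
A minimal sketch of this hold-rotate-release pattern is below: a pose onset registers the gesture, rotation accumulated while the pose is held selects among candidates, and releasing the pose commits the input. The specific pose name and the 30-degree bin size are assumptions made purely for illustration.

    class HoldRotateRelease:
        def __init__(self, commit, bin_degrees=30.0):
            self.commit = commit          # callback receiving the selected index
            self.bin = bin_degrees
            self.active = False
            self.accumulated = 0.0

        def update(self, pose, yaw_delta):
            """Feed one frame: the current pose and the yaw change since the last frame."""
            if pose == "fist" and not self.active:      # onset: register the gesture
                self.active, self.accumulated = True, 0.0
            elif pose == "fist" and self.active:        # held: accumulate rotation
                self.accumulated += yaw_delta
            elif self.active:                           # released: commit selection
                self.active = False
                self.commit(int(self.accumulated // self.bin))

    selections = []
    fsm = HoldRotateRelease(selections.append)
    frames = [("rest", 0), ("fist", 0), ("fist", 20), ("fist", 25), ("rest", 0)]
    for pose, dyaw in frames:
        fsm.update(pose, dyaw)
    print(selections)   # [1] -> 45 degrees of rotation falls in the second bin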

We also had to design the system so that the more frequent tasks avoided the poses the MYO most often failed to recognize, even though those were the poses that were fairly simple and less fatiguing (for example, the pinky pose).

We also tried to use redundant gestures to improve primitive recognition, and we designed the system to interpret a given gesture the same way at all times. For example, an open fist would always mean "go one level up", regardless of the state of the system.

To deal with the lack of a physical frame of reference, we kept the gestures relative to one another rather than defined in absolute terms. The MYO lends itself conveniently to this purpose, since it is not limited to a physical workspace.
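
As a small illustration of what "relative rather than absolute" means here, the sketch below measures rotation against the orientation captured when the gesture began, rather than against fixed world coordinates. Reducing orientation to a single yaw angle is a simplification assumed purely for illustration.

    class RelativeFrame:
        def __init__(self):
            self.origin = None

        def start(self, yaw_now):
            """Capture the current orientation as the gesture's own origin."""
            self.origin = yaw_now

        def offset(self, yaw_now):
            """How far the arm has rotated since the gesture began, wrapped to +/-180."""
            d = yaw_now - self.origin
            return (d + 180.0) % 360.0 - 180.0

    frame = RelativeFrame()
    frame.start(yaw_now=170.0)           # the user may be facing any direction
    print(frame.offset(yaw_now=-175.0))  # 15.0 -> same gesture regardless of heading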

From a high-level design perspective, we came to understand the importance of interaction flow. We had to make sure to use gestures that flowed well into one another, so that the interface was usable rather than cumbersome and did not slow the user down.

Because this is symbolic input, we wanted the system to be easily learnable and to transition the user from novice to expert quickly, so we designed the sequence of gestures to always be the same, allowing it to be memorized and performed easily.

The system aims for visual independence over time, so initially, for scaffolding purposes, the mapping between each gesture and the visual feedback showing which symbol is being selected for input is carefully designed.

From designing this NUI, it has become very clear that there is quite a delicate balance between leveraging what the input technology has to offer and designing around its limitations and constraints, while keeping the interface effective and usable at the same time.
