The motivation to design a novel text input technique for the class project came from observing common trends in newer NUI applications such as Google Glass. As we started talking about the kinds of problems often encountered in NUI scenarios, we realized that text input still depends heavily on traditional methods. For instance, when a physical keyboard cannot be supported due to application constraints, the closest form of text input is most likely a virtual keyboard operated through touch sensing or gestural selection by pointing. These techniques suffer from inherent difficulties, such as a lack of force feedback, or limited visual feedback due to the spatial constraints of displaying the virtual keypad. Beyond these, such techniques are often difficult to understand and master, demanding considerable cognitive effort.
We wanted to come up with a novel design that strikes a balance between performance, in terms of typing accuracy and speed, and usability, in terms of ease of use and the cognitive demands associated with the technique.
Our brainstorming process initially started out with us discussing the following questions:
What are the general problems in gestural or touch interfaces, in the broadest sense?
What specific tasks need to be supported in gesture- or touch-based systems?
What problems are associated with each of these techniques?
What work has already been done to address these issues?
After discussing all of these questions, we decided that text input is applicable in a wide variety of scenarios, and that it would be very useful to design a natural user interface that addresses at least some of the open issues.
Some of the resources that we looked at in the process: