Analysis of Prior Work for In-Air Symbolic Input

Text input, and by extension symbolic input, is an essential form of interaction with almost any type of computer system, from PCs to PDAs to wearable technology. Given the significance of symbolic input, our attempt at designing a gesture-based natural user interface is understandably not the first of its kind. This blog post analyzes prior work in this area.

Gesture-based symbolic input lends itself as a very natural alternative in scenarios where traditional keyboard-based systems cannot be employed or are difficult to use. Examples of such scenarios include small-screen devices (typically wearables) and large collaborative display systems. The literature offers several gesture-based symbolic input solutions for these scenarios, such as pinch gloves, where a pinch between the thumb and a finger, combined with the orientation of the hand (inward or outward), maps to a particular character, and systems that track a finger's position in space with a Leap Motion device to select a character.

The pinch glove technique [1] draws on the user's familiarity with the QWERTY keyboard layout, and the few arbitrary gestures it introduces are not hard for users to memorize. The physical feedback of the thumb pressing against another finger also resembles pressing a key on a physical keyboard. This technique proved sufficiently error-free to be used for text input in an immersive virtual environment.
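
To make the idea concrete, here is a minimal sketch of how such a pinch-to-character mapping could be represented. The specific assignments below are hypothetical illustrations; the actual layout in [1] follows the QWERTY rows and differs from these toy values.

```python
# A sketch of a pinch-to-character mapping in the spirit of [1].
# Each gesture is (hand, finger, orientation); the inward/outward
# orientation of the palm doubles the available symbol space.
PINCH_MAP = {
    ("left",  "index",  "inward"):  "a",
    ("left",  "middle", "inward"):  "s",
    ("left",  "ring",   "inward"):  "d",
    ("left",  "pinky",  "inward"):  "f",
    ("right", "index",  "inward"):  "j",
    ("right", "middle", "inward"):  "k",
    ("right", "ring",   "inward"):  "l",
    ("right", "pinky",  "inward"):  ";",
    ("left",  "index",  "outward"): "q",
    ("right", "index",  "outward"): "u",
    # ... remaining combinations would fill out the rest of the layout
}

def decode_pinch(hand, finger, orientation):
    """Return the character for a thumb-to-finger pinch, if mapped."""
    return PINCH_MAP.get((hand, finger, orientation))

print(decode_pinch("right", "middle", "inward"))  # -> "k"
```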

Thumbcode [2] is a similar approach that exploits the full set of distinct states the human hand can form, based on the three phalanges of each finger (barring the thumb) and the closure states of the fingers (whether adjacent fingers are spread apart or held together). The performance of such a system, however, likely depends on the effectiveness of the tracking technology.
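
A quick back-of-envelope count shows why this state space is attractive. Assuming, per the scheme in [2], that the thumb can touch any of the three phalanges on each of the four other fingers, and that each of the three gaps between those fingers can independently be open or closed, the sketch below tallies the distinct hand states:

```python
# A back-of-envelope count of the Thumbcode [2] symbol space.
# The arithmetic just illustrates why the state space is large;
# the actual alphabet assignment is defined in the paper.

FINGERS = 4            # index, middle, ring, pinky (the thumb is the pointer)
PHALANGES = 3          # each finger offers three touchable segments
GAPS = FINGERS - 1     # gaps between adjacent fingers, each open or closed

touch_targets = FINGERS * PHALANGES      # 12 places the thumb can press
closure_states = 2 ** GAPS               # 8 open/closed combinations

print(touch_targets * closure_states)    # 96 distinct hand states
```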

MotionInput [3] extends the 2D touch interface familiar from mobile screens to 3D gestural input: the user's finger is tracked in space, and its position is mapped to a character. The authors found this system easy to learn, but also tedious, slow, and heavily dependent on visual feedback.
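
A minimal sketch of the core mapping follows, assuming a hypothetical virtual keyboard plane with fixed-size key cells; the grid geometry and units are illustrative, not taken from [3].

```python
# Position-to-character mapping in the spirit of MotionInput [3]:
# the tracked fingertip is projected onto a virtual keyboard plane
# and snapped to the key cell under it.

ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_SIZE = 30.0  # assumed size of one key cell on the virtual plane, in mm

def position_to_key(x_mm, y_mm):
    """Map a fingertip (x, y) on the virtual plane to a key, or None."""
    row = int(y_mm // KEY_SIZE)
    col = int(x_mm // KEY_SIZE)
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None

print(position_to_key(65.0, 40.0))  # second row, third cell -> "d"
```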

The work by Scott Frees and colleagues [4] demonstrates character input by drawing letters with a stylus on a hard surface. Although not gestural, this system is relevant because it leverages the user's existing knowledge of character shapes; when designing a gesture-based system, building on knowledge the user has already acquired can yield gestures that are extremely intuitive.
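
Frees et al. describe their own dot-based input technique in [4]; purely for illustration of the letter-drawing idea, here is a generic nearest-template stroke matcher in the spirit of simple unistroke recognizers, assuming input and template strokes have already been resampled to the same number of points.

```python
# A toy stroke recognizer: normalize a drawn stroke into a unit box,
# then pick the letter template with the smallest mean point distance.
import math

def normalize(stroke):
    """Scale a stroke (list of (x, y) points) into a unit bounding box."""
    xs, ys = zip(*stroke)
    w = (max(xs) - min(xs)) or 1.0   # avoid dividing by zero for flat strokes
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in stroke]

def distance(a, b):
    """Mean point-to-point distance between two equal-length strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(stroke, templates):
    """Return the letter whose template stroke best matches the input."""
    norm = normalize(stroke)
    return min(templates, key=lambda ch: distance(norm, normalize(templates[ch])))

# Hypothetical three-point templates; real systems record and resample these.
templates = {
    "l": [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0)],   # a single down-stroke
    "v": [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)],   # down, then back up
}
stroke = [(10.0, 5.0), (10.0, 25.0), (10.0, 45.0)]   # a vertical stroke
print(recognize(stroke, templates))                   # -> "l"
```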

The Dasher technique [5] combines continuous input capturing the user's intent to type a letter (by tracking the user's eyes, mouse, or hands, as shown in this video: https://www.youtube.com/watch?v=YSSADq6rCu0) with a predictive language model.
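
The essence of Dasher's idea can be sketched as follows: screen space is allocated to candidate letters in proportion to a language model's predictions, so likely letters occupy larger regions and are faster to steer into. The toy probabilities below are assumptions for illustration, not values from [5].

```python
# Allocate display space to letters in proportion to model probability,
# then select whichever letter's region the continuous pointer lands in.

def allocate_intervals(probs):
    """Split [0, 1) into one interval per character, sized by probability."""
    intervals, lo = {}, 0.0
    for ch, p in sorted(probs.items()):
        intervals[ch] = (lo, lo + p)
        lo += p
    return intervals

def select(pointer_y, intervals):
    """Pick the character whose interval contains the pointer position."""
    for ch, (lo, hi) in intervals.items():
        if lo <= pointer_y < hi:
            return ch
    return None

# After typing "th", a language model might heavily favor "e":
probs = {"e": 0.6, "a": 0.2, "i": 0.15, "o": 0.05}
intervals = allocate_intervals(probs)
print(select(0.3, intervals))  # the pointer lands in "e"'s large interval
```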

The Ring (https://d2pq0u4uni88oo.cloudfront.net/projects/841093/video-350596-h264_high.mp4) maps handwriting directly as gesture input.

It is evident that, to make the design usable, fast, and error-free, we need gestures that are quick and easy to perform, remember, and recognize; a good predictive model that frees the user from having to type every character; and careful consideration of how much the input depends on visual feedback.

References:

[1] Bowman, Doug A., Chadwick A. Wingrave, J. M. Campbell, V. Q. Ly, and C. J. Rhoton. “Novel uses of Pinch Gloves™ for virtual environment interaction techniques.” Virtual Reality 6, no. 3 (2002): 122-129.

[2] Pratt, Vaughan R. “Thumbcode: A Device-Independent Digital Sign Language.” In Proceedings of the 13th Annual IEEE Symposium on Logic in Computer Science, Brunswick, NJ. 1998.

[3] Qiu, Shuo, Kyle Rego, Lei Zhang, Feifei Zhong, and Michael Zhong. “MotionInput: Gestural Text Entry in the Air.”

[4] Frees, Scott, Rami Khouri, and G. Drew Kessler. “Connecting the Dots: Simple Text Input in Immersive Environments.” In VR, pp. 265-268. 2006.

[5] Ward, David J., Alan F. Blackwell, and David J. C. MacKay. “Dasher—a data entry interface using continuous gestures and language models.” In Proceedings of the 13th annual ACM symposium on User interface software and technology, pp. 129-137. ACM, 2000.
