The first design studio helped us present our initial design ideas for using the MYO. The idea is to use hand gestures for symbolic text input that can work in combination with any wearable technology.
We discussed basing our ideas on the Minuum keyboard for touch surfaces, which takes advantage of the user's prior knowledge of the QWERTY layout and is designed so that the user need not be accurate about the letter they intend to type, owing to good prediction algorithms. The first question that came up was how we could allow that same sloppiness with a similar keyboard operated by in-air gestures. We realized that we would need good predictive algorithms to suggest words based on the current selection, along with auto-complete-like features, to make typing faster.
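To make the prediction idea concrete, here is a minimal sketch of prefix-based word suggestion; the vocabulary, function name, and ranking are hypothetical placeholders, and a real system would use a large frequency-ranked corpus plus fuzzy matching to tolerate sloppy letter selection.

```python
def suggest(prefix, vocabulary, limit=3):
    """Return up to `limit` words starting with `prefix`, best-ranked first."""
    matches = [w for w in vocabulary if w.startswith(prefix)]
    return matches[:limit]

# Hypothetical sample vocabulary, ordered by descending word frequency.
vocab = ["the", "that", "this", "there", "type", "typing"]
print(suggest("th", vocab))  # → ['the', 'that', 'this']
```

Even this naive version shows why prediction helps: after two imprecise selections, the user can often finish the word with a single completion gesture.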
We had questions about how the MYO detects gestures, how many gestures it can uniquely identify, and whether its raw data could let us define gestures whose meaning depends on the nature of the action, such as the speed of the arm movement. These questions helped us realize that we needed to know more about how the MYO processes its data before settling on a gesture vocabulary.
We were also advised to look up techniques such as Dasher to identify more possible design ideas. Our idea had a very simple gesture set: a side slide of the hand for letter-group selection, a closed fist for choosing a letter, an arm slide for word completion, and so on. While identifying potential gestures that would be fast yet not fatiguing, we realized that we might need a different layout of the letters, as well as gestures for capital letters, numerical characters, punctuation, etc.
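The gesture set above can be sketched as a simple dispatch table; the gesture names and actions here are illustrative placeholders of our own, not the MYO SDK's actual event names.

```python
# Hypothetical mapping from recognized gestures to typing actions.
ACTIONS = {
    "side_slide": "select_letter_group",
    "closed_fist": "choose_letter",
    "arm_slide": "complete_word",
}

def handle_gesture(gesture):
    """Map a recognized gesture to a typing action; unknown gestures are ignored."""
    return ACTIONS.get(gesture, "no_op")

print(handle_gesture("closed_fist"))  # → choose_letter
```

Keeping the mapping in one table would also make it easy to experiment with alternative vocabularies (capitals, numbers, punctuation) without changing the recognition code.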
So our next steps are to consider all of these suggestions and feedback, identify more issues, and work out workarounds for them.