I made a hand-waving gesture during a conference call and realized that the people dialed in to the call were missing out on my communication. Advances in computer vision have probably produced a system that could identify a human form and build a 3D model of that form's stance and movement. Such a system could go one step further, interpret that data, and spit out phrases.
For example:
- "John shows extreme derision" when you flip the bird.
- "John has no idea" when you shrug.
- "John is talking about this topic holistically" when you make a 'whole-earth' wave of the hands.
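The interpretation step above could be sketched as a simple lookup from recognized gesture labels to narration phrases. This is only an illustrative sketch, assuming the hard part (pose estimation and gesture classification) has already produced a label; the gesture names and function names here are made up for the example.

```python
# Hypothetical mapping from a recognizer's gesture labels to phrases.
# In a real system, the labels would come from a pose-estimation /
# gesture-classification model; here they are assumed inputs.
GESTURE_PHRASES = {
    "flip_the_bird": "{name} shows extreme derision",
    "shrug": "{name} has no idea",
    "whole_earth_wave": "{name} is talking about this topic holistically",
}

def describe_gesture(name: str, gesture: str) -> str:
    """Translate a recognized gesture label into a spoken phrase."""
    template = GESTURE_PHRASES.get(gesture)
    if template is None:
        return f"{name} made an unrecognized gesture"
    return template.format(name=name)
```

The phrases could then be fed to a text-to-speech engine for the dialed-in listeners.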
A portable version could also be useful for blind people. They could wear the camera on the front of their shirts (or in their belt buckles) and it could whisper in their ears.
The first generation (after it was purchased by Microsoft) would only support West-Coast English gestures, but as development progressed it would gain changeable 'culture' templates (kind of like Internet Explorer). Maybe even a GPS module that automatically changed your template based on locale.
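The culture-template idea could be sketched as a per-locale table that the GPS module selects from, since the same gesture can mean different things in different places. The locale codes and meanings below are illustrative assumptions, not real data.

```python
# Hypothetical per-locale gesture templates; a GPS module would pick
# the table based on the wearer's current location. Meanings are
# invented for illustration only.
CULTURE_TEMPLATES = {
    "en-US": {"thumbs_up": "signals approval"},
    "el-GR": {"thumbs_up": "may be giving offense"},
}

def interpret(gesture: str, locale: str) -> str:
    """Look up a gesture in the active culture template, falling back
    to the en-US template for unknown locales."""
    templates = CULTURE_TEMPLATES.get(locale, CULTURE_TEMPLATES["en-US"])
    return templates.get(gesture, "makes an unknown gesture")
```

Swapping the whole table per locale keeps the recognizer unchanged; only the interpretation layer varies.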