While struggling to walk and text on a touchscreen smartphone (HTC HD2 if you're interested), I thought "there must be a better way". With the advent of the Kinect and various motion sensor technologies, I think there is.
I had a dig around on the 'net, and <link1> is the closest thing I could find to what I'm thinking of.
A unit mounted on the wrist, containing the following (a rough sketch of the resulting sensor data follows the list):
3 cameras on the 'palm' side, a la Kinect, to detect the positions of the fingers
Internal solid-state inertial sensors/gyros, for relative motion sensing
Digital compass, for 'absolute' directional alignment
3 'outward' (down, left, right) cameras, for visual overall motion sensing (see also <link2>)
A ring of pressure sensors on the inside (against the skin), detecting tendon movement, for extra redundancy in finger sensing
A ring of small LEDs for calibration (more on that later).
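To make the fusion idea concrete, here's a minimal Python/numpy sketch of the readings such a wrist unit might report, plus a naive complementary-filter step that blends the drifting gyro yaw with the absolute compass heading. Every name, rate and weight here is a hypothetical placeholder, not a real device API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class WristSample:
    finger_tips: np.ndarray      # 5x3 fingertip positions from the palm-side cameras (m, wrist frame)
    gyro: np.ndarray             # angular rates from the solid-state gyros (rad/s)
    accel: np.ndarray            # accelerations from the inertial sensors (m/s^2)
    compass_heading: float       # absolute heading from the digital compass (rad)
    tendon_pressure: np.ndarray  # readings from the ring of skin-side pressure sensors

def update_heading(prev_heading: float, sample: WristSample, dt: float,
                   compass_weight: float = 0.02) -> float:
    """Blend fast-but-drifting gyro integration with the slow-but-absolute compass."""
    gyro_heading = prev_heading + sample.gyro[2] * dt  # integrate yaw rate
    # wrap the compass-vs-gyro disagreement into [-pi, pi] before correcting
    error = np.arctan2(np.sin(sample.compass_heading - gyro_heading),
                       np.cos(sample.compass_heading - gyro_heading))
    # complementary filter: mostly trust the gyro, nudge toward the compass
    return gyro_heading + compass_weight * error
```

The same pattern extends to the cameras and pressure sensors: each stream is noisy or drifty on its own, but they cover each other's weaknesses.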
I envisage this being used in conjunction with a 'glasses'-style display (normally-transparent OLED or something) that also contains motion sensors, cameras and calibration LEDs. The LEDs are so the wrist unit and the headset can 'find' each other and calibrate their relative positions (the way the PlayStation Move uses a camera and its glowing sphere). I think the processor etc. would be a box in your pocket, wired to the wrist and head units (most people would probably have a wrist unit on each hand).
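The 'find each other' step is essentially a pose-from-known-markers problem. Below is a hedged sketch of one way it could work: the headset camera detects the wrist unit's LED ring, and a standard PnP solve (here via OpenCV's solvePnP) recovers where the wrist sits relative to the headset. The LED layout, camera intrinsics and pixel detections are all invented for illustration.

```python
import numpy as np
import cv2

# Known positions of four LEDs on the wrist unit's ring, in the wrist unit's own frame (metres).
led_ring = np.array([[ 0.03,  0.00, 0.0],
                     [ 0.00,  0.03, 0.0],
                     [-0.03,  0.00, 0.0],
                     [ 0.00, -0.03, 0.0]], dtype=np.float32)

# Where those LEDs were spotted in the headset camera image (pixels) - made-up values.
detections = np.array([[350.0, 240.0],
                       [320.0, 215.0],
                       [290.0, 240.0],
                       [320.0, 265.0]], dtype=np.float32)

# Assumed pinhole intrinsics for the headset camera; no lens distortion for the sketch.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(led_ring, detections, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation taking wrist-frame points into the camera frame
    print("wrist unit position in headset-camera frame (m):", tvec.ravel())
```

Run the same trick the other way (wrist cameras spotting the headset's LEDs) and you get a cross-check on the calibration.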
So! The wrist unit(s) allow full virtual input (calibration required) at a fingertip level of detail, with the option to look at what you're doing (or not) via the headset, which (being able to sense motion as well) provides a display that stays fixed relative to whatever you define as "fixed". As all parts know where all the other parts are, if you lift your hand in front of the glasses, it can be shown (and interacted with) virtually in the exact place it is physically. A virtual keyboard, virtual pen & paper, a display screen of whatever size, shape and position you like, 3D modeling: it's all possible.
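Since every part knows its own pose, placing the virtual hand exactly where the physical one is comes down to composing transforms: take the wrist pose and the headset pose in whatever frame you've defined as "fixed", re-express a fingertip in the headset frame so the glasses can draw it in place, then hit-test it against virtual objects. A rough sketch follows; the poses and the virtual key are invented for illustration.

```python
import numpy as np

def pose(yaw_deg: float, t):
    """Build a 4x4 homogeneous transform from a yaw rotation and a translation."""
    a = np.radians(yaw_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = t
    return T

world_T_head  = pose( 10, [0.0, 0.0, 1.6])   # headset pose in the 'fixed' frame
world_T_wrist = pose(-30, [0.3, 0.2, 1.2])   # wrist-unit pose in the 'fixed' frame
fingertip_wrist = np.array([0.05, 0.0, 0.10, 1.0])  # index fingertip in the wrist frame

# fingertip in the headset frame = inv(world_T_head) @ world_T_wrist @ fingertip
fingertip_head = np.linalg.inv(world_T_head) @ world_T_wrist @ fingertip_wrist

# a hypothetical virtual key floating in front of the glasses
key_centre = np.array([0.0, -0.2, -0.35])
pressed = np.linalg.norm(fingertip_head[:3] - key_centre) < 0.02  # within 2 cm counts as a press
print("key pressed:", pressed)
```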
Whether you're standing in an office or walking down the street, this system would give complete freedom of input choice.
Link 1: Image-based data glove
http://www.slidesha...tation-presentation The closest real-world concept. [neutrinos_shadow, Jan 24 2012]
Link 2: TinyMotion
http://www.cs.berke...arch/fp143-wang.pdf Using a camera for motion input (basic level) [neutrinos_shadow, Jan 24 2012]
WTCTTISITMWIBNIIWR?
http://www.youtube....watch?v=NwVBzx0LMNQ [DIYMatt, Jan 24 2012]