Today, I came across an interesting patent from Microsoft titled ‘Multi-touch-movement gestures for tablet computing devices’. On modern touch devices, a user can execute a panning operation by touching the touchscreen with a single finger and then dragging that finger across the surface. Similarly, the user can perform a zooming operation by touching the touchscreen with two fingers and then moving the fingers closer together or farther apart. Sensors such as the accelerometer are also used to augment the experience: for example, when a device is rotated to landscape mode, the UI automatically adjusts itself to the new orientation. Microsoft is now proposing a new way to interact with the device using multi-touch-movement (MTM) gestures.
Functionality is described herein for detecting and responding to gestures performed by a user using a computing device, such as, but not limited to, a tablet computing device. In one implementation, the functionality operates by receiving touch input information from at least one touch input mechanism in response to a user touching the computing device. The functionality also receives movement input information from at least one movement input mechanism in response to movement of the computing device. The functionality then determines whether the touch input information and the movement input information indicate that a user has performed or is performing a multi-touch-movement (MTM) gesture. In some cases, the user performs an MTM gesture by grasping the computing device with two hands and establishing contact with one or more surfaces of the computing device with those hands. The user then moves the computing device in a prescribed path, and/or to achieve a prescribed static posture. For example, the MTM gesture can be defined with respect to any type of tilting, flipping, pivoting, twisting, sliding, shaking, vibrating, and/or tapping motion.
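To make the idea concrete, here is a minimal sketch of how the two input streams might be combined to recognize a "grasp and tilt" MTM gesture. All names, thresholds, and data shapes below are my own illustrative assumptions; the patent does not prescribe an API.

```python
from dataclasses import dataclass

# Hypothetical input snapshots (illustrative only, not from the patent).
@dataclass
class TouchInput:
    contact_count: int   # number of fingers currently touching the device

@dataclass
class MovementInput:
    tilt_deg: float      # tilt angle reported by the accelerometer, in degrees

def is_mtm_gesture(touch: TouchInput, movement: MovementInput,
                   min_contacts: int = 2,
                   tilt_threshold_deg: float = 30.0) -> bool:
    """Return True when the combined inputs match a simple 'grasp and tilt'
    MTM gesture: the user grips the device with at least `min_contacts`
    fingers while tilting it past a threshold angle."""
    grasping = touch.contact_count >= min_contacts
    tilting = abs(movement.tilt_deg) >= tilt_threshold_deg
    return grasping and tilting

# A two-handed grasp plus a pronounced tilt qualifies as an MTM gesture...
print(is_mtm_gesture(TouchInput(4), MovementInput(45.0)))   # True
# ...but movement alone, without the grasp, does not.
print(is_mtm_gesture(TouchInput(0), MovementInput(45.0)))   # False
```

The key point the sketch illustrates is that neither input stream alone triggers the gesture; it is the conjunction of touch contact and device movement that distinguishes an MTM gesture from ordinary handling of the device.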
The functionality can invoke any behavior in response to determining that the user has performed (or is performing) an MTM gesture. More specifically, in one case, the functionality performs the behavior after a discrete gesture has been performed. In another case, the functionality performs the behavior over the course of the gesture, once it is determined that the gesture is being performed. In one case, the functionality can modify a view in response to determining that the user has performed an MTM gesture. In another case, the functionality can invoke any function in response to determining that the user has performed an MTM gesture. In another case, the functionality can perform any type of control operation, e.g., by using the gesture to specify a parameter, a path, a range of values, etc. The control operation may affect any designated item in any manner. Still other types of behavior can be associated with gestures.
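The distinction between responding after a discrete gesture and responding over the course of a gesture could be sketched as a small dispatcher. Again, the class, method names, and example behaviors are my own assumptions for illustration:

```python
from typing import Callable, Dict

class GestureDispatcher:
    """Routes recognized MTM gestures to behaviors, distinguishing the two
    response modes described above: discrete (run once, after the gesture
    completes) and continuous (run throughout the gesture's progress)."""

    def __init__(self) -> None:
        self._discrete: Dict[str, Callable[[], str]] = {}
        self._continuous: Dict[str, Callable[[float], str]] = {}

    def register_discrete(self, gesture: str, behavior: Callable[[], str]) -> None:
        # Discrete behaviors fire once, after the gesture has been performed.
        self._discrete[gesture] = behavior

    def register_continuous(self, gesture: str, behavior: Callable[[float], str]) -> None:
        # Continuous behaviors fire repeatedly while the gesture is in progress,
        # driven by a progress parameter (e.g. fraction of the prescribed path).
        self._continuous[gesture] = behavior

    def gesture_completed(self, gesture: str) -> str:
        return self._discrete[gesture]()

    def gesture_progress(self, gesture: str, fraction: float) -> str:
        return self._continuous[gesture](fraction)

dispatcher = GestureDispatcher()
# Hypothetical bindings: a flip switches the view; a tilt pans it continuously.
dispatcher.register_discrete("flip", lambda: "switch view")
dispatcher.register_continuous("tilt", lambda f: f"pan view by {f:.0%}")

print(dispatcher.gesture_completed("flip"))        # switch view
print(dispatcher.gesture_progress("tilt", 0.5))    # pan view by 50%
```

A continuous handler's progress parameter is one natural way to realize the patent's notion of using a gesture to specify a parameter or a range of values.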
Source: USPTO 8,902,181