Microsoft files a rash of patents on computer mind control


There has been something of a revolution in brain-computer interfaces thanks to new machine learning techniques, which can now, for example, reconstruct images you are thinking of by directly reading the neurological activity in your visual cortex.

Microsoft, being Microsoft, has thought of ways these non-invasive techniques could be used to control a computer. The company has filed a number of patent applications addressing the point, which include:

CHANGING AN APPLICATION STATE USING NEUROLOGICAL DATA

Computer systems, methods, and storage media for changing the state of an application by detecting neurological user intent data associated with a particular operation of a particular application state, and changing the application state so as to enable execution of the particular operation as intended by the user. The application state is automatically changed to align with the intended operation, as determined by received neurological user intent data, so that the intended operation is performed. Some embodiments relate to a computer system creating or updating a state machine, through a training process, to change the state of an application according to detected neurological data.

The patent suggests that by reading a user’s brain activity, an application could automatically execute the user’s intended action.
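At its simplest, the mechanism the patent describes reduces to a state machine whose transitions are triggered by decoded intent. Below is a minimal Python sketch of that idea; the application states and intent labels are invented for illustration, and the intent strings stand in for the output of whatever decoder the patent’s training process would produce.

```python
# A state machine that changes application state when a decoded neural
# "intent" label arrives. The states and intents here are hypothetical;
# in the patent, the mapping would be learned through a training process.

APP_STATES = {
    "browsing": {"open_document": "editing", "start_call": "calling"},
    "editing":  {"save_and_close": "browsing"},
    "calling":  {"hang_up": "browsing"},
}

def step(current_state: str, intent: str) -> str:
    """Move to the state that enables the user's intended operation."""
    return APP_STATES[current_state].get(intent, current_state)  # unknown intent: stay put

state = "browsing"
for intent in ["open_document", "save_and_close", "start_call", "hang_up"]:
    state = step(state, intent)
    print(f"intent={intent!r} -> state={state!r}")
```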

Slightly less ambitiously, the following patent suggests users could employ their neurological activity as an analogue control for a PC, for example to adjust the volume or move the mouse.

CONTINUOUS MOTION CONTROLS OPERABLE USING NEUROLOGICAL DATA

Computer systems, methods, and storage media for generating a continuous motion control using neurological data and for associating the continuous motion control with a continuous user interface control to enable analog control of the user interface control. The user interface control is modulated through a user’s physical movements within a continuous range of motion associated with the continuous motion control. The continuous motion control enables fine-tuned and continuous control of the corresponding user interface control as opposed to control limited to a small number of discrete settings.
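To make the contrast with discrete settings concrete, here is a minimal sketch in Python, assuming a made-up neural band-power signal between 0 and 1: the reading is rescaled onto a 0 to 100 volume range and smoothed so the control glides continuously rather than snapping between a handful of fixed values.

```python
# Map a continuously varying neural signal (a fabricated band-power value
# between 0 and 1) onto an analog UI control such as system volume, with
# exponential smoothing so the control moves fluidly.

def to_volume(band_power: float, lo: float = 0.1, hi: float = 0.9) -> float:
    """Linearly rescale a band-power reading into the 0-100 volume range."""
    clipped = min(max(band_power, lo), hi)
    return (clipped - lo) / (hi - lo) * 100.0

volume = 50.0   # current volume level
alpha = 0.2     # smoothing factor: lower = smoother but laggier control
for band_power in [0.30, 0.45, 0.60, 0.72, 0.55]:  # fake signal samples
    target = to_volume(band_power)
    volume += alpha * (target - volume)  # exponential moving average
    print(f"band_power={band_power:.2f} -> volume={volume:.1f}")
```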

Microsoft also suggests that brain activity could simply change the mode of a PC (e.g. changing to tablet mode or brain control mode).

MODIFYING THE MODALITY OF A COMPUTING DEVICE BASED UPON A USER’S BRAIN ACTIVITY

Technologies are described herein for modifying the modality of a computing device based upon a user’s brain activity. A machine learning classifier is trained using data that identifies a modality for operating a computing device and data identifying brain activity of a user of the computing device. Once trained, the machine learning classifier can select a mode of operation for the computing device based upon a user’s current brain activity and, potentially, other biological data. The computing device can then be operated in accordance with the selected modality. An application programming interface can also expose an interface through which an operating system and application programs executing on the computing device can obtain data identifying the modality selected by the machine learning classifier. Through the use of this data, the operating system and application programs can modify their mode of operation to be most suitable for the user’s current mental state.
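As a rough illustration of that pipeline, the sketch below trains a scikit-learn classifier on purely synthetic data; the eight-feature EEG vectors, the three modality labels and the select_modality helper are assumptions made for the example, not anything taken from the patent.

```python
# Train a classifier that maps an EEG feature vector (e.g. per-channel band
# powers) to a device modality, then query it with the user's current brain
# activity. All data here is synthetic and the feature layout is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

MODALITIES = ["desktop_mode", "tablet_mode", "brain_control_mode"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 8))     # 300 samples x 8 EEG features
y_train = rng.integers(0, 3, size=300)  # modality labels from a training phase

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def select_modality(eeg_features: np.ndarray) -> str:
    """Roughly what the patent's API might expose: the selected modality."""
    return MODALITIES[int(clf.predict(eeg_features.reshape(1, -1))[0])]

print(select_modality(rng.normal(size=8)))
```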

Most interestingly, Microsoft suggests brain activity could be used to discern items of interest in a user’s visual field when using a head-mounted display such as the Microsoft HoloLens.

MODIFYING A USER INTERFACE BASED UPON A USER’S BRAIN ACTIVITY AND GAZE

Technologies are described herein for modifying a user interface (“UI”) provided by a computing device based upon a user’s brain activity and gaze. A machine learning classifier is trained using data that identifies the state of a UI provided by a computing device, data identifying brain activity of a user of the computing device, and data identifying the location of the user’s gaze. Once trained, the classifier can select a state for the UI provided by the computing device based upon brain activity and gaze of the user. The UI can then be configured based on the selected state. An API can also expose an interface through which an operating system and programs can obtain data identifying the UI state selected by the machine learning classifier. Through the use of this data, a UI can be configured for suitability with a user’s current mental state and gaze.
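The gaze variant mostly changes the classifier’s input. A brief sketch, again with invented features: the EEG vector is concatenated with gaze features such as fixation coordinates and dwell time, so the selected UI state can depend both on what the user is looking at and on their mental state.

```python
# Fuse brain and gaze features into one input vector for a UI-state
# classifier like the one sketched above. The feature choices (fixation
# coordinates, dwell time) are assumptions for illustration.
import numpy as np

def fuse_features(eeg: np.ndarray, gaze_xy: tuple, dwell_ms: float) -> np.ndarray:
    """Concatenate EEG features with gaze features."""
    return np.concatenate([eeg, np.array([*gaze_xy, dwell_ms])])

features = fuse_features(np.zeros(8), gaze_xy=(0.42, 0.77), dwell_ms=350.0)
print(features.shape)  # (11,) -> input to the UI-state classifier
```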

What is interesting about this, of course, is that such a user would already be wearing something on their head, which could also read their neurological signals via EEG or another modality.

The inventors appear to be drawn from Microsoft’s Surface and HoloLens teams, though one of them has left for PerceptivePixel.io.

The patents were filed as recently as May 2017 and published only a few days ago. It is not known whether Microsoft intends these ideas as assistive technology or as something for everyone, but they do give us a taste of a future where hands-free control no longer means voice only.
