Ignorance is bliss, and it’s often the most ignorant who make the surest decisions, not being encumbered by the knowledge that they could be wrong.
In many situations this is all well and good, but at the current level of self-driving car development, having a Tesla confidently crash into a fire truck or a white van (both of which have happened) can be rather dangerous.
The issue is that self-driving cars are just smart enough to drive, but not smart enough to know when they are entering a situation beyond their level of confidence and capability.
Microsoft Research has worked with MIT to help cars know exactly when situations are ambiguous.
As MIT News notes, a single perceived situation can receive many different training signals, because the system treats many distinct situations as identical. For example, an autonomous car may have cruised alongside a large vehicle many times without slowing down or pulling over. But in one instance an ambulance, which looks exactly the same to the system, cruises by; the autonomous car doesn't pull over and receives a feedback signal that it took an unacceptable action. Because such circumstances are rare, cars may learn to ignore them, even though they remain important despite their rarity.
The new system, to which Microsoft contributed, recognizes these rare situations with conflicting training signals and can learn that even where it performed acceptably, say, 90 percent of the time, the situation is still ambiguous enough to merit a "blind spot."
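To make the idea concrete, here is a minimal sketch of blind-spot detection from conflicting feedback. This is an illustration of the concept, not the researchers' actual method (which is more sophisticated); the state labels, log format, and threshold are all invented for the example.

```python
from collections import defaultdict

def find_blind_spots(feedback_log, conflict_threshold=0.05):
    """Flag perceived states whose feedback signals conflict.

    A state judged acceptable 90 percent of the time still counts as a
    blind spot if the remaining 10 percent of signals say otherwise.
    """
    tallies = defaultdict(lambda: [0, 0])  # state -> [acceptable, unacceptable]
    for state, acceptable in feedback_log:
        tallies[state][0 if acceptable else 1] += 1

    blind_spots = {}
    for state, (ok, not_ok) in tallies.items():
        unacceptable_rate = not_ok / (ok + not_ok)
        # Any non-trivial share of conflicting feedback marks a blind spot,
        # even when the majority of outcomes looked fine.
        if unacceptable_rate >= conflict_threshold:
            blind_spots[state] = unacceptable_rate
    return blind_spots

# Nine "large vehicle alongside" encounters went fine; the tenth was an
# ambulance the system could not tell apart, so the state gets flagged.
log = [("large-vehicle-alongside", True)] * 9 + [("large-vehicle-alongside", False)]
print(find_blind_spots(log))  # {'large-vehicle-alongside': 0.1}
```

The key point the example captures is that a simple majority vote would hide the ambulance case; counting conflicting signals per perceived state surfaces it instead.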
“When the system is deployed into the real world, it can use this learned model to act more cautiously and intelligently. If the learned model predicts a state to be a blind spot with high probability, the system can query a human for the acceptable action, allowing for safer execution,” said Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory.
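The deployment behavior Ramakrishnan describes can be sketched as a simple run-time check: if the learned model rates the current state a likely blind spot, defer to a human rather than trusting the policy. All names and the probability values here are hypothetical stand-ins for the learned components.

```python
def choose_action(state, policy, blind_spot_prob, human_oracle, threshold=0.5):
    """Act more cautiously in predicted blind spots (illustrative sketch).

    If the learned model predicts this state is a blind spot with high
    probability, query a human for the acceptable action instead.
    """
    if blind_spot_prob(state) >= threshold:
        return human_oracle(state)  # safer execution: ask for the acceptable action
    return policy(state)

# Toy usage: the learned blind-spot model is stubbed out as a dict lookup.
probs = {"ambulance-alongside": 0.9, "empty-highway": 0.05}
action = choose_action(
    "ambulance-alongside",
    policy=lambda s: "keep-driving",
    blind_spot_prob=lambda s: probs.get(s, 0.0),
    human_oracle=lambda s: "pull-over",
)
print(action)  # pull-over
```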
Read much more detail at MIT News here.