One reason self-driving cars and mini-helicopters are not yet delivering online purchases is that autonomous vehicles tend not to perform well under pressure.
A system that can flawlessly parallel park at 5 mph may have trouble avoiding obstacles at 35 mph.
Andrea Censi, a research scientist in MIT's Laboratory for Information and Decision Systems, thinks the solution could be to supplement cameras with a new type of sensor called an event-based (or "neuromorphic") sensor, which can take measurements a million times a second.
In a regular camera, Censi explained, there is an array of sensors and a clock. With a 30-frames-per-second camera, every 33 milliseconds the clock freezes all the pixel values, and the values are then read out in order. With an event-based sensor, by contrast, each pixel acts as an independent sensor. When a change in luminance, in either the plus or minus direction, is larger than a threshold, the pixel signals that it has seen something interesting and communicates this information as an event. It then waits until it sees another change.
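To make the contrast concrete, here is a minimal sketch in Python of the per-pixel behavior described above. The log-luminance model, the class and method names, and the threshold value are illustrative assumptions, not parameters of the actual sensor.

```python
import numpy as np

THRESHOLD = 0.15  # hypothetical contrast threshold, chosen for illustration

class EventPixel:
    """One pixel acting as an independent sensor: it fires an event
    whenever luminance changes by more than a threshold."""

    def __init__(self, initial_luminance):
        self.reference = np.log(initial_luminance)

    def observe(self, luminance, timestamp):
        """Return (timestamp, polarity) if the change exceeds the
        threshold, otherwise None; polarity is +1 or -1."""
        delta = np.log(luminance) - self.reference
        if abs(delta) > THRESHOLD:
            self.reference = np.log(luminance)  # then wait for the next change
            return (timestamp, 1 if delta > 0 else -1)
        return None

pixel = EventPixel(initial_luminance=100.0)
for t, lum in enumerate([100.0, 105.0, 130.0, 128.0, 90.0]):
    event = pixel.observe(lum, timestamp=t)
    if event is not None:
        print(f"event at t={event[0]}: polarity {event[1]:+d}")
```

Unlike a frame camera, no clock gathers these pixels' values together; each pixel reports the moment its own threshold is crossed.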
Censi and Davide Scaramuzza of the University of Zurich present the first state-estimation algorithm that supplements camera data with events reported by an event-based sensor, which was designed by their collaborator Tobi Delbruck of the Institute for Neuroinformatics in Zurich.
The new algorithm's first advantage is that it doesn't have to identify features: every event is intrinsically a change in luminance, which is what defines a feature. And because events are reported so rapidly, as often as every millionth of a second, the matching problem becomes much simpler: there aren't many candidate features to consider, because the robot can't have moved very far between events.
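A small sketch illustrates why rapid reporting shrinks the search: in the microseconds between events, the scene can only have shifted by a pixel or two, so only nearby features are plausible matches. The function name, the feature representation, and the two-pixel bound are hypothetical choices, not details from the paper.

```python
def match_event(event_xy, known_features, max_displacement=2):
    """Return candidate features near an event. Because consecutive
    events arrive microseconds apart, the robot can barely have moved,
    so the search window stays tiny (max_displacement is an assumed bound)."""
    ex, ey = event_xy
    return [
        (fx, fy)
        for (fx, fy) in known_features
        if abs(fx - ex) <= max_displacement and abs(fy - ey) <= max_displacement
    ]

# Only the feature within the tiny window survives as a candidate.
print(match_event((120, 80), [(121, 80), (40, 200)]))  # -> [(121, 80)]
```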