Engineers at the University of Washington have developed a way to track people across networked cameras, using an algorithm that trains the cameras to learn one another's differences.
The cameras first identify a person in a video frame, then follow that same person across multiple camera views.
This detailed visual record could be useful for security and surveillance, monitoring for unusual behaviour or tracking a moving suspect, researchers said.
"Tracking humans automatically across cameras in a three-dimensional space is new," said lead researcher Jenq-Neng Hwang, a UW professor of electrical engineering.
With the new technology, a car with a mounted camera could take video of a scene, then identify and track humans and overlay them onto a virtual 3-D map on a GPS screen.
Researchers are developing the system to work in real time, which could help pick out people crossing busy intersections or track a specific person who is evading the police.
"Our idea is to enable the dynamic visualisation of the realistic situation of humans walking on the road and sidewalks, so eventually people can see the animated version of the real-time dynamics of city streets on a platform like Google Earth," Hwang said.
A key obstacle is that the same person can look quite different from one camera to the next, owing to changes in lighting, colour and viewing angle. The researchers overcame this by building a link between the cameras. The cameras first record for a couple of minutes to gather training data, systematically calculating the differences in colour, texture and angle between a pair of cameras for a number of people who walk into the frames, in a fully unsupervised manner without human intervention.
After this calibration period, an algorithm automatically applies those differences between cameras and can pick out the same people across multiple frames, effectively tracking them without needing to see their faces.
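The two-stage process described above can be sketched in code. The following is a minimal illustration, not the researchers' actual method: it assumes each person is represented by a simple colour histogram, that the inter-camera difference can be summarised as an average feature offset learned during calibration, and that re-identification is nearest-neighbour matching after compensating for that offset. The function names (`color_histogram`, `learn_offset`, `match`) are hypothetical.

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Flattened per-channel colour histogram of a person patch (H x W x 3)."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 256), density=True)[0]
             for c in range(3)]
    return np.concatenate(hists)

def learn_offset(pairs):
    """Calibration stage: average feature difference between matched sightings
    of the same people seen by camera A and camera B (unsupervised pairs)."""
    diffs = [color_histogram(a) - color_histogram(b) for a, b in pairs]
    return np.mean(diffs, axis=0)

def match(person_a, candidates_b, offset):
    """Re-identification stage: compensate camera A's feature by the learned
    offset, then return the index of the closest candidate from camera B."""
    target = color_histogram(person_a) - offset
    dists = [np.linalg.norm(target - color_histogram(c)) for c in candidates_b]
    return int(np.argmin(dists))
```

In this toy setup, if camera B systematically renders everyone brighter than camera A, the learned offset absorbs that bias, so the same person can still be matched across the two views without any face information, mirroring the calibrate-then-apply idea in the article.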