Researchers from the University of Glasgow in the UK have developed a method for creating video using single-pixel cameras.
They have found a way to instruct the cameras to prioritise objects in an image, similar to the way our brains decide where to direct attention.
The eyes and brains of humans, and many animals, work in tandem to prioritise specific areas of their field of view.
During a conversation, for example, visual attention is focused primarily on the other speaker, with less of the brain's 'processing time' given over to peripheral details.
The team's sensor uses just one light-sensitive pixel to build up moving images of objects placed in front of it.
Single-pixel sensors are much cheaper than the dedicated megapixel sensors found in digital cameras, and they can build images at wavelengths where conventional cameras are expensive or simply do not exist, such as the infrared or terahertz bands.
The images the system outputs are square, with an overall resolution of 1,000 pixels. In conventional cameras, those thousand pixels would be evenly spread in a grid across the image.
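To see how one light-sensitive pixel can build up a full image, consider the minimal sketch below. It uses Hadamard illumination patterns and an inverse-transform reconstruction, a common textbook approach to single-pixel imaging; the pattern choice, image size and reconstruction step are illustrative assumptions rather than the Glasgow team's published design.

```python
# Minimal single-pixel imaging sketch (assumed Hadamard-pattern approach,
# not the Glasgow team's published design).
import numpy as np
from scipy.linalg import hadamard

N = 32               # image side; N*N = 1024, close to the ~1,000-pixel budget
H = hadamard(N * N)  # each row is one +1/-1 illumination pattern

scene = np.zeros((N, N))
scene[8:24, 8:24] = 1.0  # toy object placed in front of the sensor
x = scene.ravel()

# The single pixel records one number per pattern: the total light
# collected while the scene is masked with that pattern. (Real systems
# split each +1/-1 pattern into two binary masks and subtract.)
measurements = H @ x

# Hadamard matrices satisfy H @ H.T = (N*N) * I, so the image is
# recovered by applying the transpose and rescaling.
recovered = (H.T @ measurements) / (N * N)
print(np.allclose(recovered.reshape(N, N), scene))  # True
```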
This pixel distribution can be changed from one frame to the next, similar to the way biological vision systems work, for example when human gaze is redirected from one person to another.
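One way to picture a pixel distribution that can be redirected from frame to frame is to cluster a fixed budget of sample positions around a movable "gaze" point. The Gaussian fall-off below is an assumed stand-in for the team's actual sampling scheme, chosen only to illustrate the idea.

```python
# Foveated sampling sketch: ~1,000 sample points whose density peaks at a
# movable "gaze" point (the Gaussian fall-off is an assumption for illustration).
import numpy as np

rng = np.random.default_rng(0)

def foveated_samples(gaze, budget=1000, size=256, spread=40.0):
    """Return `budget` (row, col) sample positions clustered around `gaze`."""
    pts = rng.normal(loc=gaze, scale=spread, size=(budget, 2))
    return np.clip(pts, 0, size - 1).astype(int)

# Redirect the "gaze" between frames, as when attention shifts
# from one person to another:
frame1_pixels = foveated_samples(gaze=(64, 64))    # attend top-left
frame2_pixels = foveated_samples(gaze=(192, 192))  # attend bottom-right
print(frame1_pixels.mean(axis=0), frame2_pixels.mean(axis=0))
```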
"Initially, the problem I was trying to solve was how to maximise the frame rate of the single-pixel system to make the video output as smooth as possible," said David Phillips, from Glasgow's School of Physics and Astronomy.
"By channelling our pixel budget into areas where high resolutions were beneficial, such as where an object is moving, we could instruct the system to pay less attention to the other areas of the frame," Phillips said.
"By prioritising the information from the sensor in this way, we have managed to produce images at an improved frame rate but we have also taught the system a valuable new skill.
The research was published in the journal Science Advances.