The device provides feedback, either tactile or audible, that guides the user's finger along a line of text, and the system generates the corresponding audio in real time.
Roy Shilkrot, a Massachusetts Institute of Technology graduate student in media arts and sciences, and his colleagues tested several variations of their device in a study with vision-impaired volunteers.
One variation included two haptic motors, one on top of the finger and the other beneath it. The vibration of the motors indicated whether the subject should raise or lower the tracking finger.
The researchers also tested the motors in conjunction with an audible cue, a musical tone; a sketch of this guidance logic appears below.
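The following is a minimal sketch of how such two-motor guidance might work. The dead zone, the intensity scaling, and which motor signals which direction are illustrative assumptions, not details reported for the MIT prototype.

```python
# Sketch of a two-motor guidance scheme, assuming image coordinates
# in which y grows downward. All thresholds are assumed values.

DRIFT_TOLERANCE_PX = 6.0   # assumed dead zone around the text baseline
MAX_DRIFT_PX = 30.0        # assumed drift at which feedback saturates

def feedback_cues(finger_y: float, baseline_y: float) -> dict:
    """Map vertical drift into motor intensities and a tone volume."""
    drift = finger_y - baseline_y            # > 0: finger is below the line
    cues = {"top_motor": 0.0, "bottom_motor": 0.0, "tone_volume": 0.0}
    if abs(drift) <= DRIFT_TOLERANCE_PX:
        return cues                          # on the line: no feedback
    intensity = min(1.0, abs(drift) / MAX_DRIFT_PX)
    # Assumed convention: the top motor buzzes to say "raise the finger,"
    # the bottom motor to say "lower it."
    if drift > 0:
        cues["top_motor"] = intensity        # finger too low
    else:
        cues["bottom_motor"] = intensity     # finger too high
    cues["tone_volume"] = intensity          # combined haptic-plus-tone variant
    return cues
```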
There was no consensus among the subjects, however, on which types of feedback were most useful. The researchers are now concentrating on audio feedback, since it allows for a smaller, lighter sensor.
The key to the system's performance is an algorithm, developed by Shilkrot and his colleagues, for processing the camera's video feed.
Each time the user positions his or her finger at the start of a new line, the algorithm makes a host of guesses about the baseline of the letters.
Individually, those guesses are noisy, but most of them tend to cluster together, and the algorithm selects the median value of the densest cluster.
That value, in turn, constrains the guesses that the system makes with each new frame of video, as the user's finger moves to the right, which reduces the algorithm's computational burden.
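One way to picture this clustering-and-tracking step is the sketch below. The bin width, the search-window size, and the `detect_guesses` callable are illustrative assumptions, not details of the team's implementation.

```python
import numpy as np

def densest_cluster_median(guesses: np.ndarray, bin_px: float = 4.0) -> float:
    """Histogram the baseline guesses and return the median of the
    most heavily populated bin, i.e. of the densest cluster."""
    n_bins = max(1, int(np.ptp(guesses) / bin_px) + 1)
    counts, edges = np.histogram(guesses, bins=n_bins)
    k = int(counts.argmax())
    in_bin = guesses[(guesses >= edges[k]) & (guesses <= edges[k + 1])]
    return float(np.median(in_bin))

def track_baseline(frames, detect_guesses, window_px: float = 12.0):
    """Per-frame tracking: the previous estimate restricts the search
    window for the next frame, which is what cuts the per-frame cost."""
    estimate = None
    for frame in frames:
        guesses = np.asarray(detect_guesses(frame), dtype=float)
        if estimate is not None and guesses.size:
            near = guesses[np.abs(guesses - estimate) <= window_px]
            if near.size:
                guesses = near               # discard implausible guesses
        if guesses.size:
            estimate = densest_cluster_median(guesses)
        yield estimate

# Toy check: 40 guesses near y = 120 plus 10 scattered outliers;
# the outliers should not move the estimate.
rng = np.random.default_rng(0)
noisy = np.concatenate([rng.normal(120.0, 2.0, 40), rng.uniform(0.0, 240.0, 10)])
print(round(densest_cluster_median(noisy), 1))   # prints a value near 120
```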
In the study, the algorithms were executed on a laptop connected to the finger-mounted devices.
In ongoing work, Marcel Polanco, a master's student in computer science and engineering, and Michael Chang, an undergraduate computer science major who joined the project through MIT's Undergraduate Research Opportunities Program, are developing a version of the software that runs on an Android phone, to make the system more portable.