Scientists have developed smart glasses with stereo vision that allow users to text a message or type in keywords for internet surfing on a virtual keyboard, and even play a virtual piano.
K-Glass is a line of smart glasses enhanced with augmented reality (AR), first developed by the Korea Advanced Institute of Science and Technology (KAIST) in 2014, with a second version released in 2015.
The latest version, which KAIST researchers are calling K-Glass 3, adds these virtual keyboards, one for entering text and one for playing the piano.
Some head-mounted displays (HMDs), such as Google Glass, use a touch panel and voice commands as their interface, but these are considered mere extensions of smartphones and are not optimised for wearable smart glasses.
Gaze recognition was recently proposed for HMDs, including K-Glass 2, but gaze alone cannot deliver a natural user interface (UI) and user experience (UX) such as gesture recognition, because of its limited interactivity and lengthy gaze-calibration time, which can run to several minutes.
As a solution, Professor Hoi-Jun Yoo and his team from the Electrical Engineering Department developed K-Glass 3 with a low-power natural UI and UX processor to enable convenient typing and screen pointing on HMDs with just bare hands.
This processor is composed of a pre-processing core to implement stereo vision, seven deep-learning cores to accelerate real-time scene recognition within 33 milliseconds, and one rendering engine for the display.
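A 33-millisecond budget per frame corresponds to roughly 30 frames per second. The sketch below illustrates that three-stage flow and its real-time deadline in software; the stage functions are hypothetical stand-ins for the hardware cores, not KAIST's implementation:

```python
import time

FRAME_BUDGET_S = 0.033  # 33 ms per frame, i.e. roughly 30 frames per second

def preprocess(frame):            # stand-in for the stereo-vision pre-processing core
    return {"depth": frame}

def recognise(depth_data):        # stand-in for the seven deep-learning cores
    return {"gesture": "tap", "scene": "keyboard"}

def render(result):               # stand-in for the rendering engine
    pass

def run_frame(frame):
    start = time.perf_counter()
    result = recognise(preprocess(frame))
    render(result)
    elapsed = time.perf_counter() - start
    assert elapsed < FRAME_BUDGET_S, "frame missed the 33 ms real-time deadline"

run_frame(frame=None)  # dummy frame; a real pipeline would pass camera images
```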
The stereo-vision camera, located on the front of K-Glass 3, works in a manner similar to three-dimensional (3D) sensing in human vision.
The camera's two lenses, displaced horizontally from one another much as the left and right eyes are, capture the same objects or scenes from slightly different viewpoints, and the two images are combined to extract the spatial depth information needed to reconstruct 3D environments.
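The standard relationship behind this kind of two-lens depth extraction is that depth is inversely proportional to disparity, the horizontal shift of a point between the two images. A minimal sketch of that conversion follows; the focal length, baseline, and disparity values are made up for illustration, since the article gives no calibration figures:

```python
import numpy as np

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    """Convert a disparity map to metric depth via Z = f * B / d.

    disparity       -- per-pixel horizontal offset between the two views (pixels)
    focal_length_px -- camera focal length in pixels (hypothetical value below)
    baseline_m      -- distance between the two lenses in metres (hypothetical)
    """
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0                      # zero disparity => point at infinity
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Illustrative numbers only; larger disparity means the point is closer.
disparity_map = np.array([[8.0, 16.0], [32.0, 0.0]])
print(depth_from_disparity(disparity_map, focal_length_px=700.0, baseline_m=0.06))
```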
The camera's vision algorithm consumes just 20 milliwatts on average, allowing the Glass to run for more than 24 hours without interruption.
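To see why 20 milliwatts permits day-long use: 24 hours at 20 mW draws only 20 mW x 24 h = 480 milliwatt-hours, well within reach of small wearable batteries. A quick check of that arithmetic, counting only the vision processing and using a hypothetical battery capacity not stated in the article:

```python
AVG_POWER_MW = 20.0   # average vision-processing power reported in the article
BATTERY_MWH = 600.0   # hypothetical battery capacity for illustration only

runtime_hours = BATTERY_MWH / AVG_POWER_MW
print(f"Estimated runtime: {runtime_hours:.1f} hours")  # 30.0 hours, i.e. > 24 h
```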
The research team adopted deep-learning multi-core technology dedicated to mobile devices to recognise users' gestures from the depth information.
This technology greatly improves the Glass's recognition accuracy for images and speech, while shortening the time needed to process and analyse data.
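The basic shape of such gesture recognition is a neural network that maps a depth map to a gesture label. The toy example below shows only that structure; the network size, the random weights, and the gesture classes are all invented for illustration and bear no relation to the models running on K-Glass 3's accelerator cores:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with random stand-in weights; the real accelerator
# runs far larger trained models across its seven deep-learning cores.
W1 = rng.standard_normal((64, 32)) * 0.1   # 8x8 depth patch -> 32 hidden units
W2 = rng.standard_normal((32, 3)) * 0.1    # 3 hypothetical gesture classes

GESTURES = ["point", "tap", "swipe"]       # hypothetical class labels

def classify_gesture(depth_patch):
    """Classify an 8x8 depth patch into one of the toy gesture classes."""
    x = depth_patch.reshape(-1)            # flatten to a 64-vector
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    logits = h @ W2
    return GESTURES[int(np.argmax(logits))]

print(classify_gesture(rng.random((8, 8))))  # random patch, arbitrary output
```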
In addition, the Glass's multi-core processor is advanced enough to stay idle when it detects no motion from users, executing its complex deep-learning algorithms with minimal power only when needed, which lets it achieve high performance at low energy cost.
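One way to read this behaviour is as event-driven gating: a cheap always-on motion check decides whether the power-hungry recognition path runs at all. The sketch below captures that idea; the threshold value and helper names are invented for illustration, not taken from KAIST's design:

```python
import numpy as np

MOTION_THRESHOLD = 5.0   # hypothetical; a real system would tune this per sensor

def motion_detected(prev_frame, frame):
    """Cheap always-on check: mean absolute difference between frames."""
    return float(np.mean(np.abs(frame - prev_frame))) > MOTION_THRESHOLD

def run_recognition(frame):
    """Stand-in for the power-hungry deep-learning pipeline."""
    return "gesture-result"

def process(prev_frame, frame):
    if not motion_detected(prev_frame, frame):
        return None          # stay idle: skip the deep-learning cores entirely
    return run_recognition(frame)

prev = np.zeros((4, 4))
print(process(prev, prev))         # None: no motion, cores stay idle
print(process(prev, prev + 10.0))  # motion detected, recognition runs
```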
"We have succeeded in fabricating a low-power multi-core processor that consumes only 126.1 milliwatts of power with a high efficiency rate," Yoo said.