Researchers at Google have developed a new artificial intelligence system that can accurately predict the risk of heart disease by scanning images of people's retinas.
The discovery may point to more ways to diagnose health issues from retinal images, researchers said.
"Using deep learning algorithms trained on data from 284,335 patients, we were able to predict cardiovascular risk factors from retinal images with surprisingly high accuracy for patients from two independent datasets of 12,026 and 999 patients," Lily Peng from the Google Brain Team wrote in a blog post.
While doctors can typically distinguish between the retinal images of patients with severe high blood pressure and those of normal patients, the algorithm could go further and predict systolic blood pressure to within 11 mm Hg (millimetres of mercury) on average for patients overall, including those with and without high blood pressure.
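That "within 11 mm Hg on average" figure is a mean absolute error. The short Python example below, using made-up numbers rather than any real patient data, shows how such an average error is computed.

```python
import numpy as np

# Hypothetical predicted vs. measured systolic blood pressure (mm Hg).
predicted = np.array([128.0, 141.0, 119.0, 152.0])
measured  = np.array([134.0, 130.0, 125.0, 149.0])

# "Within 11 mm Hg on average" corresponds to a mean absolute error:
mae = np.mean(np.abs(predicted - measured))
print(f"mean absolute error: {mae:.1f} mm Hg")  # 6.5 mm Hg for this toy data
```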
The algorithm was fairly accurate at predicting the risk of a cardiovascular event directly, said Peng.
"Our algorithm used the entire image to quantify the association between the image and the risk of heart attack or stroke," she said.
Given the retinal image of one patient who later experienced a major cardiovascular event (such as a heart attack) and the image of another patient who did not, the algorithm could pick out the heart patient 70 per cent of the time.
This performance approaches the accuracy of other cardiovascular risk calculators that require a blood draw to measure cholesterol.
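Picking the patient who went on to have the event from such a pair is, in effect, a pairwise concordance test, which is equivalent to the area under the ROC curve. The sketch below, using hypothetical risk scores, shows how a figure like that 70 per cent would be computed.

```python
import numpy as np

def pairwise_concordance(scores_event, scores_no_event):
    """Fraction of (event, no-event) pairs in which the model gives the
    higher risk score to the patient who actually had the event.
    Equivalent to the area under the ROC curve (AUC)."""
    wins = sum(e > n for e in scores_event for n in scores_no_event)
    ties = sum(e == n for e in scores_event for n in scores_no_event)
    total = len(scores_event) * len(scores_no_event)
    return (wins + 0.5 * ties) / total

# Toy risk scores; the real evaluation used held-out patients.
event_scores    = [0.81, 0.64, 0.72]   # patients who later had a cardiac event
no_event_scores = [0.55, 0.70, 0.30]   # patients who did not
print(pairwise_concordance(event_scores, no_event_scores))  # about 0.89 here
```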
"We opened the 'black box' by using attention techniques to look at how the algorithm was making its prediction. These techniques allow us to generate a heatmap that shows which pixels were the most important for a predicting a specific cardiovascular risk factor," said Peng.
For example, the algorithm paid more attention to the blood vessels when making predictions about blood pressure. Explaining how the algorithm makes its predictions gives doctors more confidence in the algorithm itself.
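The post does not spell out the exact attention method, so the sketch below instead uses a simple gradient saliency map, a related but different technique, to illustrate the general idea of a per-pixel heatmap. The saliency_heatmap function and toy_model are hypothetical names for this illustration.

```python
import torch
import torch.nn as nn

# Illustrative only: a gradient saliency map, used here as a stand-in for the
# attention technique described in the post. The idea is the same: score each
# pixel by how strongly it influences a given prediction.
def saliency_heatmap(model: nn.Module, image: torch.Tensor, output_index: int = 0) -> torch.Tensor:
    image = image.detach().clone().requires_grad_(True)
    prediction = model(image.unsqueeze(0))[0, output_index]
    prediction.backward()
    # Aggregate gradient magnitude over colour channels -> per-pixel heatmap.
    return image.grad.abs().max(dim=0).values

# A stand-in model (any image-to-risk-factor network would do, such as the
# RetinaRiskNet sketch earlier in this article).
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
heatmap = saliency_heatmap(toy_model, torch.randn(3, 64, 64))
print(heatmap.shape)  # torch.Size([64, 64])
```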
Traditionally, medical discoveries are often made through a sophisticated form of guess and test: making hypotheses from observations and then designing and running experiments to test those hypotheses.
However, with medical images, observing and quantifying associations can be difficult because of the wide variety of features, patterns, colours, values and shapes that are present in real images.
"Our approach uses deep learning to draw connections between changes in the human anatomy and disease, akin to how doctors learn to associate signs and symptoms with the diagnosis of a new disease," Peng said.
This could help scientists generate more targeted hypotheses and drive a wide range of future research.