An AI-driven mobile health algorithm uses a phone camera to measure oxygen levels in blood vessels

WEST LAFAYETTE, Ind. – You may already be using your smartphone for remote medical appointments. Why not use some of its onboard sensors to collect medical data? That’s the idea behind the AI-driven technology developed at Purdue University, which could use a smartphone camera to detect and diagnose medical conditions like anemia faster than, and as accurately as, highly specialized medical devices designed for the task.

“There are at least 15 different sensors inside your smartphone, and our goal is to leverage those sensors to enable people outside of a doctor’s office to access healthcare,” said lead researcher Young Kim, professor and associate research director in Purdue’s Weldon School of Biomedical Engineering. “To the best of our knowledge, we believe we have demonstrated the fastest hemodynamic imaging available using an off-the-shelf smartphone.”

While a smartphone camera is handy, it only captures measurements of the red, green, and blue wavelengths of light in each pixel, limiting its medical use. Hyperspectral imaging can capture all wavelengths of visible light in each pixel and could be used to detect various skin and retinal diseases, as well as some types of cancer. Researchers are exploring applications of hyperspectral imaging in healthcare, but most of the work is aimed at improving specialized devices that are relatively bulky, slow and expensive. By combining deep learning and statistical techniques with their knowledge of light-tissue interactions, Purdue researchers are able to reconstruct the full spectrum of visible light in every pixel of an ordinary smartphone camera image. The patent-pending approach, developed in a lab with expertise in mobile health, could improve access to healthcare.

As reported in PNAS Nexus, the team tested their method against commercially available hyperspectral imaging devices, collecting information about the movement of blood oxygen in the eyelids of volunteers, in models designed to mimic human tissue, and in a chick embryo. The results show that the smartphone camera generated hyperspectral information faster and more cheaply than specialized equipment, and just as accurately. The smartphone approach can produce in a single millisecond images that traditional hyperspectral imaging would take three minutes to capture.

Kim said the work reported in PNAS Nexus focused on developing the hyperspectral imaging algorithm for smartphones and not on specific applications. But in other studies, the team used their approach to measure blood hemoglobin for tissue oximetry and inflammation. Kim’s lab used a computational approach that the researchers dubbed “hyperspectral learning.”

The process begins with a smartphone camera in an ultra-slow-motion setting that produces video at around 1,000 frames per second. Each pixel in each frame contains information about the color intensity of red, green, and blue. The information is fed through a machine learning algorithm that derives full-spectrum information for each pixel, which is then used to measure blood flow, specifically the amount of oxygenated and deoxygenated hemoglobin in each pixel. These hemodynamic parameters can also be used to create images and videos showing their subjects’ oxygen saturation over time.
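The last step, going from a per-pixel spectrum to oxygenated and deoxygenated hemoglobin, can be sketched with a standard two-chromophore Beer-Lambert fit. The sketch below is illustrative only: the extinction-coefficient values are rough, representative numbers rather than the tabulated data or the actual procedure used in the Purdue study.

```python
import numpy as np

# Representative molar extinction coefficients for oxy- and deoxyhemoglobin
# at a few visible wavelengths (illustrative values, not the study's data).
wavelengths_nm = np.array([500, 530, 560, 590, 620])
eps_hbo2 = np.array([20035.0, 39037.0, 32613.0, 14677.0, 1506.0])  # HbO2
eps_hb   = np.array([20862.0, 39036.0, 53412.0, 21144.0, 5149.0])  # Hb

def hemodynamics_from_spectrum(absorbance):
    """Least-squares fit of one pixel's absorbance spectrum to a
    two-chromophore Beer-Lambert model; returns relative HbO2 and Hb
    concentrations plus oxygen saturation (StO2)."""
    E = np.column_stack([eps_hbo2, eps_hb])          # wavelengths x chromophores
    conc, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    hbo2, hb = np.clip(conc, 0.0, None)              # concentrations are nonnegative
    sto2 = hbo2 / (hbo2 + hb)
    return hbo2, hb, sto2

# Synthetic check: a spectrum built from known concentrations is recovered.
spectrum = eps_hbo2 * 0.7 + eps_hb * 0.3
hbo2, hb, sto2 = hemodynamics_from_spectrum(spectrum)
print(round(sto2, 2))  # recovers StO2 = 0.7
```

Applying such a fit independently to every pixel of every frame is what turns a reconstructed spectral video into the oxygen-saturation maps described above.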

As with traditional machine learning, the team trains its algorithm on a dataset: they feed it smartphone images and the corresponding hyperspectral images, and tweak the algorithm until it can predict the correct relationship between the two. However, by building the algorithm around equations derived from tissue optics — an approach sometimes called “informed learning” — the researchers require a far smaller training dataset.
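One common way to realize this kind of informed learning is to add a physics-based penalty to the training loss, so that reconstructed spectra are pushed toward combinations of known chromophore absorption spectra. The sketch below is a minimal illustration of that general idea, assuming a generic basis of chromophore spectra and an arbitrary weight `alpha`; it is not the loss function from the paper.

```python
import numpy as np

def informed_loss(pred_spectrum, true_spectrum, basis, alpha=0.1):
    """Data-fit loss plus a physics prior. 'basis' holds known chromophore
    absorption spectra as columns; spectra far from its span are penalized.
    The weight alpha is an illustrative choice."""
    data_term = np.mean((pred_spectrum - true_spectrum) ** 2)
    # Project the prediction onto the chromophore basis; the residual
    # measures how "unphysical" the predicted spectrum is.
    coeffs, *_ = np.linalg.lstsq(basis, pred_spectrum, rcond=None)
    physics_term = np.mean((basis @ coeffs - pred_spectrum) ** 2)
    return data_term + alpha * physics_term

# A spectrum that lies in the span of the basis incurs no physics penalty.
basis = np.column_stack([[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])
physical = basis @ np.array([0.5, 0.5])
loss = informed_loss(physical, physical, basis)
print(loss)  # 0.0 for a perfect, physically consistent prediction
```

Because the physics term constrains the space of plausible outputs, the data term has less to pin down on its own, which is why physics-informed models can typically get by with far less training data.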

And while traditional hyperspectral imaging devices must collect massive amounts of data, limiting either spectral or temporal resolution, the team’s approach starts with video files hundreds of times smaller than hyperspectral image files, allowing it to maintain both high spectral and high temporal resolution.
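The size gap is easy to see with back-of-envelope arithmetic. The channel counts below are illustrative assumptions, not figures from the paper: an RGB frame stores 3 values per pixel, while a dense hyperspectral cube might store hundreds of spectral bands per pixel.

```python
# Per-frame data volume: RGB video vs. a hyperspectral cube
# (illustrative channel counts, not figures from the study).
pixels = 1_000_000                 # a 1-megapixel frame
rgb_channels = 3                   # red, green, blue
hyperspectral_bands = 300          # e.g. ~1 nm steps across 400-700 nm

rgb_values = pixels * rgb_channels
cube_values = pixels * hyperspectral_bands
ratio = cube_values / rgb_values
print(ratio)  # the hyperspectral cube is 100x larger per frame
```

Under these assumptions a hyperspectral cube carries 100 times more raw data per frame, which is why capturing it directly forces a trade-off against frame rate that the smartphone-video approach avoids.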

“Usually there’s a trade-off in gathering that information in an efficient way. But with our approach, we have high spatial and spectral resolution at the same time,” said Yuhyun Ji, first author and a graduate student in Kim’s lab, which is currently working to apply this method to other mobile health applications such as cervical colposcopy and retinal fundus imaging.

Kim disclosed his innovation to the Purdue Research Foundation Office of Technology Commercialization, which has applied for a patent to protect the intellectual property. Industry partners interested in advancing or commercializing the innovation should contact Patrick Finnerty, Senior Business Development Manager, at [email protected], referencing track code 2019-KIM-68586.

The research, “mHealth Hyperspectral Learning for Instantaneous Spatiospectral Imaging of Hemodynamics,” was prepared with support from the National Institutes of Health and the Ralph W. and Grace M. Showalter Trust.

Author/Media Contact: Mary Martialay, [email protected]

Source: Young Kim, [email protected]