CSL professor researches video monitoring of patients

7/20/2020 Lizzie Roehrs, CSL


During the ongoing COVID-19 pandemic, a major concern for healthcare professionals has been a shortage of space to treat patients. In states with a high number of cases, intensive care units (ICUs) continue to fill to capacity, leaving healthcare workers exhausted and forcing patients to be admitted elsewhere. CSL and ECE research professor Narendra Ahuja and his team are working to develop a way to monitor a patient's condition through video without using special instruments.
CSL professor Narendra Ahuja

“Another dimension of our work is to do this all in an informal, non-professional setting,” says Ahuja. “This would include the patient doing the checking at home or the doctor monitoring him/her remotely via a videoconference, using commonly available smartphones.”

Ahuja and his team are working with a group of clinicians who deal with the challenges posed by COVID-19 on a daily basis. These challenges include patient-to-physician ratios, a lack of available hospital beds, and the risk of spreading the virus via close contact.

“Our solutions could help clinicians discharge a patient who is not showing symptoms, knowing that this person could be monitored remotely and called back if our solutions predict in time that their health might deteriorate,” says Ahuja.

The initial focus of the research is COVID-19, but the methodology will be more general and useful for other diseases as well, including cardiopulmonary ailments.

“In addition to helping with tele-health, the AI methods we develop are likely to push the state of the art of AI itself through development of new approaches, algorithms, and protocols,” says Ahuja.

The workhorses behind these developments are algorithms from the fields of computer vision, audio analysis, and machine learning. The overall objective of these fields is to use cameras and microphones to derive information about the world, much as humans use their eyes and ears to understand their surroundings. Ahuja says these fields have been the research focus of the computational side of his project team. The aim in this case is to assess and monitor the health of a patient.

Through audio data, various sounds from the human body can be used to assess health. For example, a cough can signal many things through its length, intensity, frequency, and other properties. Methods have already been developed for automated recognition of respiratory diseases such as pneumonia, asthma, croup, and chronic obstructive pulmonary disease (COPD). Acoustic features such as articulation rate, effort, and auditory roughness can give clues to the health of a patient, as can the pronunciation of vowel sounds and other speech patterns.
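To illustrate the kind of acoustic features mentioned above, here is a minimal sketch — not the team's actual pipeline — that computes two simple descriptors of a sound, its intensity (RMS energy) and its dominant frequency, using only NumPy on a synthetic, cough-like signal. The function name and the test signal are illustrative assumptions.

```python
import numpy as np

def acoustic_features(signal, sample_rate):
    """Return (RMS intensity, dominant frequency in Hz) of a mono signal."""
    rms = float(np.sqrt(np.mean(signal ** 2)))            # loudness proxy
    spectrum = np.abs(np.fft.rfft(signal))                # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    dominant = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin
    return rms, dominant

# Synthetic "cough-like" burst: a 440 Hz tone under a decaying envelope.
sr = 8000
t = np.arange(sr) / sr                                    # one second of samples
signal = np.exp(-4 * t) * np.sin(2 * np.pi * 440 * t)
rms, dominant = acoustic_features(signal, sr)
```

A real classifier for diseases such as pneumonia or COPD would feed many such features (and richer ones, like spectral roughness) into a trained machine-learning model rather than inspecting them directly.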

Video monitoring can measure physiological parameters such as blood pressure, heart rate, and respiratory rate. For example, heart rate and the lengths of individual heartbeats can be estimated from measurements of subtle head motions caused in reaction to blood being pumped into the head, from hemoglobin information via observed skin color, and from periodicities observed in the light reflected from skin close to the arteries or facial regions. Aspects of pulmonary health can be assessed from movement patterns of the chest, nostrils, and ribs.
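The periodicity idea behind video-based heart-rate estimation can be sketched as follows. This is a hypothetical, simplified illustration rather than the team's method: the mean skin-region intensity of each video frame is treated as a time series, and its dominant frequency within a plausible pulse band (here assumed to be 0.7–4 Hz) is read off via an FFT. Real systems must additionally handle motion, lighting changes, and skin detection; the trace below is synthetic.

```python
import numpy as np

def heart_rate_bpm(intensity_trace, fps, low_hz=0.7, high_hz=4.0):
    """Estimate heart rate (beats/min) from a per-frame skin-intensity trace."""
    trace = intensity_trace - np.mean(intensity_trace)  # remove the DC offset
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)       # plausible pulse band
    peak = freqs[band][np.argmax(spectrum[band])]       # strongest periodicity
    return 60.0 * peak

# Synthetic 10-second trace at 30 fps: a 72 bpm pulse (1.2 Hz) plus noise.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
rng = np.random.default_rng(0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
bpm = heart_rate_bpm(trace, fps)
```

Restricting the search to the 0.7–4 Hz band is what makes the estimate robust to slow drifts (lighting) and high-frequency noise that fall outside physiologically plausible heart rates.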

Ahuja’s research could have vital impacts on the current pandemic and might also mean important changes in the medical field as a whole.



This story was published July 20, 2020.