
AI model predicts a patient’s condition with almost perfect accuracy based on facial expressions

In a study recently published in the journal Informatics, researchers investigated the use of advanced machine learning methods to recognize facial expressions as indicators of deteriorating patient health.

Their results suggest that the developed Convolutional Long Short-Term Memory (ConvLSTM) model can predict health risks with an impressive accuracy of 99.89%, which could enable earlier detection of deterioration and improve patient outcomes in hospital settings.

Study: AI-based visual early warning system

Background

Facial expressions are crucial to human communication because they convey emotions and nonverbal signals across different cultures. Charles Darwin was the first to explore the idea that facial movements reveal emotions, and later research by Ekman and others identified universal facial expressions linked to specific emotions.

The Facial Action Coding System (FACS), developed by Ekman and Friesen, became an important tool for studying these expressions by analyzing the muscle movements involved. Over time, the field of facial expression recognition (FER) has expanded into areas such as psychology, computer vision, and healthcare.

Various models and databases have been developed to improve automatic facial expression recognition, especially in clinical settings. Recent advances include the use of convolutional neural networks (CNNs) and other machine learning techniques to recognize facial expressions and predict health status.

These developments are particularly beneficial in healthcare, as accurate detection of emotions such as pain, grief and fear can help detect deteriorating patient conditions early, thereby improving care and treatment outcomes.

About the study

The study used a systematic methodology to develop and evaluate a Convolutional Long Short-Term Memory (ConvLSTM) model for recognizing facial expressions, especially those that indicate deterioration of the patient’s condition. The process included three main phases: dataset generation, data preprocessing, and implementation of the ConvLSTM model.

First, a dataset of three-dimensional animated avatars with various facial expressions was created using modern tools. These avatars were designed to mimic the faces of real people with different characteristics such as age, ethnicity and facial features.

Each avatar performed specific expressions related to health deterioration, resulting in 125 video clips. The First-Order Motion Model (FOMM) was then used to transfer these expressions onto static face images from an open-source database, expanding the dataset to 176 video clips.

Facial expression regions indicating whether the patient’s condition is deteriorating. (a) The left avatar shows a neutral expression, delimited by the blue rectangles. (b) The right avatar shows the terminal deterioration state, delimited by the red rectangles.

The dataset was then subjected to thorough preprocessing, which included face detection to isolate and focus on the facial regions. The dataset was split into training data (85%) and test data (15%), and oversampling techniques were applied to balance the training data.
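To make this preprocessing pipeline concrete, here is a minimal Python sketch, assuming OpenCV’s Haar cascade face detector, a stratified 85/15 split, and simple duplication-based oversampling; the clip loader, the 64x64 crop size, and the integer labels are illustrative assumptions, not the authors’ code.

    # Hypothetical preprocessing sketch; load_clips and the label scheme are
    # assumptions, not the published pipeline.
    import cv2
    import numpy as np
    from sklearn.model_selection import train_test_split

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def crop_face(frame):
        """Detect the largest face in a frame and return a resized crop."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
        return cv2.resize(frame[y:y + h, x:x + w], (64, 64))

    # X: (clips, frames, 64, 64, 3) face crops; y: integer expression labels.
    X, y = load_clips(crop_face)  # hypothetical loader over the 176 clips

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.15, stratify=y, random_state=42)

    # Naive oversampling: resample each class until all match the largest one.
    classes, counts = np.unique(y_train, return_counts=True)
    idx = np.concatenate([
        np.random.choice(np.where(y_train == c)[0], counts.max(), replace=True)
        for c in classes])
    X_train, y_train = X_train[idx], y_train[idx]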

Finally, the ConvLSTM model, which integrates convolutional layers with LSTM cells, was proposed and implemented to capture both spatial and temporal dependencies in the video sequences, enabling accurate prediction of facial expressions over time.
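As a rough illustration of such an architecture, the following is a minimal ConvLSTM sketch in Keras; the layer counts, filter sizes, and input resolution are assumptions chosen for brevity and do not reproduce the paper’s exact configuration.

    # Minimal ConvLSTM classifier sketch (illustrative sizes, not the paper's).
    import tensorflow as tf
    from tensorflow.keras import layers, models

    model = models.Sequential([
        # ConvLSTM2D learns spatial filters whose states evolve across frames,
        # capturing both facial structure and its motion over time.
        layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=True,
                          input_shape=(None, 64, 64, 3)),  # (frames, H, W, C)
        layers.BatchNormalization(),
        layers.ConvLSTM2D(16, (3, 3), padding="same",
                          return_sequences=False),  # keep only the final state
        layers.BatchNormalization(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(5, activation="softmax"),  # five expression classes
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=20, batch_size=8, validation_split=0.1)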

Frames of an example video after using FOMM to transfer facial expressions from avatars to real face images.

Results

The study introduced a model designed to detect specific facial expressions associated with deterioration in the patient’s condition. This model was trained and tested on a dataset generated from avatars that simulated five facial expression classes. These expressions were designed to represent a wide range of facial features, skin tones and ethnicities, mimicking patients at risk of clinical deterioration.

The model’s performance was evaluated on several key metrics. It achieved high accuracy (99.8%), precision (99.8%), and recall (99.8%), demonstrating that the model could correctly identify the relevant facial expressions.

Accuracy measures how often the model’s predictions are correct overall, while precision captures how many of the expressions flagged as positive truly are positive (avoiding false positives) and recall captures how many of the true positive expressions are detected (avoiding false negatives).
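As a sketch, these metrics can be computed with scikit-learn on the held-out test split, assuming the hypothetical model and data from the sketches above; macro averaging, which weights all five expression classes equally, is an assumption.

    # Metric sketch over the 15% test split (names follow the sketches above).
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    y_pred = np.argmax(model.predict(X_test), axis=1)  # most probable class

    print("accuracy :", accuracy_score(y_test, y_pred))
    print("precision:", precision_score(y_test, y_pred, average="macro"))
    print("recall   :", recall_score(y_test, y_pred, average="macro"))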

The ConvLSTM model outperformed the other methods evaluated. It was able to recognize patterns in facial expressions over longer periods of time, which is crucial for accurately assessing a patient’s condition.

Further analyses used a confusion matrix and receiver operating characteristic (ROC) curves to evaluate the model’s performance, especially in the presence of imbalanced datasets. The model was also tested on unseen data and showed consistent accuracy across the different expression classes.
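A brief sketch of both analyses, again assuming the hypothetical model above: the confusion matrix exposes which expression classes get confused with one another, while one-vs-rest ROC curves summarize per-class discrimination even when classes are imbalanced.

    # Confusion matrix and one-vs-rest ROC sketch (hypothetical model/data).
    from sklearn.metrics import confusion_matrix, roc_curve, auc
    from sklearn.preprocessing import label_binarize

    probs = model.predict(X_test)            # per-class softmax scores
    y_pred = np.argmax(probs, axis=1)
    print(confusion_matrix(y_test, y_pred))  # rows: true class, cols: predicted

    y_bin = label_binarize(y_test, classes=list(range(5)))
    for c in range(5):
        fpr, tpr, _ = roc_curve(y_bin[:, c], probs[:, c])
        print(f"class {c}: AUC = {auc(fpr, tpr):.3f}")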

These results suggest that the ConvLSTM model is very effective at predicting the deterioration of patients’ condition based on their facial expressions. However, the authors acknowledge that future studies using real patient data are needed to confirm these findings in practical scenarios.

Conclusions

In the study, a deep learning-based model was developed using the ConvLSTM architecture to detect facial expressions indicating deterioration of the patient’s condition, achieving a high accuracy of 99.89%.

The model was trained on a synthetic dataset representing five specific facial expressions associated with the risk of condition deterioration. The system showed promising results, highlighting the potential of using advanced computer vision and machine learning techniques for early detection of patient deterioration.

However, a major limitation of the study is that it is based on synthetic data rather than real patient data. Ethical concerns prevented the collection of real patient data, especially from intensive care and critical care units.

Finally, the researchers emphasized that future work with real-time data is needed to further validate the model and integrate it into other medical assessment systems to improve patient outcomes.

Journal reference:

  • Al-Tekreeti, Z., Moreno-Cuesta, J., Garcia, M. I. M., & Rodrigues, M. A. (2024). AI-based visual early warning system. Informatics, 11(3), 59. DOI: 10.3390/informatics11030059, https://www.mdpi.com/2227-9709/11/3/59
