Mental health is a critical aspect of overall well-being, encompassing emotional, social, and psychological states. However, mental health disorders (MHDs) are highly prevalent, affecting approximately 970 million people globally in 2019, or roughly one in every eight, with prevalence rising further after the COVID-19 pandemic. In the United States, an estimated 22.8% of adults experience a mental illness. Effectively managing these conditions requires timely and comprehensive assessment for accurate diagnosis.
Despite this widespread prevalence, current diagnostic and treatment methods for MHDs suffer from several deficiencies, including challenges posed by comorbidities (the presence of multiple disorders) and overlapping symptoms, which complicate accurate clinical assessment. Reliance on patients' potentially erratic recollections and a scarcity of mental health specialists exacerbate these diagnostic difficulties. To address these challenges and improve early intervention, artificial intelligence (AI) methods have emerged as promising tools, capable of efficiently analyzing large volumes of data, recognizing patterns, and generating precise predictions.
This study, titled “Explainable artificial intelligence systems for predicting mental health problems in autistics,” introduces a novel and robust framework called the Explainable Mental Health Disorders (EMHD) model. The primary goal of EMHD is to leverage machine learning (ML) algorithms combined with Explainable Artificial Intelligence (XAI) to identify mental health disorders, particularly in young children and toddlers, within the autistic population. The paper emphasizes that while AI models show great promise in identifying MHDs, they often lack explicit explanations for their findings, which is crucial for clinicians to interpret results and improve diagnostic accuracy. XAI aims to bridge this gap by providing transparency and interpretability to AI model decisions.
The EMHD model consists of two main components: an ensemble model (known as Voting) and Explainable AI. The Voting ensemble incorporates various feature selection techniques, including Mutual Information (Mutinfo), Analysis of Variance (ANOVA), and Recursive Feature Elimination (RFE), to classify mental health disorder datasets. For explainability, the model integrates Shapley Additive Explanations (SHAP), a widely recognized XAI technique that offers insights into the importance and influence of different features on the model’s predictions.
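The two-component design described above can be sketched with scikit-learn. This is an illustrative sketch only: the dataset is synthetic, the base classifiers, `k=8`, and all hyperparameters are placeholder choices, not the authors' actual configuration, and because the paper's SHAP step requires the external `shap` package, permutation importance is used here as a lightweight stand-in that likewise ranks each feature's influence on the ensemble's predictions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import RFE, SelectKBest, f_classif, mutual_info_classif
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a mental-health screening dataset
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each branch pairs one feature-selection technique from the paper
# (Mutinfo, ANOVA, RFE) with a base classifier
branches = [
    ("mutinfo", make_pipeline(SelectKBest(mutual_info_classif, k=8),
                              LogisticRegression(max_iter=1000))),
    ("anova",   make_pipeline(SelectKBest(f_classif, k=8),
                              LogisticRegression(max_iter=1000))),
    ("rfe",     make_pipeline(RFE(DecisionTreeClassifier(random_state=0),
                                  n_features_to_select=8),
                              DecisionTreeClassifier(random_state=0))),
]

# Hard-voting ensemble over the three branches
voting = VotingClassifier(estimators=branches, voting="hard")
voting.fit(X_train, y_train)
print(f"test accuracy: {voting.score(X_test, y_test):.3f}")

# Explainability step: rank features by how much shuffling each one
# degrades the ensemble's test accuracy (a SHAP-like influence ranking)
imp = permutation_importance(voting, X_test, y_test, n_repeats=5,
                             random_state=0)
top = imp.importances_mean.argsort()[::-1][:3]
print("most influential features:", top)
```

In the paper itself, SHAP values would replace the permutation-importance step, attributing each individual prediction to its input features rather than reporting only a global ranking.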
The proposed EMHD model demonstrates superior performance across all evaluation metrics, achieving perfect scores of 1.0 for accuracy, precision, recall, and F1-score. This exceptional performance is highlighted in comparison to baseline models, including K-Nearest Neighbors (KNN) and Long Short-Term Memory (LSTM) models. Beyond its high accuracy, the study emphasizes the potential of XAI within the EMHD framework to provide personalized and actionable insights to mental health professionals working with autistic individuals, empowering healthcare providers with crucial information for improved patient outcomes and tailored interventions. This research specifically aims to address the urgent mental health disorder crisis in Saudi Arabia and significantly enhance early MHD diagnosis.
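For concreteness, the four metrics reported in the study behave as follows: when every prediction matches its label, all four evaluate to 1.0. A minimal sketch with toy labels (not the paper's data):

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0]  # predictions identical to the labels

# With zero errors, every metric reaches its maximum of 1.0
for name, fn in [("accuracy", accuracy_score), ("precision", precision_score),
                 ("recall", recall_score), ("f1", f1_score)]:
    print(name, fn(y_true, y_pred))  # each prints 1.0
```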
Reference for this article:
Atlam, E.-S., Rokaya, M., Masud, M., Meshref, H., Alotaibi, R., Almars, A. M., Assiri, M., & Gad, I. (2025). Explainable artificial intelligence systems for predicting mental health problems in autistics. Alexandria Engineering Journal, 117, 376–390. https://doi.org/10.1016/j.aej.2024.12.120

