Latest Health IT News

WHO: AI Models in Mental Health Services, Research Require Evaluation

According to the World Health Organization, many artificial intelligence models used in the mental healthcare arena have yet to gain credibility and must be further evaluated.


By Mark Melchionna

Although there are potential benefits to using artificial intelligence (AI) to treat mental health conditions, many models require review for bias, inaccuracies, over-optimism, and other flaws, according to a recent study from the World Health Organization (WHO).

According to the WHO, more than 150 million people in the WHO European Region are living with mental health conditions. The COVID-19 pandemic exacerbated this issue in many ways, and as a result, the use of AI solutions to address these conditions has been growing.

AI can be used in the treatment of mental health conditions in numerous ways, including identifying and monitoring these conditions. Further, AI can leverage healthcare data, such as EHRs, medical images, and handwritten clinical notes, to automate routine tasks and support clinical decision-making.

But the growing use of AI in healthcare has highlighted the need to assess how AI is applied in mental health research. Thus, researchers from the Polytechnic University of Valencia, Spain, and WHO/Europe examined the use of AI in mental health disorder studies between 2016 and 2021.

“Given the increasing use of AI in health care, it is relevant to assess the current status of the application of AI for mental health research to inform about trends, gaps, opportunities and challenges,” said David Novillo-Ortiz, PhD, regional adviser on data and digital health at WHO/Europe, and co-author of the study, in a press release.

They found that AI is primarily applied to the study of depressive disorders, schizophrenia, and other psychotic disorders, leaving its use across mental health conditions unbalanced.

Further, AI involves the complex use of statistics, mathematical approaches, and high-dimensional data. The study found major flaws in how the AI applications handled statistics, along with infrequent data validation and a lack of bias-risk evaluation. These shortcomings could lead to bias, inaccurate interpretations of results, and excessive optimism about AI performance.

“Artificial intelligence stands as a cornerstone of the upcoming digital revolution. In this study, we had a glimpse of what is to come in the next few years and will drive health-care systems to adapt their structures and procedures to advance in the provision of mental health services,” said Antonio Martinez-Millana, PhD, assistant professor at the Polytechnic University of Valencia, and co-author of the study, in a press release.

Also, there is a lack of clarity in how AI models are reported, as well as limited communication among researchers, making the models harder to replicate.

“The lack of transparency and methodological flaws are concerning, as they delay AI’s safe, practical implementation. Also, data engineering for AI models seems to be overlooked or misunderstood, and data is often not adequately managed. These significant shortcomings may indicate overly accelerated promotion of new AI models without pausing to assess their real-world viability,” said Novillo-Ortiz, in the press release. 

Similarly, a WHO policy brief from February 2022 detailed various steps that could help mitigate AI issues in senior care. In the brief, researchers described remote patient monitoring for community care and the production of drugs for aging patients as the most common uses of healthcare-focused AI for seniors.

But they also noted that the risks of AI use in senior care mainly relate to ageism. Caregivers can take various steps to lower these risks, such as ensuring optimal design and data collection practices.