
Mental Health Europe warns policymakers about the risks of using AI in the healthcare sector
As the AI Action Summit is in full swing in Paris, the independent NGO network Mental Health Europe is warning policymakers about the risks AI poses to mental health. In a study published ahead of the summit, Mental Health Europe says the data used to train AI models reflects existing social biases and often lacks diversity. Systems trained on incomplete or biased datasets could misinterpret symptoms or behaviours common to certain ethnic groups or to people from certain socio-economic backgrounds, the study adds. The report points out that mental health conditions typically manifest in unique ways in individuals, depending on personal experiences, specific factors, and cultural contexts. An approach driven purely by existing data inherently overlooks these nuances, making it difficult for AI to provide reliable support.
"These risks highlight the importance of regulating and monitoring the use of AI in the field of mental health to ensure that these technologies are used ethically and beneficially for all," French MEP Stéphanie Yon-Courtin (Renew Europe), her group's designated representative on digital issues, told Euractiv.
Fuelling discrimination
Along with NGOs such as Amnesty International and a number of AI experts, Mental Health Europe is also raising concerns about sensitive data. Chief among these is the sharing of data, without explicit consent, with third parties such as insurance companies or employers, who could, for example, reject a client or candidate based on AI-generated results. Beyond data sharing, the report also draws policymakers' attention to the way AI systems process this information. In most cases, users are unaware of how their data is being used. A study published in July 2024 by Netskope, a cybersecurity company, found that nearly 35% of the information users share with AI applications consists of regulated personal data.
Mental health data is particularly sensitive because it could ultimately be used for surveillance and control by law enforcement and government agencies, and inaccurate predictions could lead to unnecessary intervention by authorities. The issue of surveillance was a major point of contention between EU member states during the drafting of the AI Act, the EU's legal framework for AI. Despite opposition, French lobbying ultimately paid off: France secured an agreement allowing remote biometric surveillance to interpret emotions or to categorise individuals on religious, sexual or even political criteria for national security purposes.
"These practices risk creating 'a mental health surveillance market that perpetuates and even expands the worst power imbalances, inequalities, and harms of current mental health practices,'" warned Mental Health Europe.
Chatbots making care more impersonal
The report highlights that AI systems, by definition, lack empathy, an essential element of trust and therapeutic relationships. Some chatbots can simulate emotions, but their pre-programmed responses could mislead vulnerable people or produce insensitive or inappropriate replies. For this reason, the network of organisations believes that increased reliance on AI could make care more impersonal. The report also challenges claims by AI companies that their models will make healthcare systems more efficient. For Mental Health Europe, there is little evidence that AI will lead to more effective healthcare systems. As the issue continues to spark debate in the EU, Yon-Courtin also warned against throwing the baby out with the bathwater.
"AI should not be reduced to its risks because, in the field of health, it allows for faster and more precise detection of breast cancer, accelerates the discovery of new medicines, and enables personalised treatments," she said.
On this point, she is joined by French MEP Laurent Castillo (EPP), who believes that AI cannot be viewed solely through the lens of risk.
"Yes, AI in healthcare must go hand in hand with respect for patients' rights, particularly the protection of medical data, but also with the development of our sovereignty in this area," he told Euractiv.
Thomas Mangin