Written by: Lilian de Jong, co-founder of the Dutch AI Ethics Community ([email protected])
On September 17, the Dutch AI Ethics Community gathered at Neude Library to explore the question: What happens when GenAI meets mental health? Through three rounds of fishbowl conversations, experts and audience members shared their insights, worries and hopes.
The experts
Three experts facilitated the discussions:
- Floortje Scheepers is a professor of Innovation in Mental Healthcare and head of the psychiatry department at UMC Utrecht. Her work focuses on innovation in mental healthcare, including the use of AI to support both patients and professionals. She takes a critical look at AI in psychiatry but also sees opportunities to improve treatment through AI and Big Data. Floortje brought her clinical and research perspective to the fishbowl discussion.
- Jannes Burger has worked as a Rijks-I trainee at the Dutch Data Protection Authority (Autoriteit Persoonsgegevens), where they contributed to a report on the risks of AI chatbot apps as virtual friends and therapists. Jannes has a strong passion for ethical issues around AI and explores how this technology affects our society at large. Jannes brought their policy and ethics perspective to the fishbowl discussion.
- Mourice Schuurmans is the founder of ObsessLess, an AI-powered app designed to make affordable OCD care immediately available to anyone who needs it. Having lived with OCD himself, he knows firsthand how difficult it can be to get timely treatment. He built ObsessLess to give people support in the moments they need it most. Mourice brought the builder and advocate perspective to the fishbowl discussion.
Round 1: Comfort or illusion?
We started with this conversation starter: If someone feels comforted by an AI companion, is that a therapeutic success or a dangerous illusion?
- Some argued: if it feels real, it can work. Human presence isn’t always needed for comfort.
- Others warned: AI only makes guesses based on past data. It cannot provide lived, physical human experience.
- Concerns were raised about over-reliance on “black box” data systems and the risk of losing human connection.
- At the same time, people noted gaps in mental healthcare today: long waiting lists, high costs, and a lack of diversity in psychiatry. For some, AI could offer a first step or temporary relief.
Tension: Is AI comfort “good enough” when human care is out of reach, or do we risk creating a fragile society?
Round 2: AI, vulnerability, and relationships
The next round started with the question: How could AI companions change the way vulnerable people experience relationships?
- Someone pointed out that vulnerability is shaped by social environment and isolation.
- Many pointed out how identities are becoming fragmented (“work self” vs. “friend self”), which AI could amplify.
- Some saw AI as a diagnostic tool, not a therapist. Others stressed that bias in data (often based on white, Western psychology) could cause harm if used as therapy.