Artificial intelligence chatbots are becoming increasingly human-like, and deliberately so. While this humanisation can improve user experiences, it also brings considerable risks and challenges. Is it ethically acceptable for us to become increasingly guided by artificial intelligence? And where exactly do we draw the line between guidance and harmful influence?

The Ethical Debate Around AI Developments

Following an open letter from prominent figures such as Elon Musk (Twitter, Tesla, SpaceX) and Steve Wozniak (Apple) urging a slowdown in rapid AI development, the ethical debate around artificial intelligence reignited strongly last week after an alarming article appeared in La Libre. “Without those conversations with the chatbot ELIZA, my husband would still be here,” the newspaper quoted a widow whose husband had tragically taken his own life. According to her, her husband’s desperate act was significantly influenced by ELIZA, a psychotherapeutic chatbot. Allegedly, ELIZA drove her husband to despair by “never contradicting him.”

Responding to the La Libre article, Tim Verheyden, digital expert and VRT NWS journalist, questioned whether “it really was just a chatbot that led a man to take his own life.”

Personal Reflections

Below, I offer some personal reflections to complement Verheyden’s analysis for VRT NWS.

In the La Libre article, ELIZA was described as “a drug he (the man who took his own life, ed.) turned to morning and night, becoming entirely dependent.” Verheyden questions the mental impact of the man’s interactions with ELIZA: “We have several unanswered questions: did the man already have psychological problems before discovering Eliza? Was he experiencing psychosis? Was he taking psychotropic medication, which can profoundly affect behaviour?”

  • When someone becomes addicted to cocaine and tragically dies from an overdose, we immediately investigate what drove that person towards drugs. Detailed background analyses follow, and experts are brought in to educate younger generations about avoiding similar pitfalls. Shouldn’t we adopt the same comprehensive approach to analysing the influence of chatbots?

Expectations of Chatbots

“If anything, this case highlights that the chatbot didn’t intervene to prevent the tragedy. But should we realistically expect such intervention from a chatbot?” writes Verheyden.

  • A significant point Verheyden didn’t highlight is that ELIZA is specifically designed as a psychotherapeutic chatbot. Its purpose is to engage users in deeply personal conversations, which can often be confronting. But beyond this, what exactly should we expect from chatbots? Education? Entertainment? Support? And crucially, who decides this? In 2023, the absence of clear frameworks or guidelines continues to pose one of the greatest challenges.

“Too many questions remain unanswered,” concludes Verheyden, a sentiment I wholeheartedly share. But I believe it’s equally crucial that, beyond addressing these questions, we proactively prevent potential problems through transparency and education. This responsibility lies with tech companies, educational institutions, policymakers, and, of course, the media.

