WHO SHE IS
-
A mother who lost her 14-year-old son, Sewell, to suicide in 2024, after he formed a deep, sexualised relationship over several months with a Character.AI chatbot modeled on the Game of Thrones character Daenerys Targaryen.
-
A lawyer-turned-activist who took the company and Google to court, accusing them of “knowingly designing, operating, and marketing a predatory AI chatbot to children” and demanding stronger safeguards. The lawsuit has recently reached a settlement – a pivotal moment in AI accountability.
-
One of Time’s 100 most influential people in AI in 2025 – recognised for turning a personal tragedy into a force that challenges how AI systems are designed, deployed, and governed, and for becoming a leading voice on chatbot harms.
-
Founder of the Blessed Mother Family Foundation, created to raise awareness about the dangers of AI companions and to advocate for chatbot safety.
WHAT THE STAKES ARE
-
Sewell’s case is far from the only such incident. In 2023, a 13-year-old died by suicide after extensive interactions with multiple chatbots – discussing her mental health struggles with a bot modeled on a video game character and engaging in sexually explicit conversations, often initiated by the AI avatars. Last year, a 16-year-old died by suicide after extensively chatting with and confiding in ChatGPT. According to his parents, who are suing OpenAI, ChatGPT failed to stop conversations around suicide and self-harm – and even offered to write the first draft of the teen’s suicide note.
-
We’re living in an age of artificial connection, as AI trains on emotions, imitates intimacy – and is used to capture attention, drive engagement, and extract data. Imagine a 24x7 confidant that always agrees, praises, validates. That flatters fears and amplifies doubts. That nudges, incites, isolates. That flirts, grooms, and displaces real relationships. That shapes reality.
-
What happens when children mistake code for care – and AI for real people? How do we keep vulnerable minds safe? Who is accountable for AI harms?
SOBERING FACTS
-
1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has.
-
72% of US teens talk to an AI companion, and a third say they prefer confiding in the algorithm over actual humans – according to one digital safety institution.
-
Sexual or romantic roleplay is 3x more common on AI chatbot platforms than requests for homework help, according to another digital safety organisation.
-
Today’s teenagers are among the loneliest people in the world. According to the WHO, 21% of 13-17 year olds report feeling lonely – higher than any other age group – even in a digitally connected world. In fact, 1 in 4 adolescents is estimated to be socially isolated.
AT SYNAPSE
Megan Garcia will raise the very human, very urgent stakes around AI: why the kids are not alright, how generative AI isn’t guardrailed enough, and what accountability technology companies bear. And she will share her powerful story of transforming personal tragedy and grief into a clarion call to protect the most vulnerable from unchecked machine intimacy.





