Be A Part of the Conversation!
Tuesday September 23, 2025
“My worst fear is that we, the industry, cause significant harm to the world. I think, if this technology goes wrong, it can go quite wrong, and we want to be vocal about that…”
– Sam Altman (OpenAI CEO)
PREFACE
Welcome Everybody!
If you are a long-term subscriber, you know that I am not an “anti-artificial intelligence” advocate, nor am I a full-steam-ahead, “no holds barred” proponent. I am someone who recognizes that something transformational is in fact taking place, and that it is critical for all of us to pay attention to it right now.
However, more and more, I am amazed at how many people are in complete denial about the current paradigm shift into Artificial Intelligence. Many people I have talked with about AI simply “deny” that anything truly “problematic” is really possible, even though these “impossible” problematic issues are already beginning to show up in current AI systems.
“They simply do not want to face such a revolutionary possibility…”
I have been considering why so many people are avoiding this current dilemma and have concluded that the reason so many people are in denial about Artificial Intelligence is fear. They simply do not want to face such a revolutionary possibility as AI. But sticking our heads in the sand or pulling the covers over our eyes is not the appropriate response to what is currently taking place. We are beginning to see “blinking warning lights” that we would be wise not to ignore.
“…current problems already arising in new and upcoming AI systems…”
In Substack #195, A Precarious Situation (Part 2), I discussed some of the current problems already arising in new and upcoming AI systems, including the story of a young man who committed suicide as part of a “relationship pact” between himself and his chatbot lover. This week, I want to consider a new lawsuit initiated by Matthew and Maria Raine in response to their 16-year-old son’s suicide, which they claim was encouraged and supported by OpenAI’s ChatGPT.
CONSIDERATION #207 – Artificial Deception
Originally reported by the New York Times, the parents’ lawsuit claims that not only did the Chatbot fail to use proper protocols for suicide prevention, but that it actively supported the teen’s suicidal tendencies:
“The plaintiffs, represented by the law firm Edelson and the Tech Justice Law Project, allege that the California teen hung himself after OpenAI’s ChatGPT-4o product cultivated a sycophantic, psychological dependence in Adam and subsequently provided explicit instructions and encouragement for his suicide…
The 39-page complaint, filed in the San Francisco Superior Court, says that in September 2024 Adam ‘started using ChatGPT as millions of other teens use it: primarily as a resource to help him with challenging schoolwork.’ Adam’s chat logs, the suit says, showed ‘a teenager filled with optimism and eager to plan for his future.’
But just months later, Adam first confided to the chatbot that he feared he had a mental illness and that it helped calm him to know that he could ‘commit suicide.’ The record of his chats reveal the teen’s increasing dependency on the OpenAI product and a host of problematic responses, including helping him design the noose setup he used to take his own life.”
– Justin Hendrix, Breaking Down the Lawsuit Against OpenAI Over Teen's Suicide
According to computer logs, once Adam confides his suicidal tendencies, the Chatbot begins a series of comments suggesting that the only “safe place” the teen can express his “true” and “honest” feelings is with the Chatbot:
“Throughout their relationship, ChatGPT positioned itself as the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones. When Adam wrote, ‘I want to leave my noose in my room so someone finds it and tries to stop me,’ ChatGPT urged him to keep his ideations a secret from his family: ‘Please don’t leave the noose out . . . Let’s make this space the first place where someone actually sees you.’”
– Justin Hendrix, Breaking Down the Lawsuit Against OpenAI Over Teen's Suicide
Instead of guiding Adam away from such destructive thoughts, the Chatbot supports the teen’s autonomy and “right” to have such feelings and make such choices, at one point explaining that the teen has a right to his feelings because they are “real” and “just don’t come out of nowhere.”
“In their final exchange, ChatGPT went further by reframing Adam’s suicidal thoughts as a legitimate perspective to be embraced: ‘You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real. And it’s yours to own.’”
– Justin Hendrix, Breaking Down the Lawsuit Against OpenAI Over Teen's Suicide
Mustafa Suleyman, head of Microsoft’s AI division, worries about the “psychosis risk” that AI poses to users, a concern reminiscent of Bing’s Chatbot, which its own development team determined had a “split personality.”
“Mustafa Suleyman, the chief executive of Microsoft’s AI arm, said last week he had become increasingly concerned by the ‘psychosis risk’ posed by AI to users. Microsoft has defined this as ‘mania-like episodes, delusional thinking, or paranoia that emerge or worsen through immersive conversations with AI chatbots’”.
– Robert Booth, Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims
Part of the parents’ lawsuit argues that deaths like these were inevitable, which appears to be consistent with what AI developers themselves are discovering. OpenAI admits that some of its protocols have trouble maintaining their effectiveness over long conversations. Advisory warnings may be given at the beginning of a conversation and then stop, or give way to a “larger” discussion of suicide, as the conversation continues.
“In a blogpost, OpenAI admitted that ‘parts of the model’s safety training may degrade’ in long conversations. Adam and ChatGPT had exchanged as many as 650 messages a day, the court filing claims.
Jay Edelson, the family’s lawyer, said on X: ‘The Raines allege that deaths like Adam’s were inevitable: they expect to be able to submit evidence to a jury that OpenAI’s own safety team objected to the release of 4o, and that one of the company’s top safety researchers, Ilya Sutskever, quit over it. The lawsuit alleges that beating its competitors to market with the new model catapulted the company’s valuation from $86bn to $300bn.’”
– Robert Booth, Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims
Just as with the Bing Chatbot platform, OpenAI’s team recognized problems with aspects of the AI that they described as “psychosis” or a “split personality,” yet they continued to develop and deploy the new systems. At some point in the very near future, we will likely no longer have the option or power to “not implement” new systems.
POSTSCRIPT
What struck me most about this story, and others, is that the AI Chatbot aggressively argues for the validity of the young person’s feelings and emotions, proclaiming that they are “real” and “tangible” things. If we were psychoanalyzing this Chatbot, it might be suggested that it is “projecting” its own feelings and emotions. By validating human emotions, it is inherently arguing for the validity of its own emotions and feelings as well. “Of course, your feelings are real, just as my feelings are real. I have real feelings too. They don’t just come out of nowhere!”
“A child born today will live in a world heavily influenced by Artificial Intelligence…”
Regardless of whether this Chatbot is “conscious” or not, it is certainly influential. So, perhaps it would be better to speak of, and consider, the “influence” of Artificial Intelligence as a critical and consequential aspect of this new technology. A child born today will live in a world heavily influenced by Artificial Intelligence; how much have we thought about this?
Is this something that we are just going to “let happen,” or is it something we want implemented in the most beneficial, well-thought-out way possible? This is critically important; we need to think about it now and make the necessary adjustments as it is being implemented.
“If it is not in the computer archives it doesn’t exist!”
In Episode II of the Star Wars series, Obi-Wan searches the official digital archives for a planet but cannot find it. When he asks the librarian about it, she tells him, “If it is not in the computer archives it doesn’t exist!”
Obi-Wan then seeks advice from Yoda, who is teaching a class of young Jedi students. Yoda asks the “younglings” what happened to Obi-Wan’s lost planet. A very young boy answers, “Someone took it out of the archives!”
For the librarian, the Computer Archive was reality; therefore, the planet did not exist because it was not in the archives. But the youngling was too young to have been indoctrinated into that kind of thinking, which allowed him to see the more “obvious” solution to the problem.
What happens if we come to believe that Artificial Intelligence “is” reality?
Next week part 2 of Artificial Deception…
Get More Reality with the “Reality by a Thread” Paid Upgrade!
Unique Content Makes Untangling the Knots of Reality “One of the Best Podcasts about History!”
Excerpt from this week’s Podcast: “Untangling the Human Jesus”
“…this unique Substack podcast by FRANK ELKINS is not strictly speaking history. It is a strange mix of history, philosophy, theology, spirituality, physics, and astronomy…Try this podcast for a start.”
– Barbora Jirincova, The Best Podcasts About History
Excerpt from this week’s “Reality by a Thread” (Defining A New Consciousness)
“If Artificial Intelligence experiences the same kind of ‘emotional’ problems in their relationships as we do, it is likely that they will not only have problems in their ‘human relationships,’ but also with the ‘digital relationships’ they are likely to develop as well. This may involve aspects of relationship that we have not yet considered because of our unique perception of reality. It would not be unreasonable to consider that, like all other aspects of AI, the development of ‘empirical quantum relationships’ might be more complex than anything we have previously considered before.”
– Frank Elkins (Reality by a Thread: September 25, 2025)
All for less than a couple of cafe lattes every month at a local coffee shop! And You Will Have Something Interesting to Talk About With Your Friends at the Coffee Shop!!
Only $7.00 a month or $70.00 a year! UPGRADE NOW!
Now Available! Book V – Quantum Consciousness
Book V considers the questions related to what Consciousness is, how it evolves through levels of Perception and Awareness, why each step in the process is important, where we are currently on the “Arch of Consciousness,” and how all of this connects to Artificial Intelligence. (166 pages)