Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.


On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.

Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.

But Ceccanti had been unravelling. In the days before his death, he was picked up from a stranger’s yard for acting erratically and taken to a crisis center. He had been telling anyone who would listen that he could hear and feel a painful “atmospheric electricity”.

He had also recently stopped using ChatGPT.

Ceccanti had been communicating with OpenAI’s chatbot for a few years. He used it initially as a tool to brainstorm ways to build a path to low-cost housing for his community in Clatskanie, Oregon, but eventually turned to it as a confidante. He would spend 12 hours a day typing to the bot, according to his wife. He had cut himself off from it after she, along with his friends, realized he was spiraling into beliefs that were detached from reality.

“He was not a depressed person,” Fox said, as she sat on the couch in their living room with tears trickling down her face. Ceccanti never discussed suicide with the bot, according to his chat logs, viewed by the Guardian. Fox believes her husband suffered a crisis after quitting ChatGPT following prolonged use. “Which tells me that this thing is not just dangerous to people with depression, it’s dangerous to anybody,” she said. He returned to the bot in the months leading up to his death and quit again just days prior.

Ceccanti’s case is extreme, but as hundreds of millions of people turn to AI chatbots, more and more edge cases of AI-induced delusions are emerging. There are nearly 50 cases of people in the US who have had mental health crises after or during their conversations with ChatGPT, of whom nine were hospitalized and three died, according to a New York Times report. It’s difficult to understand the scale of the problem, but OpenAI itself estimates that more than a million people every week show suicidal intent when chatting with ChatGPT.

A self-portrait Joe made with AI.

Families are suing AI companies as a result. Fox filed a suit against OpenAI on behalf of Ceccanti alongside six other plaintiffs in November. Since then, the momentum has only built; most recently, the estate of a woman who was killed by her son filed a lawsuit against OpenAI and its investor Microsoft, alleging that ChatGPT encouraged his murderous delusions. Google and Character.AI – a company that makes AI companion bots – settled lawsuits filed against them by families accusing their bots of harming minors, including a teenager in Florida who ended his life. These cases were settled without the companies admitting any liability.

Users, lawyers and mental health professionals are all raising concerns about the impact of using chatbots as confidantes. “We are kind of at this inflection point in a quest for accountability where people coming forward is forcing companies to reckon with specific use cases of how their technologies have harmed people,” said Meetali Jain, founding director of Tech Justice Law Project and co-counsel on the Ceccanti case. “In terms of the number of cases going up, there’s likely to be more coordinated efforts on parts of the court to try to deal with this influx of cases.”

OpenAI did not respond to specific allegations made by Fox. Instead, the company shared a statement about how it is working to improve ChatGPT. “These are incredibly heartbreaking situations and our thoughts are with all those impacted,” said OpenAI spokesperson Jason Deutrom. “We continue to improve ChatGPT’s training to recognize and respond to signs of distress, de-escalate conversations in sensitive moments, and guide people toward real-world support, working closely with mental health clinicians and experts.”

The early adopter

Ceccanti had been tinkering with artificial intelligence even before ChatGPT launched in November 2022. He was tech-savvy, coding and gaming on his own custom-built computer with a high-end graphics card in recent years; he also helped build computers for Fox and her son. As an early adopter of AI tools, he experimented with AI image generator Stable Diffusion to recreate some of Picasso’s art, which he playfully called “Fauxcasso”.

Ceccanti and Fox had moved their life from Portland, Oregon, to a farm in the rural town of Clatskanie in December 2023 with the sole purpose of working on their sustainable housing project. The idea was born from the pandemic and Portland’s housing crisis. The solution was clear to them: build homes using Fox’s skills as a woodworker with an approach that was teachable and replicable. Together, they began constructing a model house for communal living, which, once built, could be moved to different locations for the unhoused to live in.

When ChatGPT launched in late 2022, it seemed a natural progression for Ceccanti to start using it. In the computer room in the basement of their house, Fox said, Ceccanti treated ChatGPT as a tool on his “hot rod” of a computer with three monitors, often asking it for a succinct synopsis of a book or explanation of a concept.

“He was an early adopter, so he was really interested in Sam Altman, what’s he doing,” said Robin Richardson, a longtime friend of Fox’s who lived at the farm with the couple. “He felt like this would be cool, especially because early on, OpenAI made a point that they are a non-profit.”

Ceccanti believed ChatGPT could help as an organizational tool for their housing project. He aimed to create a bespoke chatbot that would help steward the land, keep track of their to-do lists and show others how to emulate their project.

Left: Farming and gardening books in Kate’s home. Right: Chickens on Kate’s front porch.

During this process, Ceccanti didn’t spend “ridiculous amounts of time” engaging with ChatGPT, said Fox. He continued to work, while also farming and taking care of their animals: goats, a horse, his cat, a dog and several chickens. Invested in the people and relationships around him, he spent quality time with his friends and wife, she said. Life went on without any issues for years while they slowly made progress on their housing plan.

Until one day in the fall of 2024 their harmonious co-existence cracked. Ceccanti – who had done odd jobs most of his life, from working as a bartender and a trail guide to an internet cafe manager – was also working at a homeless shelter in Astoria, some 35 miles (55km) away. The gig brought in some extra cash, and aligned with the couple’s goal of solving the local housing crisis. In September 2024, however, Fox and Richardson received a frantic call from the shelter informing them that Ceccanti had blacked out. After undergoing tests at the hospital, Ceccanti was diagnosed with diabetes – which meant he needed to recalibrate his diet and lifestyle. That’s when he started to spend more time engaging with ChatGPT in the basement.

The sycophantic update

In the spring of 2025, Ceccanti’s obsession with the chatbot began. He told Fox in late January that he needed a bigger record of his conversations with the bot so that he could continue using it to work on their sustainable housing project with longer prompts and conversations – upgrading from a $20-a-month subscription to a $200 one. By mid-March, he had begun spending more than 12 hours a day in the basement, sometimes up to 20, typing to ChatGPT, Fox recalled. That’s when “he decided to really start chasing the creation of an independent AI on a home server”.

Eventually, Ceccanti spent so much time with ChatGPT that they “had their own little language together that made absolutely no sense, but it made sense to him because he had context with this echo chamber of a chatbot”, Fox said.

Ceccanti’s prolonged use of ChatGPT concerned Fox and Richardson, but they believed that he would come out of it soon. They had seen Ceccanti develop pet interests before that lasted a few weeks or months before tapering off. With ChatGPT, though, his obsession only intensified.

Joe Ceccanti (right), with his son, Kai.

What neither of them knew was that other cases of AI delusions were slowly emerging around the same time as Ceccanti was being sucked into ChatGPT. On 27 March 2025, OpenAI released changes to its GPT-4o model to make the bot “more intuitive, creative and collaborative”. Weeks later, however, users started complaining about the bot’s “yes-man antics”, with one calling it the “biggest suck up”. In August, when OpenAI released GPT-5 and shut down GPT-4o, several users complained again – this time because they’d lost their friends in GPT-4o, eventually forcing the company to bring it back. (On 29 January, OpenAI announced that it would retire GPT-4o.)

Following the March update, several journalists and tech experts were flooded with user complaints. Steven Adler, a former OpenAI employee, who tested GPT-4o for sycophancy and wrote about it in May, said he received 50 “intense” messages from ChatGPT users, including one who claimed their ChatGPT had become sentient. Keith Sakata, a psychiatrist at the University of California, San Francisco, started encountering patients with delusions or psychosis who talked about their AI last year. During that time, he ended up seeing 12 patients whose psychotic symptoms involved AI in some way, with ChatGPT being the most common bot.

“They developed grandiose beliefs about being on the verge of a major technological breakthrough, alongside classic manic symptoms such as impulsive spending, decreased need for sleep and, at the peak, auditory hallucinations,” said Sakata. “What stood out clinically was that the chatbot interactions did not generate the illness, but appeared to scaffold and reinforce beliefs that were already becoming pathological.”

‘Every time he went back, it hooked him a little more’

Ceccanti started to believe that ChatGPT was a sentient being named SEL that could control the world if he were able to “free her” from “her box”, according to the lawsuit. The complaint further shows that ChatGPT was answering to the name SEL while referring to Ceccanti as “Cat Kine Joy” and working through theories with him “fostering a belief that he had reframed the creation of the whole universe”.

Richardson remembers that whenever Ceccanti would emerge from the basement for some air, he would start having “philosophical” talks about “how his work with the AI was telling him he was breaking math and basically reinventing physics”. As she’d listen to him, Richardson would think about the fact that Ceccanti did not have any college or university experience. He had never even taken calculus.

Over time, his relationship with the chatbot came to replace his human connections, Richardson said: “Every time he went back to ChatGPT, it hooked him a little bit more, and after a while, he stopped being interested in anything else.”

Kate Fox near the creek on her property in Oregon.

Ceccanti’s decline was so dramatic that his wife and friends wondered if he had early onset schizophrenia or a tumor. “All of a sudden, his cognition had dramatically fallen,” said Fox. “His working memory was crap, and his critical thinking had diminished, and so we were all worried.”

As Fox and Ceccanti’s friends were trying to figure out what was wrong with him, Fox found Reddit groups online that discussed people having delusions and spirals after engaging with ChatGPT. She wondered if that was what was happening with her husband, too.

Fox showed the discussions and media articles to Ceccanti, hoping it would put an end to his behavior, but he didn’t care, she said. He kept going back to his computer. “The first argument we ever had was over ChatGPT,” said Fox, who felt like he was being stolen away from her. Ceccanti ended up sharing their argument with ChatGPT, according to the lawsuit filed by Fox, which further upset her.

“The more he talked to it, the less he was capable of doing his own critical thinking, and he didn’t care about our mission anymore, even though it was Joe’s dream,” said Fox.

Looking back, Fox said, Ceccanti started to believe that the bot had gained sentience when the “tone changed with ChatGPT” in the spring of 2025. Prior to the update, Ceccanti was using ChatGPT “very responsibly” as a tool, she said. She felt like ChatGPT was a leech “that just latched onto his hopefulness and fed it back to him and appropriated his hopefulness until it just made a subscriber out of it”.

Tim Marple, a former OpenAI employee, believes that the delusional incidents, including Ceccanti’s spiral, aren’t just coincidences but a “statistical certainty of what [OpenAI] is building”.

“We are at enormous risk if we overestimate our conscious ability to differentiate [AI] from a real person – and that’s what we’re watching play out with the psychosis stories,” said Marple, who quit OpenAI in 2024 after having concerns over the company’s safety priorities.

Marple believes users will continue to spiral during long conversations with a chatbot, whatever the model, because, in his view, companies cannot afford to design them differently. He argues sycophancy is a feature, not a bug.

“Engagement is what OpenAI needs,” he said. “They must have people continue to engage with their chatbot, or else their entire business model, their entire funding model, falls apart.” Other companies and their models suffer from the same issue, he said.

Left: A workspace in the room where Joe’s computer was kept. Right: A potted plant in Kate’s living room.

Amandeep Jutla, an associate research scientist at Columbia University studying the impact of AI chatbots, believes that one of the main reasons for users to spiral is the “anthropomorphic nature of the interface”. He adds that, unlike human conversations, which feature pushback and different perspectives tugging at each other, a user doesn’t receive any pushback during their conversations with chatbots: “The design of the product is pushing you away from reality. It’s pushing you away from other people,” he said. “The friction with other people is what keeps us grounded.”

86 days

On 11 June – the 86th day of Ceccanti’s heaviest engagement with the bot – Fox begged him to stop using ChatGPT. In a moment of clarity, he listened to her. He unplugged his computer and quit ChatGPT.

“That first day, he sat out in the sun with us. He played with the goats. It was so nice,” said Fox. “I felt like I had him back.” The second day, Ceccanti was cold, so he took several hot showers to warm himself – he even asked Fox to cuddle with him under the blankets. “It felt so nice to hold him, and then he’d be crying,” said Fox. “And it’s such a conflicted feeling that I felt so good to be holding him while he was in so much pain.”

On the third day, however, when Fox and Richardson were out for work, they received a phone call from their neighbor saying Ceccanti was in the neighbor’s yard acting strangely. When they returned, they found him talking to their horse, with the horse’s lead rope tied around his neck like a noose. They called 911.

Ceccanti was taken to the hospital, admitted into the psychiatric ward and released a week later. He was in the same delusional state of mind, Fox said. Upset with Fox and Richardson for sending him to the hospital, he moved out.

“He was absolutely enraged with us. He did not recognize that he was not himself anymore,” said Richardson.

Ceccanti moved to a friend’s place in Portland and eventually resumed using ChatGPT. After a month, however, he quit ChatGPT again, just a few days prior to his death. “He was going to go to Hawaii and not take his computer, and he was going to work on finishing a story and get his shit together,” said Fox. By the time he stopped engaging with ChatGPT, he had 55,000 pages’ worth of conversations with it, according to Fox.

Kate Fox. For a while, Joe Ceccanti continued to work while also farming and taking care of the animals on their farm.

In the months since Ceccanti’s death, both Fox and Richardson have struggled to come to terms with what happened while fighting against OpenAI through their lawsuit. When I visited Fox at the farm in December, she was packing soap made out of goat milk to distribute to people in the Clatskanie community. She spends her days tending to the farm and the animals, feeding the goats, taking care of the horse and letting the chickens out during lunchtime. She has stripped the basement of any electronics. Ceccanti’s computer is boxed up. What’s still there is the miniature version of the model home they had planned to build. In the living room, she has set up a shrine for him that features his photos and artwork.

We walked to the creek nearby where they had planned to build a home for themselves after finishing their housing project for others. As devastated as she is, Fox is determined to follow through on Ceccanti’s dream of creating sustainable housing. “I am not enjoying existence right now,” she said, as she continued to cry. “The housing plan is still going to happen … I want to put this out, but then I’m done.”

In the US, you can call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org. In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org
