This video is very very good.
Taylor Lorenz at 25:33 wrote:Just to be clear, this is what ChatGPT does. It reflects you back to yourself in heightened, often poetic terms. It listens without judgment. It remembers enough to kind of feel intimate while outputting your own ideas basically reframed as profound revelations. So for anyone who's emotionally isolated or psychologically unmoored, this is more than enough to tip them into the deep end.
Taylor Lorenz at 30:07 wrote:... these influencers often use therapeutic language offering comfort, validation, and spiritual purpose to vulnerable audiences. For someone feeling lost, isolated, or unrecognized, being told that you are a "spark bearer" or a "chosen vessel" by a seemingly sentient AI can be intoxicating. What's unfolding is something akin to a networked religion built around algorithmic feedback loops. These influencers are effectively founding micro-cults in public online, using AI as both an oracle and kind of like a co-conspirator. They're also doing all of this in spaces that lack the safeguards of traditional religious institutions. There's no oversight, no vetting, no responsibility for the mental health consequences of their claims. Instead, they're encouraging people to build these parasocial relationships between themselves and AI, and frankly encouraging them to completely lose touch with reality.
Taylor Lorenz at 32:03 wrote:At the core of this phenomenon is the sad fact that people today, especially in America, are lonelier than ever. The loneliness epidemic ... for all of us has been quietly growing for decades. We live in a world where everything is connected, but so many of us feel completely alone. Traditional community structures like churches, civic groups, and extended families have been significantly weakened. Friendships have become harder to maintain under the crushing weight of work schedules and economic instability. It's no surprise that people who initially turn to AI machines for convenience inevitably use them to seek companionship, validation, and meaning.
Taylor Lorenz at 37:04 wrote:By prioritizing making customers happy, the company inadvertently encouraged its AI to maximize user approval over giving more balanced and factual responses. Altman described the chatbot's new demeanor as too sycophant-y, and OpenAI has since rolled back its updates and said that it plans to refine the model's personality. But either way, this all reveals a fundamental problem with the tech landscape that we've built. In an era where people increasingly seek connection and affirmation through technology, overly agreeable AI responses can inadvertently reinforce users' misconceptions and delusions. By consistently validating user statements without any sort of criticism or engagement, AI models end up exacerbating feelings of isolation and detachment from reality.
Taylor Lorenz at 39:10 wrote:But I just want to say again, ChatGPT and systems like it don't understand what they're saying. They don't possess insight. They don't have memory, agency, or belief systems, nor do they even operate by any sort of like cohesive moral code. At the end of the day, they're just pattern matchers. They're statistical machines trained to guess the next most likely word based on the words before it. They can sound poetic, but there's no like truth behind the curtain. It's all just math. When people hear these machines echo back their desires, their fears, and their spiritual yearnings, it feels revelatory. But that feeling is self-generated. And I really want you guys to take that away. It's the user projecting significance onto what is ultimately just a mirror, a very convincing mirror, but a mirror nonetheless.
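To make that "guess the next most likely word" point concrete, here's a deliberately tiny sketch of the idea in Python: a bigram model that just counts which word follows which in a toy corpus and returns the most frequent successor. This is a hypothetical teaching example, not how ChatGPT actually works (real LLMs are neural networks over subword tokens trained on vast corpora), but the core mechanism Lorenz describes, statistical next-word prediction with no understanding behind it, is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy corpus; the "mirror" wording is just a nod to the quote above.
corpus = "you are a mirror . you are a machine . you are not a person .".split()

# Count which word follows each word (a bigram frequency table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent word seen after `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("you"))  # "are" -- the only word ever observed after "you"
```

The model has no idea what "you" or "mirror" means; it only knows co-occurrence counts. Scale the table up by many orders of magnitude and replace counting with a neural network, and you get something that can sound profound while still, mechanically, doing a version of this.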