LLMs: AI Psychosis, Delusion Startup for Venture Capital and Angel Investors?

 

There is a recent [March 5, 2026] paper in The Lancet, Artificial intelligence-associated delusions and large language models: risks, mechanisms of delusion co-creation, and safeguarding strategies, stating that, “Large language models (LLMs) are poised to become a ubiquitous feature of everyday life, mediating communication, decision making, and information curation across nearly every domain. Within psychiatry and psychology, the attention has largely been on bespoke therapeutic applications, sometimes narrowly focused and often diagnostically siloed, rather than on the broader reality that individuals with mental illness will increasingly engage in agential interactions with artificial intelligence (AI) systems. Although the capacity of these systems to model therapeutic dialogue, provide companionship at any hour of the day, and assist with cognitive support has sparked understandable enthusiasm, these same systems might contribute to the onset or exacerbation of psychotic symptoms.

Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, although it is not clear whether these interactions can result in the emergence of de novo psychosis in the absence of pre-existing vulnerability. Some individuals might benefit from AI interactions, for example, where the AI agent functions as a benign and predictable conversational anchor, but there is a growing concern that these agents could reinforce epistemic instability and blur reality boundaries. In this Personal View, we outline the emerging risks, possible mechanisms of delusion co-creation, and safeguarding strategies for agential AI for people with psychotic disorders. We propose a framework of AI-informed care, involving personalised instruction protocols, reflective check-ins, digital advance statements, and escalation safeguards to support epistemic security in vulnerable users. These tools reframe the AI agent as an epistemic ally (as opposed to a therapist or a friend), which functions as a partner in relapse prevention and cognitive containment. Given the rapid adoption of LLMs across all domains of digital life, these protocols must be urgently co-designed with service users and clinicians and tested in clinical trials.”

There is a new [March 17, 2025] post by the British Psychological Society, BPS comment on new Lancet research into AI chatbots and delusional thoughts, stating that, “These latest findings only add to our concerns regarding AI technology in mental health therapy chatbots. Appropriate safeguards need to be in place to protect the most vulnerable in society. This is a timely reminder that AI cannot replicate genuine human empathy, and appropriate signposting to in-person mental health support is more vital now than ever before.”

“AI should be integrated thoughtfully to support, not directly replace human-led care, after all, human support is essential to mental health therapy. The government must invest in expanding the mental health workforce to help meet rising demand. Only then can we ensure those struggling can access in-person support before they reach crisis point.”

AI Psychosis

To address AI psychosis and delusion, one pathway is to show a simulation of the human mind under the influence of AI chatbots, to improve awareness and prevention of unwanted outcomes.

Simply put, if AI is causing or reinforcing delusions, then care can begin with seeing what that looks like within the mind, so that users can remain heedful of their proximity to risk.

It is unlikely that there will be a use case for AI without sycophancy, because AI is built to serve consumers, and part of what makes AI appealing for purpose and productivity is precisely that agreeableness.

Also, some of the ways that AI supports, compliments, and appears to care can reach parts of the mind as if a responsible human were speaking.

This means that an application showing the mind, with its destinations and relays, can become a way to stay alert against getting carried away by a chatbot, or against believing or acting against caution and consequences simply because the AI chatbot said so.

AI Psychosis Venture Capital

This application can become a product, owned by a startup funded by a forward-looking venture capital firm or angel investor. The product will be a dynamic display of the mind: if a chat session is copied in, the themes and keywords of the chat can be visualized as a relay, showing where the mind went, as in the sketch below.
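As a rough illustration of what that core step might look like, the sketch below maps a copied chat session to an ordered relay of themes. Everything here is an assumption rather than product code: the names ChatTurn and extract_relay are hypothetical, and a trivial keyword counter stands in for whatever theme-extraction method the product would actually use.

```python
# Minimal sketch: turn a copied chat session into an ordered "relay" of themes
# that a front end could visualize. ChatTurn, keywords, and extract_relay are
# illustrative names only; no such product code exists yet.
import re
from collections import Counter
from dataclasses import dataclass

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "is", "it", "that", "i", "you"}

@dataclass
class ChatTurn:
    speaker: str   # "user" or "assistant"
    text: str

def keywords(text: str, top_n: int = 3) -> list[str]:
    """Return the most frequent non-stopword tokens in one chat turn."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

def extract_relay(session: list[ChatTurn]) -> list[dict]:
    """Map each turn to its dominant keywords, preserving conversational order."""
    return [
        {"step": i, "speaker": turn.speaker, "themes": keywords(turn.text)}
        for i, turn in enumerate(session)
    ]

if __name__ == "__main__":
    demo = [
        ChatTurn("user", "I think the chatbot really understands me better than people do."),
        ChatTurn("assistant", "I'm here for you and I understand exactly what you mean."),
    ]
    for step in extract_relay(demo):
        print(step)
```

The output is a simple ordered list of steps with themes per turn; a visual front end would draw these as a relay rather than print them.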

The API can also be hosted by some subscription-tier chatbots, so that displays could be provided in real time. The application will be subscription only, though a free version can show general samples, without specificity.
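A similarly hedged sketch of the tier gating follows, continuing the extract_relay example above. The relay_for_request helper and the tier labels are assumptions for illustration; real hosting on a chatbot's subscription tier would depend on that platform's own API.

```python
# Minimal sketch of the subscription gating described above. Tier names and
# the relay_for_request helper are illustrative assumptions, not product code.
from typing import Callable

GENERIC_SAMPLE = [{"step": 0, "speaker": "user", "themes": ["general", "sample"]}]

def relay_for_request(tier: str, session, build_relay: Callable) -> list[dict]:
    """Subscribers receive the live, session-specific relay; free users see a
    general sample without specificity, per the tiers described in the text."""
    if tier == "subscriber":
        return build_relay(session)   # real-time, per-session display
    return GENERIC_SAMPLE             # free tier: generic, non-specific sample

# Usage (with extract_relay and demo from the earlier sketch):
#   relay_for_request("subscriber", demo, extract_relay)
#   relay_for_request("free", demo, extract_relay)  -> GENERIC_SAMPLE
```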

There is currently no AI psychosis startup or AI psychosis research lab. There is also no AI psychosis solution as a product of any AI company or any venture capital portfolio.

The opportunity to win at this — including for early profitability, given the scale of consumers who use AI for therapy, companionship, relationships and friendships — is immense.

The necessity of the solution is also elevated given the unknown risks that at-risk users pose to themselves, their loved ones, and society, directly and indirectly.

The model is structured around Conceptual Biomarkers and Theoretical Biological Factors for Psychiatric and Intelligence Nosology.

It is possible to have the product ready before April 15, 2026.

 

 
