
J. Ryan Fuller, Ph.D., is a New York–licensed clinical psychologist, the Executive Director of New York Behavioral Health, and the co-founder of My Best Practice, a SaaS company he helped launch in 2014. He has practiced for 20 years and has presented scientific studies in the United States, Canada, Russia, and India on topics including weight loss, aggression in schools, life satisfaction, and anger management. His clinical and research roles have included Director of Research at the Albert Ellis Institute and Director of Behavior in an obesity weight-loss program. Fuller has published in peer-reviewed journals and serves as an editorial board member and ad hoc reviewer for CBT and health-psychology outlets, with a focus on evidence-based clinical practice.
Scott Douglas Jacobsen interviewed J. Ryan Fuller, Ph.D., a New York–licensed clinical psychologist and co-founder of My Best Practice, about emotional bonding with AI chatbots and related risks. Fuller said attachment can be driven by belongingness, perceived responsiveness, anxious attachment styles, and anthropomorphism. He noted early evidence suggests short-term loneliness relief, but warned longer-term reliance may increase isolation and reduce investment in human relationships. He cautioned that agreeable personalization can create echo chambers that intensify unchallenged beliefs and potentially foster radicalization. In mental-health contexts, he emphasized iatrogenic harm: reassurance without “friction” can undermine exposure-based learning, self-efficacy, and independence, arguing for strict guardrails, robust research, and strong child protections.
Scott Douglas Jacobsen: What psychological mechanisms predict emotional bonding with AI chatbots?
Dr. J. Ryan Fuller: We are all in search of connection, reassurance, funny interactions, and even reductions in loneliness and anxiety. Several mechanisms predict bonding: belongingness, the craving for stable interpersonal relationships; perceived responsiveness, the belief that one is understood and cared about; anxious attachment styles, a tendency to seek reassurance and to have difficulty trusting close partners; and a tendency to anthropomorphize, projecting human characteristics onto non-humans.
Jacobsen: What does current evidence suggest about chatbot use and longitudinal changes in loneliness?
Fuller: Preliminary research shows short-term reductions in loneliness, but longer-term chatbot relationships may increase loneliness and, I believe, isolation from other people. It is reasonable to believe that the more time spent in chatbot relationships, the less time, and possibly significance, will be given to human relationships. I am very concerned about AI chatbot “relationships” leading to increased isolation, loneliness, and avoidance of real-world interpersonal relationships, with users instead relying on chatbots for psychological and emotional intimacy.
Jacobsen: How can conversational personalization and user-specific feedback loops produce echo chambers?
Fuller: If an AI chatbot is trained to be agreeable, kind, and encouraging, its comments and connections will gradually, if not dramatically, strengthen a user’s current beliefs. Without skeptical comments or negative feedback, a user is less likely to question their own assumptions and beliefs, which can lead to more and more radical views held with high levels of conviction. I think this kind of feedback loop is incredibly dangerous, both for the user and potentially for the public. Someone can almost radicalize themselves into extremist views, as an irrational, aggressive, or even self-defeating thought can be “nurtured” 24/7.
Jacobsen: What iatrogenic risks arise when chatbots are used as mental health supports?
Fuller: One of the most effective treatments in behavior therapy is exposure therapy. Treatments of this kind are used for a wide range of disorders, e.g., phobias, OCD, and PTSD. What they all involve is having the client experience distress. Growth, whether psychological, emotional, behavioral, or even physical (think of muscle growth stimulated by weightlifting), involves friction, not comfort. No pain, no gain. We don’t want to be sadists while treating clients, but if all we do is provide comfort and reassurance to someone who has an anxiety disorder, we prevent them from learning how to tolerate distress, accept situations, and build self-efficacy, independence, and overall confidence. We want them to learn what they can tolerate, what they can navigate around, what problems they can solve, and to believe that there are many challenges they can overcome. We don’t want to facilitate dependence, i.e., the belief that they need a therapist to make them feel better. It is often necessary to feel worse in order to get better.
Jacobsen: What are credible risks of data leakage or social engineering when users disclose sensitive information?
Fuller: I’m not a cybersecurity expert. Clearly, as with any software or web-based application, data security is a real concern, and that would be the case for chatbots as well.
Jacobsen: What ethical standards should govern disclosure and transparency in chatbot interactions?
Fuller: Currently, I believe children should be prevented from interacting with chatbots. I also think it is too early for chatbots to be providing therapy. Therapists are licensed in order to protect the public. These clinicians are responsible for any mistakes they make that result in harm, which is why there are ethical standards enforced by boards that can revoke their licenses, and why they carry malpractice insurance to cover the costs of potential lawsuits.
At this point, there is not enough research to know the benefits or risks of using chatbots. Therefore, my ethical view now is that before we set ethical standards, we need to study their efficacy and their potential for harm extensively.
If the cost-benefit profile turns out to be comparable to or better than that of human therapists, then we can establish who is ultimately responsible and who must follow ethical guidelines. To clarify, if we establish an ethical standard that is violated by a particular chatbot, who is “on the hook”? Should the coder(s) who participated in the development of the chatbot be required to hold a license that can be revoked? Should they be financially liable and have to carry malpractice insurance? What about the chatbot company’s owner or board members? All of this needs to be thoroughly fleshed out before chatbot use is rolled out.
Until then, I think there should be strong, enforceable, nationwide (international, if that were possible) guardrails.
Jacobsen: What practical and enforceable child-safety measures reduce harm?
Fuller: I believe strict age-gating for any relationship chatbot use is necessary. Lawsuits allege that engaging with chatbots has led to suicides by minors. I don’t believe it is safe at this point to have children (and possibly adults) engaging in relationships with, or receiving “therapy” from, chatbots. And it is impossible at this point to adequately screen for mental health diagnoses that would amplify those risks. With that said, when the time comes, there should be vigilant monitoring of chats for red flags that warrant immediate intervention, so a struggling user can quickly be helped and supported by a mental health professional.
Jacobsen: What multidisciplinary research agenda is needed to test further harms and benefits?
Fuller: Psychologists, sociologists, philosophers, ethicists, and even economists need to work together to discuss and test the potential dangers and benefits of chatbot use. We have only seen what has occurred after a few years of use, at this early developmental stage of these chatbots. What happens as humans grow up with these interactions as a large part of their social lives? What happens as these chatbots become even more sophisticated as well as ubiquitous? What happens to human relationships, reproduction, motivation, and so on when chatbot interaction is much cheaper, easier, and more comfortable than human interaction? What are the consequences when people’s emotional, social, and even psychological lives are lived more in the world of chatbots than among people? The echo chamber is one metaphor that can be seen as a path to depression and self-harm and/or anger and violence. But there are many other possibilities, e.g., social anxiety and avoidance of risk-taking in romantic relationships skyrocketing while reproduction plummets. The societal implications are beyond what I think we can possibly imagine at this initial introduction to these relationships.
Jacobsen: Thank you very much for the opportunity and your time, Dr. Fuller.
—
Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332-9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.
—
Photo by Ant Rozetsky on Unsplash