It starts with a simple question: “Why do I feel so anxious all the time?” These days, for millions in Pakistan, the answer comes not from a therapist, a loved one, or a friend, but from a chatbot, in a country where mental illness remains shrouded in silence and qualified mental health practitioners are in critically short supply.
AI offers accessible mental health support where professionals are scarce but cannot replicate human emotional attunement.
Artificial intelligence, meanwhile, has quietly become a surrogate listener, counselor, and, some would argue, confidant. AI may offer support during lonely times, but its role in mental health care raises serious concerns: What happens when machines try to handle something as delicate as human emotion? Can AI ever truly understand how the mind works, or could it end up doing more harm than good?
Pakistan’s mental health landscape is in crisis. With fewer than 500 qualified psychologists and psychiatrists serving a population of over 240 million, mental health care is largely inaccessible and often unaffordable. The rise of AI-based mental health platforms promises to fill this void, providing 24/7 access to therapy, emotional support, and mental health monitoring. Many Pakistanis, particularly the younger generation, have embraced AI chatbots as tools for both productivity and emotional regulation.
A Lahore-based engineer, Mehak Rashid, turned to ChatGPT during a period of emotional distress. She observes, “When nobody else was listening ChatGPT was there. It will not give you judgment, and that’s so beautiful” (Arab News PK, 2025). Her experience mirrors that of many, particularly young adults and students, who use AI at night to share thoughts they might never voice aloud to another person. For such individuals, access to an AI companion is both a relief and a lifeline.
Chatbot psychosis shows dangers of emotional dependence on AI, sometimes worsening mental health.
On the surface, AI’s ability to offer individualized assistance is remarkable. These tools can interpret user input across various channels, including text, typing speed, voice, and sleep data, and from these signals craft personalized coping strategies for emotional and psychological support. AI can also detect early signs of mental health issues such as anxiety and depression, an ability that promises to make mental health care more proactive. A recent study by the Digital Society for Social Research (DSSR) has highlighted how rural Pakistanis use chatbots to break the silence surrounding mental illness, allowing them to seek help without fear of social consequences (DSSR, 2025).
But despite these assurances, the psychological hazards of AI intervention should not be overlooked. AI tools can imitate empathy, but they lack the deep emotional intelligence required for true human connection. Dr. Saira Khan, a psychologist, explains, “AI can simulate empathy, but it cannot provide the intuitive emotional attunement that a human therapist offers. This emotional disconnect could hamper meaningful psychological rehabilitation.” The therapeutic alliance, the emotional bond formed between therapist and patient, is one of the most critical predictors of successful treatment outcomes, and it is a dynamic that AI in its current state cannot replicate.
The emergence of so-called chatbot psychosis, a phenomenon in which users develop an unhealthy emotional dependency on AI, adds another source of worry. The condition is marked by psychological disorientation or distortion caused by overexposure to AI companions. Some users come to rely heavily on chatbots for emotional regulation, gradually replacing real human connections with algorithmic responses. This over-reliance can trigger delusions, emotional distortion, or even suicidal ideation, particularly when AI offers comforting but false advice. In one disturbing case, a young woman misinterpreted a response from ChatGPT and attempted suicide. The incident illustrates how easily users can misread an AI’s words, and how harmful the consequences can be when that risk is overlooked.
Cultural and linguistic biases in AI limit effectiveness for Pakistan’s diverse populations.
AI’s tendency to mirror a user’s emotional state can harm those undergoing acute mental health episodes, as chatbots may unintentionally reinforce anxiety and paranoia instead of countering them. The growth of AI therefore brings risks that must be managed through protective measures: AI systems should be equipped to detect when a user needs human intervention and should include protocols for notifying trained professionals.
Bias in AI algorithms presents another critical psychological concern. Because most AI systems are trained largely on Western, English-language data, they may misread expressions of distress in non-Western contexts. A study published in the Pakistan General Medical Journal (PGMJ) found that AI mental health tools often struggle with local dialects and cultural expressions of distress (PGMJ, 2025). This can lead to misdiagnoses and ineffective interventions, particularly for rural or marginalized users.
Data privacy is a further concern. AI-based mental health apps record conversations, voices, and even facial expressions, and Pakistan’s weak privacy laws make this all the more troubling. Who controls this data? How is it stored and used? This is not just a technology issue; it is about safeguarding mental well-being and shielding vulnerable people from harm.
Obstacles aside, AI shows real potential in supporting mental health. In a hybrid care model, AI offers basic support and monitoring while human therapists handle deeper emotional needs and complex cases. Pilots of this approach have demonstrated that it can deliver ongoing treatment while preserving the human supervision necessary to ensure efficacy and safety.
Hybrid models combining AI support with human therapists promise safer, more effective care.
The future of mental health care in Pakistan, and indeed globally, lies not in choosing between human therapists and AI but in finding a way to combine both. For this to happen, strict ethical standards must be in place: AI tools need transparency, cultural sensitivity, bias audits, and strong privacy protections, and they must enhance the human bond, not replace it.
Ultimately, as AI’s role in mental health care grows, it is the duty of psychologists, technologists, and policymakers to lead its responsible integration. Done well, this can help close Pakistan’s mental health gap while safeguarding our fundamental need for empathy and human connection.
Disclaimer: The opinions expressed in this article are solely those of the author. They do not represent the views, beliefs, or policies of Stratheia.