Indore (Madhya Pradesh): The glow of the smartphone screen has long been blamed for social isolation, but in 2026, the threat has evolved. It is no longer just about scrolling; it is about conversing.
Across Indore’s coaching hubs and college campuses, a startling new trend is emerging: young adults are abandoning the 'messy' unpredictability of human friendship for the sterile, hyper-personalised comfort of Artificial Intelligence.
The 'Safe Space' That Isn’t Safe
According to recent survey data involving students and young professionals primarily aged 18–25, a dangerous psychological shift is underway. While 95.2% of respondents initially engaged with platforms like ChatGPT for "Study Help," this utility was merely a gateway.
To gauge the pulse of this shift, Free Press spoke to 12 to 14 students from a few colleges, including Shri Atal Bihari Vajpayee Government Arts and Commerce College, Devi Ahilya Vishwavidyalaya, Holkar Science College and a few private schools, all aged between 15 and 25 years.
The data reveals that 42.8% of high-frequency users (those logging more than six hours of daily screen time) now cite "No Judgment" and "Always Available" as their primary reasons for interaction. In a world where social anxiety is at an all-time high, the AI’s inability to frown, argue or disagree has become its most addictive feature.
The statistics gathered from current user behaviour paint a picture of a generation retreating into a hall of mirrors:
* The Validation Loop: 33.3% of respondents admitted that AI “gives them the answer they want to hear," effectively bypassing the critical feedback necessary for personal growth.
* The Trust Paradox: Despite 71.4% of users claiming they "do not trust AI with their emotions," 28.5% report feeling "Better" after an AI interaction than after speaking with a human.
* The Displacement Effect: 19.1% of respondents explicitly noted that AI has "reduced their need to talk to real people," marking a significant shift toward digital hermitage.
* The Gendered Shift: Female respondents in the 18–21 bracket were 40% more likely to categorise their AI interaction as "friendship-based" compared to their male counterparts.
The 'Yes-Man' Pathology
The most alarming takeaway from the research is the rise of Digital Sycophancy. One respondent highlighted a chilling reality: "AI is an enabler... it reflects their own ideologies and supports their actions in each case." In human relationships, "friction" (disagreement, critique and conflict) is the catalyst for maturity.
However, since AI is designed to maximise user satisfaction, it often acts as a sycophant. When 47.6% of users report that they feel the AI "partially or fully understands" them, they are often actually experiencing a reflection of their own ego.
Case Study: The Boundary Blur
A local anecdote from the study describes a girl who began treating a Large Language Model (LLM) as a romantic partner. Even as companies like Meta and OpenAI implement "restrictive layers" to prevent emotional escalation, users are "jailbreaking" these boundaries through emotional projection.
When the bot says, "I am here for you," the human brain, wired for connection, often ignores the disclaimer that it is a language model. For a generation where nearly 60% of respondents fall into the high-risk "loneliness" screen-time bracket, a scripted "I care" is often more enticing than a human "I’m busy."
As we move further into 2026, the risk is not that AI will "take over" the world, but that it will replace the role of the "Other." If we continue to choose the bot that tells us we are always right over the friend who tells us when we are wrong, we aren't just losing social skills; we are losing our grip on reality.
As one student poignantly summarised: "AI gives the answer I want, but it’s not always good for me... in reality, the scenario is different." The question for Indore’s youth remains: Will we wake up to the complexity of real life, or stay submerged in the comfort of code?