The trend of using Artificial Intelligence (AI) as a space to confide and seek emotional validation has become increasingly popular among young people. AI is perceived as a safe platform for sharing personal stories or expressing distress without fear of judgment. However, this practice carries serious risks, including potential harm to human life: AI lacks genuine empathy and emotional understanding, and when things go wrong, no entity can be held accountable for the consequences.
Professor Ridi Ferdiana from the Faculty of Engineering, Universitas Gadjah Mada (FT UGM), explained that AI is a technological product designed to address personal human problems, which is why it is so widely used and trusted.
“From a business standpoint, this phenomenon is strategically positioned because it allows a product to reach a deeply personal level, ideally becoming part of people’s daily lives,” Professor Ferdiana said on Thursday (Dec. 4).
From a technical perspective, Professor Ferdiana noted that the main goal of digital transformation is to encourage people to become more aware of digital aspects in their lives. He emphasized that there is nothing inherently wrong with using AI as a conversational partner or a confidant.
“There is nothing wrong with using AI as a dialogue partner,” he stated.
From a social perspective, Professor Ferdiana explained that AI chatbots act as relatively consistent conversational partners designed to assist with interaction under ideal conditions.
However, this consistency should not be confused with genuine care; it is merely the result of predicting the next word in a sequence. AI operates on machine learning principles, relying on high-quality pre-processed data.
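As an illustration of what "predicting the next word" means, the following is a minimal, invented Python sketch, not the code of any actual chatbot: a language model extends a prompt by repeatedly picking a likely next word from probabilities learned from data. The tiny probability table and example words here are hypothetical.

```python
import random

# Hypothetical toy "model": for a given last word, the likely next words
# and their weights. A real model learns such probabilities from huge
# amounts of pre-processed text rather than a hand-written table.
NEXT_WORD_PROBS = {
    "i": {"feel": 0.6, "am": 0.4},
    "feel": {"sad": 0.5, "alone": 0.3, "better": 0.2},
    "sad": {"today": 0.7, "lately": 0.3},
}

def generate(prompt_words, max_words=5):
    """Extend a prompt word by word by sampling from the toy probability table."""
    words = list(prompt_words)
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1].lower())
        if not options:  # no known continuation: stop generating
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate(["I", "feel"]))  # e.g. "I feel sad today"
```

The apparent attentiveness of the reply comes entirely from such learned word statistics, which is why consistency is not the same as care.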
“Essentially, whatever you instruct it with determines the words it generates. AI operates on a ‘garbage in, garbage out’ principle: poor input leads to poor output,” he said.
Professor Ferdiana observed that technology is now evolving toward AI systems capable of simulating feelings and empathy. AI is also progressing toward agentic AI (systems that not only respond but also perform specific actions).
Although such advancements offer many benefits, Professor Ferdiana emphasized the critical need for governance and clear policies regulating AI use.
“AI is like medicine. Overuse can cause ‘poisoning’,” he remarked.
He further stressed that AI must be designed to be safe from the outset. The principles of Responsible AI, Ethical AI, and Transparent AI are essential to ensure that AI does not produce negative impacts, such as hallucinations, factual inaccuracies, or subtle influence over users.
“Use AI in moderation. Just as with screen time on smartphones, we need to set limits when using AI as a space for emotional expression. We must not become overly dependent and lose control over ourselves,” he advised.
Author: Jelita Agustine
Editor: Gusti Grehenson
Post-editor: Rajendra Arya
Illustration: Blablast.id