The Real War Behind Unreal Images

We live in an age where truth is constantly being rewritten by technology. Every scroll, swipe, and click carries both light and darkness. The same digital tools that connect the world are now being used to deceive it.

Amol Deshmukh | Updated: Tuesday, November 04, 2025, 06:55 PM IST
Deepfakes are the latest and most dangerous face of deception. | Pexels Image (Representative Image)

“Deepfakes and synthetic media are unreal, but the battle in the mind and the damage to the body are real.”

Deepfakes are the latest and most dangerous face of deception. The term refers to synthetic media created using artificial intelligence. Through machine learning and deep learning algorithms, thousands of images and videos of a person can be analyzed to replicate their voice, expressions, and gestures. Once trained, the software can produce highly realistic yet completely fabricated videos. The result is a dangerous blend of technical precision and psychological manipulation. 

The explosion of information and artificial intelligence has created what can be called a synthetic reality. This overwhelming flow of content has become fertile ground for a pandemic of misinformation and disinformation. Yet, the weaponization of falsehood is not new. The Mahabharata offers an ancient example: the false news of Ashwatthama’s death changed the fate of Dronacharya and altered the course of the war. Today, technology has turned that single lie into millions of digital illusions, spreading faster than truth itself.  

The real threat of deepfakes does not come from technology alone, but from human nature. People instinctively believe what they see. This trust in visual evidence, once considered a safeguard of truth, has become a vulnerability in the digital age.

The year 2024 has already shown how this technology can be abused. In the United States, fake robocalls using an AI-generated version of President Joe Biden’s voice urged voters to skip the New Hampshire primary and stay home on election day. The synthetic voice was so precise, matching Biden’s tone, inflection, and speech patterns, that many recipients believed it was genuine. Investigators later traced the calls to a political disinformation outfit experimenting with generative AI tools. Around the same time, a doctored video depicting President Donald Trump delivering inflammatory remarks about immigrants and election integrity went viral on social media. Despite being debunked within hours, the clip had already been viewed millions of times, igniting political outrage and deepening public polarization.

These incidents underscore a disturbing truth: seeing is no longer believing.

In India, several deepfake videos circulated during the 2024 general elections, portraying prominent leaders making statements they never uttered. Some of these clips were designed to look like press interactions, while others appeared as leaked campaign recordings. Despite being flagged by fact-checkers, they spread rapidly across social media, blurring the line between truth and fiction. These incidents reflect a pattern of organized psychological operations aimed at influencing democracies and undermining public confidence in electoral systems.

Beyond politics, deepfakes have become powerful tools of harassment and exploitation. Across Europe and Asia, artificial intelligence has been used to generate pornographic videos targeting women, journalists, and activists. These synthetic images are often created using publicly available photos and are circulated to humiliate and silence victims. The technology is also being weaponized for crime and personal revenge. Fake videos have been created to blackmail individuals, extort money, and defame reputations.

One shocking case investigated by Europol involved a company executive who attended a video conference believing he was speaking to his Chief Financial Officer (CFO) and other colleagues. Every person on that call was a deepfake, digitally recreated to look and sound real. The executive was deceived into authorizing a transfer of more than $25 million to offshore accounts. By the time the fraud was uncovered, the money had vanished without a trace.

These examples show how the battlefield of deceit has shifted from physical spaces to digital screens. Deepfakes are not just videos or voices. They are psychological weapons that erode trust, distort perception, and make people question even genuine evidence. When truth itself becomes negotiable, society begins to lose its moral compass.

The danger lies in accessibility and speed. With easy-to-use AI tools now available online, anyone with basic digital skills can create convincing deepfakes. Artificial intelligence can generate false content within minutes and spread it to millions in seconds. Once a fake video goes viral, even a later correction cannot undo the damage.   

Governments around the world are struggling to contain this growing menace. The European Union has introduced the AI Act to regulate the development and misuse of synthetic media. The United States has started drafting laws to criminalize malicious deepfakes that target elections, individuals, or national security. In India, law enforcement agencies rely on existing laws such as the Information Technology Act, 2000, the Bharatiya Nyaya Sanhita, and the Digital Personal Data Protection Act. However, these frameworks are still reactive, not preventive. The law moves after the harm is done, while technology races ahead at the speed of a click.

Ultimately, neither technology nor law can fully contain the threat. AI detection tools will always chase the next generation of forgery software, and no legal remedy can undo reputational or emotional damage once a fake has gone viral. In a world where lies can spread faster than facts, our best defense is not a better algorithm, but a better-informed society.

We must build a human firewall through awareness and digital literacy. Every individual should learn to pause before sharing, verify before believing, and question before reacting. Healthy skepticism must become a natural habit, just like looking both ways before crossing a street.

The battle against deepfakes is not limited to screens or systems. It is a struggle for the human mind itself. The technology may be unreal, but the psychological damage it causes is painfully real.

Amol Deshmukh | The author is a Group 'A' officer in the Maharashtra Education Service, currently heading forensics, cyber and allied sciences.
