On October 11, CNN journalist Sara Sidner sent waves of horror and shock across the world when she reported that 40 Israeli babies had been beheaded by Hamas. A few days earlier, on October 7, Hamas had launched a major offensive on Israel. Its fighters crossed into areas occupied by Israel, invaded kibbutzim, and raped, tortured, and murdered settlers. The attack sent reverberations across Israel and the world.
Hamas Offensive and Its Aftermath
Depending on your beliefs, Hamas either sent an army, armed insurgents, or terrorists to attack Israel. And depending on your convictions, you would believe or disbelieve the claims of human rights violations. But even for the most hardened observer of conflict and war, the idea of 40 infants being decapitated was shocking.
American President Joe Biden gave the story more credence by referring to beheaded babies in a press conference. And as the outrage grew, the American reporter who put out the story issued an unreserved apology. "I am sorry," she said, because neither the Israeli government, nor journalists, nor anyone else could confirm that babies had been beheaded. But the damage was done. The story of the 'beheaded' babies gained traction as anti-Muslim groups across the world jumped on the bandwagon to spread it and to justify subsequent Israeli atrocities in Gaza.
The Dark Side of Digital Dissemination
In the ever-evolving digital landscape, the phenomena of propaganda, fake news, and deep fakes are not just theoretical concerns but real-world issues with significant impacts. We have seen how in the Israel-Hamas conflict, misinformation can exacerbate tensions and influence public opinion on a global scale. Similarly, during the state elections in India, the spread of fake news can significantly sway voter perceptions and outcomes, underscoring the urgent need for policy interventions. Looking ahead to the upcoming elections in the USA, the potential for deep fakes to create false narratives poses a profound threat to the democratic process, highlighting the critical importance of robust policy frameworks to address these challenges.
If fake news was a huge problem in the last set of elections, the last set of wars, and the last set of international skirmishes, the rise of generative AI has made it worse. The ability to create and disseminate realistic content in a fraction of the time has grave implications for society at large. Propagandists on both sides have learnt the art of pushing humanity's buttons, targeting our deepest fears and emotions: bloodied children; people made homeless by war, searching for their loved ones; people of a culture unlike yours overrunning your way of life. We have all seen videos like this, distributed earnestly on WhatsApp groups. While the people who create such content may border on evil, wanting to see the world burn, they are few in number. The videos gain virality because of those who share them forward. And those who share them do so not because they are evil, but because they believe the horror in the video and want to warn everyone in their contact list about the dangers they may face.
Generative AI's Role
As we grapple with the realities of this new information era, it's crucial to recognise that the battle against fake news and AI-generated misinformation is not just a technological or political challenge, but also a societal and ethical one. It calls for a collective response from individuals, communities, governments, and international bodies. For too long, social media and tech companies have used the fig leaf of "free speech" to allow lies and fakery to spread unchecked. Nothing in the world requires that liars and propagandists, or indeed news organisations, be able to broadcast their lies live. A filter of verification is a must. This is not censorship; it is simply verifying content before it goes live, an old and established practice in news. In a world where many individuals have the reach of small media companies, this is an absolute requirement.
Additionally, education systems worldwide need to prioritise media literacy, teaching not only how to discern fact from fiction, but also the ethical implications of sharing unverified content. Governments and policymakers must collaborate to create laws and regulations that hold disseminators of fake news accountable, without stifling innovation and freedom of expression. Beyond that, there is a need for global standards and protocols to regulate the use of AI in media, ensuring that these powerful tools are used responsibly and ethically. Ultimately, the goal is to foster an informed, critical, and ethically aware global citizenry that can navigate the complexities of the digital age with discernment and integrity.
In this era of unprecedented digital influence, where the boundary between truth and fabrication becomes increasingly blurred, we are all called upon to be not just consumers of information, but its guardians as well. The fight against misinformation and AI-generated deceit is not a distant battle fought by governments and tech giants alone; it is a daily responsibility that rests in our hands, every time we choose to share a piece of news or a video clip, and every time we risk making enemies in our family or building groups by calling out fake news.
The Collective Responsibility
As we stand at this critical juncture in the information age, the question we must ask ourselves is not just whether we can differentiate the real from the fake, but whether we have the collective will and ethical courage to uphold the truth, even when it challenges our beliefs or interests. This is not just a challenge, but an opportunity to shape a digital future anchored in integrity, responsibility, and a shared commitment to the truth. How we respond to this challenge will define not only the landscape of our media but the very fabric of our global society.
The writer works at the intersection of digital content, technology, and audiences. She is a writer, columnist, visiting faculty, and filmmaker.