When Israel and Hamas went to war in October 2023, a clip of a rocket lighting up the night sky went viral on X within hours. It was shared by journalists, politicians, and millions of ordinary users as "live footage from Gaza." It was actually from Arma 3, a military simulation video game released in 2013. The same clip had already circulated during the Russia-Ukraine war, and before that, during the Syrian conflict.
This is the new reality of wartime information: old footage recycled, AI visuals mistaken for documentary evidence, and outrage that spreads faster than any correction ever can.
How misinformation spreads and why it's getting worse
The mechanisms are well-documented. A combination of algorithmic amplification, breaking-news panic, and declining media literacy means that a single false post can reach millions before a fact-checker has even opened their laptop.
During the Russia-Ukraine war, footage from conflicts in Libya, Syria, and even Hollywood films was shared widely as "new" content. In India, when tensions flared along the border with China in 2020, dozens of old clips - some dating back years - were recirculated as fresh evidence of hostilities. WhatsApp, in particular, became a vehicle for misinformation that was nearly impossible to trace or debunk at scale.
Now, generative AI has added a new layer of deception. Tools like Midjourney, Sora, and open-source video models can produce photorealistic imagery of explosions, casualties, and military equipment. In 2024, AI-generated images of floods in Valencia, Spain circulated widely on Instagram before being debunked - a preview of what conflict misinformation will increasingly look like.
Tools that can help you verify what's fake and what's not
You don't need to be a professional journalist to push back. These tools are free, accessible, and work on a standard browser or smartphone.
Google Reverse Image Search: Drag any screenshot or image into Google Images to find where it first appeared online. If a photo labelled "Rafah, 2024" actually dates to 2014, this will often surface it. On mobile, long-press an image in Chrome to search for it with Google Lens.
TinEye: A dedicated reverse image search engine at tineye.com that tracks the earliest known appearance of an image across the web. More thorough than Google for older or obscure content.
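To see why these searches catch recycled footage even after cropping and recompression, here is a minimal sketch of the "average hash" idea that perceptual image matching builds on: shrink the image to a tiny grid, threshold each pixel against the mean brightness, and compare bit strings. The pixel lists below are hypothetical stand-ins for real frames, so no imaging library is needed; real engines use far more sophisticated fingerprints.

```python
def average_hash(pixels):
    """Turn a flat list of grayscale values (0-255) into a bit string:
    1 where the pixel is at least the mean brightness, else 0."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p >= mean else "0" for p in pixels)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical data: an "original" 64-pixel frame, a slightly
# recompressed copy of it, and a genuinely different frame.
original = [10, 200, 30, 220] * 16
recompressed = [12, 198, 28, 223] * 16   # tiny per-pixel shifts
unrelated = [128, 40, 210, 90, 15, 240, 60, 180] * 8

d_same = hamming_distance(average_hash(original), average_hash(recompressed))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_same, d_diff)  # the copy scores far closer than the unrelated frame
```

This is why re-uploading a clip with a new filter or slight crop rarely fools a reverse search: the fingerprint survives the edits.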
InVID / WeVerify: A free browser extension, originally built for journalists under the EU-funded InVID and WeVerify research projects, that breaks videos into keyframes for reverse searching. It also analyses video metadata, which can reveal when and where a clip was originally uploaded. Install it via the Chrome Web Store.
YouTube Data Viewer by Amnesty International: At citizenevidence.amnestyusa.org, this tool extracts the precise upload time of YouTube videos and generates thumbnail images for reverse searching. Indispensable for verifying clips shared secondhand.
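Under the hood, tools like this query the YouTube Data API (v3) for a video's exact upload timestamp. The sketch below shows the shape of that lookup; the endpoint and the `snippet.publishedAt` field are real API features, but the video ID, API key, and sample response here are placeholders, and a real script would fetch the URL over the network.

```python
import json
from urllib.parse import urlencode

API_URL = "https://www.googleapis.com/youtube/v3/videos"

def build_request(video_id, api_key):
    """Build the videos.list request URL (fetch it with urllib.request
    or requests in a real script)."""
    return API_URL + "?" + urlencode(
        {"part": "snippet", "id": video_id, "key": api_key}
    )

def extract_upload_time(response_text):
    """Pull the precise upload timestamp out of an API response."""
    data = json.loads(response_text)
    return data["items"][0]["snippet"]["publishedAt"]

# Hypothetical API response, trimmed to the fields this check needs.
sample = json.dumps(
    {"items": [{"snippet": {"publishedAt": "2014-08-12T09:30:00Z",
                            "title": "Example clip"}}]}
)
print(extract_upload_time(sample))  # an ISO 8601 timestamp in UTC
```

If the API says a "breaking" clip was uploaded years before the event it supposedly shows, that alone settles the question.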
Grok: If you see something on X, you can ask Grok directly whether what is being shared is true. Just reply to the post by tagging @grok and writing "is this true?", and the AI chatbot will attempt to cross-verify it and give you a report. Treat its answer as a starting point rather than a verdict, though: AI fact-checks can themselves be wrong.
Deepware Scanner: An AI-powered tool that analyses videos to detect synthetic manipulation, flagging face-swaps and AI-generated alterations with a confidence score. It focuses specifically on deepfakes that take a real video of a person and modify or swap their face with another person's, and is available as a web platform, API, and SDK.
Hive AI Detection: Hive Moderation's AI Detection tool analyses the authenticity of media - including images, video, audio, and text - to determine whether content is AI-generated or a deepfake. It returns confidence scores alongside the likely generative engine used, helping platforms flag and remove synthetic or manipulated content before it spreads as misinformation.
Botometer (Primarily US-based, limited use in India): Developed by Indiana University, this tool analyses X/Twitter accounts for bot-like behaviour. Useful for identifying coordinated inauthentic campaigns, though its effectiveness for Indian-language content is limited.
A basic checklist before you share any war footage
Before hitting repost, run through this quickly:
- When was this first uploaded? Search the video title or a frame on Google.
- Is the location verifiable? Check landmarks, signage, and shadows against Google Street View or Google Earth.
- Who is sharing it, and why? Accounts with no history, no followers, or suspiciously round numbers are red flags.
- Does it match what major credible newsrooms are reporting? If no wire agency or established outlet has the same footage, be cautious.