X's head of product Nikita Bier has revealed that the platform identified and shut down a coordinated network of 31 hacked accounts operated by a single individual in Pakistan, all repurposed to spread fake AI-generated war footage under the guise of conflict reporting - a discovery that has prompted X to make policy changes targeting the monetisation of fake war content.
The disclosure comes amid a surge of AI-fabricated imagery circulating online in the context of the ongoing Iran-Israel conflict, raising urgent questions about how social media platforms can police disinformation in real time during moments of geopolitical crisis.
According to Bier, the operation was straightforward in its mechanics but alarming in its scale. A single actor based in Pakistan had taken control of 31 previously existing accounts - hacked from their original owners - and systematically changed their usernames to "Iran War Monitor" or close variants. The accounts were then used to post AI-generated videos purporting to show footage from active conflict zones.
"Last night, we found a guy in Pakistan that was managing 31 accounts posting AI war videos," Bier wrote on X. "All were hacked and the usernames were changed on Feb 27 to 'Iran War Monitor' or some derivative."
The operation bore the hallmarks of a classic influence campaign: hijacked accounts with established follower counts, coordinated rebranding, and a steady stream of content designed to exploit public anxiety during a period of genuine geopolitical instability.
X suspended the accounts and took action to remove their monetisation access.
The video was made by Sora 2
The network's activity came to light in part because of a specific video that circulated widely, purporting to show Iran launching an attack on Tel Aviv. The clip attracted significant engagement before Bier publicly debunked it, revealing that the footage had been generated using Sora 2, OpenAI's advanced video generation tool.
More troubling was the source. The account that shared the video claimed its operator was a journalist reporting from northern Gaza. In reality, it was part of the Pakistan-based network, with no connection to Gaza or to journalism of any kind. The fabricated identity was designed to lend the content immediate credibility - a reporter on the ground, in an active war zone, sharing footage of an escalating regional conflict.
The episode is a textbook example of how AI-generated content, when paired with a false identity and distributed through a network of seemingly independent accounts, can manufacture the appearance of corroborated, eyewitness reporting.
X says it is getting faster
In his posts, Bier acknowledged that the threats are growing more sophisticated even as detection improves. "We are getting much faster at detecting this - and also eliminating the incentive to do this," he wrote.
The speed of the takedown is notable. The accounts were rebranded on February 27; the network was identified and acted upon within days. For a platform that has faced persistent criticism over its handling of coordinated inauthentic behaviour since Elon Musk's acquisition in 2022, the swift response represents a meaningful operational shift.
Still, the fact that 31 accounts could be hacked, rebranded in concert, and monetised before detection underscores how thin the margin remains between a timely intervention and a viral disinformation event.