Every morning, millions across conflict zones reach for their phones. They search for news. What greets them – alongside the genuine – is a flood of deceptive synthetic imagery. This is not an accident. It is an industry.
Iranian missiles reducing Tel Aviv to rubble. American soldiers paraded before Iranian cameras. Skyscrapers in the UAE collapsing in fireballs. None of it happened. Much of it is going viral, reaching hundreds of millions of viewers.
Produced in seconds by commercially available AI apps, this fabricated content serves two masters simultaneously: the psyops of warring parties, and the income of content creators from São Paulo to Mumbai. The technology companies supplying both the creation tools and the distribution infrastructure profit from all of it.
Of particular concern to humanitarians, this content is helping to create a dangerous vacuum in the application of the rules of war. Under international humanitarian law, the principle of distinction requires parties to a conflict to differentiate between civilians and civilian infrastructure, on the one hand, and legitimate military targets, on the other. When synthetic content is used to “manufacture reality” or to justify the destruction of protected sites, such as hospitals, it erodes the evidentiary basis required for accountability and international oversight.
By degrading the shared cognitive infrastructure, the AI disinformation ecosystem allows warring parties to obscure potential war crimes. In the age of AI, the protection of civilians needs to include the protection of the information environment itself.
The new anatomy of a lie
Since the Israeli-American military campaign against Iran began in late February, social media platforms have been inundated with videos and images. Much is authentic – raw footage captured on phones of bombarded streets from Tehran to Beirut. This is documentation of great value given the heavy censorship imposed by both Israeli and Iranian authorities, and the possible effect of Donald Trump’s threats against media outlets that stray from his administration’s mercurial line.
But embedded within that authentic stream is a secondary torrent: high-fidelity synthetic content, engineered for virality. These videos are short. They lack context. They cite no sources. They are built to confirm what the target audience already wants to believe – or to reshape what they think they know.
Fabrication and propaganda during conflicts have been around for many centuries. What is new is the fidelity, the speed, the low cost, and the profit.
The latest generation of AI video tools has collapsed the barrier between real and manufactured footage almost entirely. These tools – Google’s Veo, OpenAI’s Sora, and xAI’s Grok among them – can produce convincing videos of real public figures saying things they never said, and render recognisable streets as bombed-out ruins. Through API aggregators, a minute of generated video can be purchased for under a dollar, with no subscription required.
Social media platforms such as those run by Meta, X, TikTok, and YouTube are ultimately advertising businesses. Their algorithms are largely optimised for engagement, which often trumps accuracy and facts. Content that provokes, outrages, or thrills generates more interaction. More interaction generates more advertising revenue.
Platforms pay content creators a share of that revenue based on views and the geographic markets in which those views occur – on Facebook, for example, a creator can earn four to six dollars per thousand views. The global creator economy was valued at an estimated $25 billion last year. In India – one of the most prolific suppliers of synthetic conflict content – a single creator can earn hundreds to thousands of dollars a month from platform ad-share, in a country where the average monthly wage is $180-228.
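To make that arithmetic concrete, here is a minimal back-of-envelope sketch in Python. It uses only the figures quoted above and earlier ($4-6 per thousand views, roughly a dollar per minute of generated video, a $180-228 average monthly wage); the posting volume and views per clip are illustrative assumptions, not reported data.

```python
# Back-of-envelope economics of a synthetic conflict-content operation.
# Dollar figures are the ones cited in the article; the output volume
# and views per clip are purely illustrative assumptions.

AD_SHARE_PER_1K_VIEWS = (4.0, 6.0)    # USD per 1,000 views (Facebook range cited above)
GENERATION_COST_PER_MIN = 1.0         # USD: "under a dollar" per minute of AI video
AVG_MONTHLY_WAGE_INDIA = (180, 228)   # USD, as cited above

videos_per_month = 100     # assumption: a high-volume creator's output
views_per_video = 10_000   # assumption: modest virality per clip
minutes_per_video = 1      # short, context-free clips

total_views = videos_per_month * views_per_video
revenue_low = total_views / 1000 * AD_SHARE_PER_1K_VIEWS[0]
revenue_high = total_views / 1000 * AD_SHARE_PER_1K_VIEWS[1]
cost = videos_per_month * minutes_per_video * GENERATION_COST_PER_MIN

print(f"Monthly ad-share revenue: ${revenue_low:,.0f}-${revenue_high:,.0f}")
print(f"Monthly generation cost:  ${cost:,.0f}")
print(f"Multiple of average wage: "
      f"{revenue_low / AVG_MONTHLY_WAGE_INDIA[1]:.0f}x-"
      f"{revenue_high / AVG_MONTHLY_WAGE_INDIA[0]:.0f}x")
```

Even under these modest assumptions, a month’s ad-share dwarfs both the cost of production and the local wage – the margin that sustains the industry.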
Compounding this, a common technique among high-volume fabricators involves operating networks of hundreds of linked accounts to market their lies. The industrial logic is clean: Creation is cheap, distribution is free, and the audience is global.
State actors and compounding harm
In addition to individual profit-seekers, there are organised actors: governments, private intelligence contractors, and state-affiliated influence operations running coordinated accounts at scale. Their objective is not money. It is the manufacture of reality.
The Israeli Ministry of Foreign Affairs offers an instructive case. In November 2023, an affiliated Arabic account published a video of a woman presenting herself as a Palestinian nurse, claiming Hamas had seized operational control of Al-Shifa Hospital – the same hospital Israel would later destroy. The video was subsequently confirmed as fabricated. By then it had accumulated more than 16 million views.
And beyond both profit and propaganda lies a third category: the genuine believer. Millions engage with synthetic content not because they have been paid to, but because it confirms what they hope is true. They share it. They amplify it. When a fabricated video goes viral, the harm is done, even when debunked or removed later on.
A media environment saturated with synthetic deceptive content inflames sectarian, ethnic, and religious hostility. And it has demonstrably contributed to real-world violence.
In Syria last year, an audio recording purportedly captured a Druze cleric uttering blasphemous statements against the Prophet Muhammad. It spread rapidly despite the cleric’s categorical denial and the Syrian interior ministry’s formal confirmation that the recording was fabricated. Sectarian violence erupted across the country. Within days, more than 130 people – the majority of them Druze – had been killed.
The so-called liar’s dividend is a byproduct of such a disinformation ecosystem. As populations become unable to distinguish evidence from invention, the ability of people in positions of power to muddy the waters in their favour increases exponentially. Donald Trump’s systematic labelling of unfavourable coverage as “fake news” is not an aberration. It is a template.
What is to be done
There is no silver bullet for retaining public trust and adequately reducing the harmful impact of online AI-enabled deception. Governments, tech platforms, civil society organisations, and individual users all have a role to play.
The European Union’s AI Act offers the most substantive and helpful model to date, focused on transparency and platform accountability. But it is not enough.
Platforms bear the operational responsibility – and face the hardest technical challenge. With billions of daily uploads, human review is not a viable detection mechanism. AI-based detection systems have become the first line of defence, tasked with flagging synthetic content before it reaches viral distribution, and with demoting or removing material deemed harmful or in violation of a platform’s terms of use.
Such actions require substantial investment to have meaningful impact. Platforms must expand detection infrastructure, implement cross-platform watermarking standards for AI-generated content, demote or remove borderline or potentially harmful content, and demonetise borderline fabricated content – particularly in active conflict zones, during election cycles, or in periods of elevated social tension.
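As a rough sketch of what such tiered, context-sensitive enforcement could look like, consider the following Python fragment. The detection scores, thresholds, and risk flags are hypothetical – no platform publishes its actual pipeline – and the watermark check stands in for a C2PA-style provenance credential.

```python
# Illustrative sketch of the tiered enforcement described above: label,
# demote, demonetise, or queue removal of likely-synthetic content, with
# stricter thresholds in high-risk contexts. All scores, thresholds, and
# flags are hypothetical; no platform's real pipeline is public.
from dataclasses import dataclass

@dataclass
class Upload:
    synthetic_score: float   # 0-1 output of an AI-detection classifier (assumed)
    has_watermark: bool      # e.g. a C2PA-style provenance credential
    conflict_zone: bool      # contextual risk flags
    election_period: bool

def enforce(u: Upload) -> list[str]:
    actions = []
    # Conflict zones and election periods get stricter thresholds,
    # per the proposal in the paragraph above.
    high_risk = u.conflict_zone or u.election_period
    flag_at, act_at = (0.5, 0.7) if high_risk else (0.7, 0.9)

    if u.synthetic_score >= flag_at and not u.has_watermark:
        actions.append("label as likely AI-generated")
    if u.synthetic_score >= act_at:
        actions.append("demote in recommendations")
        actions.append("demonetise")
    if u.synthetic_score >= act_at and high_risk:
        actions.append("queue for removal review")
    return actions

print(enforce(Upload(0.85, False, conflict_zone=True, election_period=False)))
# -> ['label as likely AI-generated', 'demote in recommendations',
#     'demonetise', 'queue for removal review']
```

Real systems would combine many signals with human review; the point is only that the stricter conflict-zone and election-period thresholds proposed above are straightforward to express once a platform commits to them.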
No structural intervention is sufficient without widespread digital literacy. The public needs to be able to interrogate content before liking or sharing it, regardless of how emotionally compelling it is, and regardless of how much the viewer wishes it were true.
Many people circulate fabricated content not out of malice, but because it aligns with what they hope has happened. Professional journalism, for all its imperfections, remains the most reliable filter available. Scrolling past videos from unverifiable accounts – however viral – in favour of established outlets is a discipline we must all practise, especially during conflicts.
Credible fact-checkers exist and can be better used. The International Fact-Checking Network maintains an impressive list of such fact-checkers around the globe. When a piece of seemingly important content raises suspicion, we should consult one that covers the relevant region.
Without these interventions – structural and individual, legal and cultural – what remains is an information environment that functions as a hall of mirrors: reflecting to each viewer only what they already believe, or what a government, a platform algorithm, or a content farm in another hemisphere has calculated will most effectively keep them watching.
Truth and a shared sense of reality are among the casualties. So is the ability to hold powerful individuals and countries to account when their actions violate international law and harm civilians.
Disclosure: the author is a member of Meta’s independent Oversight Board.
The New Humanitarian puts quality, independent journalism at the service of the millions of people affected by humanitarian crises around the world. Find out more at www.thenewhumanitarian.org.
This story was originally published by The New Humanitarian.