AI Propaganda Is Weaponizing Social Media and How We Can Respond
Written By: Katherine Pfeizer
Date: April 24, 2026
Photo by Fotos on Unsplash; graphic by author; screenshots sourced from Donald J. Trump (Truth Social) and Iran Embassy in Tajikistan (X)

In the modern age, when a single image can spark fury across continents within hours, AI-generated propaganda has become a global political weapon amplified by social media. Propaganda, the organized spread of ideas, rumors, or information, spares no one; it has evolved beyond traditional posters and broadcasts into a digital force embedded in everyday online spaces, making it more subtle yet more pervasive. Today, AI-generated images are engineered to trigger anger, fear, or loyalty toward political actors, or to sway political beliefs. They pull audiences into immediate, reactive engagement that can reinforce or manipulate existing political loyalties, biases, and judgments about what is real and credible. This dynamic is further compounded by the rise of low-quality, mass-produced AI content known as “slop,” which floods digital platforms and prioritizes virality over truth or artistic integrity. As a result, AI propaganda intensifies polarization and erodes trust in information itself, making public discourse increasingly volatile. This growing influence demands stronger platform regulation and more critical engagement from the public.
What Is Happening Right Now
On April 12th, Trump posted an AI-generated image on his Truth Social account, one of many instances in which his campaign and online presence have embraced AI-generated media. The image showed him draped in a red-and-white robe, light radiating from his hands as he touched the forehead of a sick man, presumably depicting Trump as a Christ-like healer performing miracles. The image drew immediate criticism, even from some conservative Christians who had previously supported him. After deleting the post amid the backlash, he was confronted with a passing question about it during a PBS News Hour interview on his “no tax on tips” policy. He dismissed the controversy, claiming he had depicted himself as a doctor to show support for Red Cross workers, said he makes people feel a lot better, then blamed the fake news and moved on.
Then, just two days later, Iran’s Embassy in Tajikistan fired back on X (formerly Twitter), posting a video built on the same AI-generated depiction of Trump, the one he had claimed showed him as a doctor. In the video, a Christ-like figure descends from the sky as an automated voice declares, “Your reckoning has come.” Trump screams “No” before the figure graphically punches him, sending him into a pit of fire. The video instantly racked up millions of views on the platform. What began as Trump’s widely criticized post spiraled into an international exchange, with each side weaponizing AI to mock and threaten the other.
The Persuasive Power of AI Propaganda
These two incidents show two governments producing AI propaganda that reached millions of views. Supporters felt either more confident and validated by the political figure shown in the AI-generated content or, in some cases, betrayed, while critics saw clear manipulation that deepened political divisions. Research also demonstrates how persuasive AI propaganda can be. A study from Stanford University’s Institute for Human-Centered Artificial Intelligence surveyed 8,000 Americans using six human-written propaganda pieces (the control) and six versions of those pieces revised with AI-generated content (the treatment). In the control group, 24.4% found the article agreeable; in the treatment group, 47.4% found the AI-revised article agreeable. This near-doubling shows that AI propaganda is a persuasive weapon, and that this technology lets political actors produce endless variations at almost no cost, tailoring them to many different audiences at once. When propaganda is made to feel personal and targeted, it creates an emotional entanglement that erodes trust in the media.
Platforms Benefit from All of It
These incidents also highlight how social media platforms amplify, and profit from, the spread of AI-generated political propaganda. Truth Social was created after Donald Trump was banned from major social networks following the January 6, 2021, Capitol attack. And since Elon Musk acquired Twitter (now X) in October 2022, the platform has gutted its content moderation and reinstated accounts previously banned for coordinated inauthentic behavior. X and Truth Social are not the only platforms where AI propaganda circulates, but they are among the most visible. The visibility and virality achievable on these platforms, combined with their current approaches to moderation (or lack thereof), make both apps dangerous spaces for political manipulation, where highly emotional and divisive content drives engagement, clicks, shares and time spent on the platform, ultimately increasing the advertising revenue and profits of social media companies.
How to Spot AI Propaganda
No one needs a media studies degree to spot AI propaganda. When you see politically charged imagery, ask yourself a few questions:
Does it make a political figure look superhuman, chosen, or divinely protected?
Does it strip a person or group of their humanity entirely?
Does it feel designed to provoke an emotion rather than inform a thought?
Does it lack clear authorship, sourcing, or context?
If the answer to any of these is yes, slow down. Find out what is legitimate or verifiable about the content before sharing it with friends, family, or anyone else. Be aware, though, that these identifiers apply not only to AI propaganda but to propaganda as a whole.
What Needs to Change
Platforms need enforceable, transparent policies on AI-generated media, especially in political content. Lawmakers need to treat AI propaganda as an information security threat with sociopolitical consequences for real communities. And we, as readers and sharers, need to stop treating virality as a measure of truth. AI propaganda works because it is fast and emotionally resonant. So know what you’re looking for, name it accurately, and do not let your feed emotionally manipulate you.
We all share a responsibility to think critically, to verify information before sharing it and to resist engaging with content designed purely to manipulate our emotions. AI propaganda succeeds precisely because it moves quickly and targets deep emotions, resonating on a personal level and driving immediate reactions. The Trump image and the video response stand as fresh warnings of how quickly AI-generated content can escalate and turn personal. This technology will only grow more sophisticated, and what feels shocking today will seem routine tomorrow unless we act. The only question left is whether we will keep spreading and amplifying this content or finally demand better protections; our shared reality depends on the answer.
About the author: Katherine Pfeizer is an editorial staff member who follows current events and enjoys analyzing books and films, especially horror, thriller and classic literature. She is also an undergraduate at UC Davis pursuing a degree in Comparative Literature with a minor in Political Science and Education.
AI-Generated Propaganda, Social Media, Media Literacy, Technology Ethics, Content Moderation
Additional Reading
The Problem of Invasive Slop in Creative Spaces
You Are Not Immune to Propaganda
The Hidden Climate Cost of AI