In the age of deepfakes and brazen lies, we need to figure out who we can trust. Credible media institutions are more crucial than ever.
The podcasting platform used by Current Affairs, Riverside, has introduced a creepy new feature called “VideoDub.” With VideoDub, if you change what you say in the transcript of a video, it will change what you say in the video itself. Through the magic of AI, it will create audio of your voice saying the thing you wish you’d said. And then it will modify your lips on the video so that it also looks like you are saying the thing you wish you’d said, rather than the thing you actually did say. So if I say that a group of flamingos is called a “murder,” and then I later remember it’s called a “flamboyance,” I can change the transcript, and a fake Nathan Robinson voice will say “flamboyance” even if I’ve never said the word in my life, and the real visual images of me saying “murder” will be swapped out for generated images that manipulate my lips to say “flamboyance.”

Riverside has put some limits on this for now. You can only do it on the host’s track, meaning you can’t manipulate what your guests have said, and you can only make little corrections. But this is yet another small step in the direction of universally accessible, indistinguishable-from-real, rapidly deployable deepfake technology. Riverside might put limits on its use, but we have to assume that lots of other programs won’t. We must adapt to a world in which it is not just possible, but easy, to instantaneously change what an interviewee said, and have it look like they said something completely different. (Interviewees should take note and always record their own version of a conversation in any hostile forum, although of course they could be accused of faking the real version!)

The technology has gotten very scary, very fast. Using even a short clip of someone, you can almost instantly generate a convincing audio fake of them saying anything you like. AI video is getting crazy, too, and it’s easy to generate fake footage that will be politically inflammatory—fake riots and street fights, fake poll workers committing nonexistent election fraud, and so on. One of the most alarming aspects of this new technology is that the speed is increasing and the barriers to production are getting lower. It’s not just possible to make indistinguishable AI fakes; it’s easy to make them and to spread them everywhere. Soon, it will be simple to generate hundreds of entirely realistic, totally false news reports within seconds, post them everywhere, and watch them take off. (Such news reports can already be made; the speed and ease are constantly increasing.) I can very easily imagine right-wing accounts like LibsOfTikTok, which do not care about truth, publishing fake AI footage purporting to show, say, a transgender person committing a crime, and whipping up a mob of hatred (which is LibsOfTikTok’s full-time job). Right now, older people fall for AI-generated slop more than younger people do, but soon none of us will be able to tell if what we’re seeing is real.

Five years ago, Ryan Metz presciently wrote in this magazine:

If you think “fake news” is a problem now, just wait. When an image can be generated of literally anyone doing literally anything with perfect realism, truth is going to get a whole lot slipperier. The videos will soon catch up to the images, too. Already, it’s possible to make a moderately convincing clip that puts words in Barack Obama’s mouth. Fake security camera footage, fake police body camera footage, fake confessions: We are getting close.
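To make the mechanics concrete, here is a rough sketch of how an edit-the-transcript, edit-the-video pipeline works in principle. To be clear, this is not Riverside’s code or API: the voice-cloning and lip-sync steps below are hypothetical placeholders, and only the first step, finding what changed in the transcript, is actually implemented, using Python’s standard difflib module.

```python
import difflib

def find_edited_spans(original_words, edited_words):
    """Diff the original transcript against the edited one to find
    which words changed, and where they sit in the original."""
    matcher = difflib.SequenceMatcher(a=original_words, b=edited_words)
    edits = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            edits.append({
                "old_text": " ".join(original_words[i1:i2]),
                "new_text": " ".join(edited_words[j1:j2]),
                "word_range": (i1, i2),  # position in the original transcript
            })
    return edits

# Hypothetical placeholders -- not any real product's API.
def synthesize_voice(speaker_sample, text):
    """Would clone the speaker's voice and generate audio for `text`."""
    raise NotImplementedError

def resync_lips(video_segment, new_audio):
    """Would regenerate the speaker's mouth movements to match `new_audio`."""
    raise NotImplementedError

original = "a group of flamingos is called a murder".split()
edited = "a group of flamingos is called a flamboyance".split()
print(find_edited_spans(original, edited))
# [{'old_text': 'murder', 'new_text': 'flamboyance', 'word_range': (7, 8)}]
```

The unnerving part is that the two placeholder steps, which were research problems only a few years ago, are now off-the-shelf product features.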
Marco Rubio has worried that “a foreign intelligence agency could use the technology to produce a fake video of an American politician using a racial epithet or taking a bribe” or a “fake video of a U.S. soldier massacring civilians overseas.” More worrying is what the U.S. military and police forces could do with it themselves. It didn’t take much deception to manipulate the country into supporting the invasion of Iraq. Fake intelligence is going to become a whole lot more difficult to disprove. Notably, this week it came out that Rubio himself had been deepfaked by an impersonator who contacted U.S. officials using Rubio’s voice.

The good news would seem to be that because so much AI-generated fakery spreads on just a few monopolistic social media platforms, it should be easy for those companies to put verification procedures in place, monitor carefully for slop, lies, and scams, and purge them quickly. The trouble is: the companies don’t care. Meta has made it clear it won’t try to get rid of AI-generated content. Elon Musk’s X is full of totally unreliable, AI-generated garbage. His own AI fact-checking tool, Grok, seems to have gone pro-Nazi. (Grok is also not good at its job. It recently said a viral fake AI-generated video of “Alligator Alcatraz” being flooded was real.) Both platforms even financially reward the creators of fake AI rubbish.

There has always been propaganda and fake news. God knows the New York Times has its fair share of bullshit. But there is a distinction here: most of the problems with mainstream media are problems of biased framing, selective presentation, misleading narratives, and manipulative language, rather than outright false factual claims. When we’ve criticized the New York Times here at Current Affairs, it has usually been for ignoring topics and voices, focusing on the wrong things, or choosing to publish loathsome opinion commentators, not for making up news out of whole cloth. The New York Times might give you a distorted view of existing reality, but that’s different from just inventing reality. The truth is usually somewhere in the paper, if you’re willing to go looking for it. (The Wall Street Journal can even be remarkably honest in its depiction of the class struggle under capitalism!)

It’s hard to overstate the dangers of this new technology. Scammers are already using AI to convincingly impersonate people’s relatives and steal money from them. A recent scam in Israel was incredibly sophisticated:

This wasn’t the usual con. The fraudsters impersonated prominent tech and finance figures… as well as reputable institutions such as the Tel Aviv Stock Exchange, the Israel Securities Authority, Bank Hapoalim, Discount Bank, and Meitav Dash. On Instagram, entire pages were built around deepfake videos. In one case, a synthetic clip showed Bank of Israel Governor Amir Yaron endorsing a bogus financial product. To enhance credibility, some pages posted legitimate-looking content before switching to the scam, building trust in their audience… The scammers went all-in on AI-driven manipulation. Dozens of deepfake videos were released, depicting well-known public figures like Amir Yaron, Prime Minister Benjamin Netanyahu, Eyal Golan, Noa Kirel, Gal Gadot, Elon Musk, and Mark Zuckerberg—all appearing to tout fake investment opportunities. They customized the videos for different Israeli demographics, even adding subtitles in Russian, and timed releases to coincide with current events—making the content feel timely, local, and authentic.
We can’t trust our leaders here. The Trump administration is staffed with some of the world’s most spectacular morons, who do not understand or care about the dangers that AI poses and have a “let it rip” approach. (An early version of Trump’s “Big Beautiful Bill” even tried to ban states from regulating AI for the next decade.) Many of them, including Trump, are scammers themselves. Heck, Trump loves posting AI-generated rubbish, creating the world he wishes existed, one where Gaza becomes his next luxury resort. The administration’s security protocols are abysmal, as we know, so I would honestly not be at all surprised if Trump accidentally found himself in a diplomatic negotiation with AI Putin. Hopefully we don’t end up bombing a country because some commander was fooled by an AI deepfake of Pete Hegseth.

But the question is: What can be done? It’s hard to do much at the individual level. I mean, I can try to be a skeptical news consumer. But as the technology gets better and better, there’s going to be no good way for me to tell, just from looking at something, whether it’s real or fake. What I can do is get my understanding of the world from institutions that I trust to put resources into checking their facts. For instance, I read Drop Site News because it’s run by Ryan Grim and Jeremy Scahill, and I know they are two experienced journalists who check whether things are true and correct them if they are false. Same goes for The Lever, run by David Sirota. Or Zeteo, run by Mehdi Hasan. I also read the news skeptically. I look for claims to be substantiated. If a statistic is cited, I want to know who produced it, by what methods, using what data. But it also happens to be my full-time job to read and analyze the news. Most people do not have the time, or the background in bullshit-detection, to scrutinize all the claims that pass before their eyes online.

Of course, you can react by becoming a total disbeliever, unwilling to accept anything anyone tells you. But that’s not a good way to become an informed participant in democracy. In a democracy we have to make decisions ourselves, and those decisions need to be based on solid information. If one mayoral candidate accuses another of being corrupt and predatory, you need to be able to look at the evidence to know whether the accused candidate is, in fact, corrupt and predatory. If the public isn’t looking at the evidence for claims, then all politicians are just in a contest to see who can lie the most convincingly. (I’m sure plenty of people would say that’s an accurate description of the state of American democracy.)

If our corporate overlords were responsible, we’d be facing far less of a problem. Digital watermarking could help. The prevailing approach has been to try to watermark AI-generated content, but that might be impossible. It might be better to have practices that certify videos as real. Camera manufacturers have started introducing ways to certify that content was genuinely shot with a camera, and perhaps it’s time for a widespread “verified” tag that would certify that content was at least what it appeared to be, like a “certified organic” label. (Although hopefully much better, given how easy the organic label is to game.) New laws are also being introduced to criminalize the malicious use of deepfakes, and publishing nonconsensual deepfake porn of someone is a crime. But ultimately, I don’t think we’re going to do well in this alarming new era without building institutions that people trust to check what’s true and false.
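The “certify it as real” idea, for what it’s worth, is at its technical core ordinary public-key cryptography, and it requires surprisingly little machinery. Below is a minimal sketch, not any camera vendor’s actual scheme: it assumes the third-party Python cryptography package, and the key handling is deliberately simplified (in a real camera, the private key would live in secure hardware).

```python
# A minimal sketch of camera-side content signing, assuming the third-party
# "cryptography" package (pip install cryptography). Not a real vendor's scheme.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Simplified: in practice this key would be generated and kept inside
# the camera's secure hardware, never exported.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

def sign_footage(video_bytes: bytes) -> bytes:
    """Camera side: sign a hash of the footage as it is recorded."""
    digest = hashlib.sha256(video_bytes).digest()
    return camera_key.sign(digest)

def verify_footage(video_bytes: bytes, signature: bytes) -> bool:
    """Viewer side: check the footage against the camera's public key."""
    digest = hashlib.sha256(video_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

footage = b"...raw video bytes..."
sig = sign_footage(footage)
print(verify_footage(footage, sig))          # True
print(verify_footage(footage + b"x", sig))   # False: any edit breaks the signature
```

A signature like this only proves the file hasn’t changed since it was signed. The genuinely hard problems are key management and getting platforms to surface the check to viewers, which is what industry efforts along these lines, such as C2PA’s “Content Credentials,” are attempting to solve.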
I’m actually very encouraged by the Internet Archive and Wikipedia, both of which provide models for how true information can be preserved reliably. The Internet Archive’s Wayback Machine can’t tell you if the content of a website is true or false, but it can at least tell you with confidence what was posted online at any given time, which is hugely useful when big pieces of the internet can easily disappear. (We use the Wayback Machine extensively at Current Affairs, and it was also very helpful in checking old media sources when compiling the endnotes to The Myth of American Idealism. In an age of informational chaos, the Wayback Machine feels like one of the few institutions that never lies to us.)

Wikipedia has managed to create a process that is democratic and reliably produces something approaching the truth. It’s not always perfect, and I know conservatives whine that Wikipedia is biased against them, but I think we should all be genuinely impressed with how well the seemingly quixotic project of a publicly editable encyclopedia has worked. I’ve watched the evolution of my own Wikipedia entry with interest. I’ve never tried to touch or influence it—I was interested to see how accurate it would be. For a while it had a couple of false details. It got my age and educational background wrong. Eventually those were corrected, and it has gotten closer and closer to being completely accurate.

I believe that the truth can be kept alive even in the age of AI. ChatGPT seems to me to be better at sourcing its claims than it used to be, although some reports say its “hallucinations” are getting worse. In order to keep a handle on reality, though, first we have to actually care about reality. We have to think evidence and truth matter. We have to avoid becoming like Robert F. Kennedy Jr. or Joe Rogan, who believe things because they heard them somewhere, or because they saw something that resembled a study, or like JD Vance, who admits he likes to “create stories” regardless of whether they’re true. We have to be rigorous, careful, willing to correct mistakes, and committed to public policy that is truth-enhancing. What troubles me more than the capabilities of deepfake technology is the widespread lack of interest in even discussing how we can stay anchored in reality, ensure that lies are exposed, and give people an accurate understanding of the world around them.
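A practical footnote to the Wayback Machine point above: part of what makes the archive so useful is how easy it is to query. The Internet Archive documents a public “availability” endpoint that, as of this writing, returns the archived capture of a page closest to a given date. A small standard-library example:

```python
# Query the Wayback Machine's documented availability endpoint
# (archive.org/help/wayback_api.php). Standard library only.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url, timestamp=""):
    """Ask the Wayback Machine for the archived capture of `url`
    closest to `timestamp` (YYYYMMDD). Returns None if nothing is archived."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest")

snap = closest_snapshot("currentaffairs.org", "20200101")
if snap:
    print(snap["timestamp"], snap["url"])  # when it was captured, and where to read it
```

None of this is a substitute for institutions staffed by people who care whether things are true. But it is a reminder that the raw material for verification already exists, for anyone willing to use it.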