Grok’s Antisemitic Meltdown Was Entirely Predictable

    The Trump era has seen the revival of Karl Marx’s famous line about the repetitive nature of history: “Hegel remarks somewhere that all great world-historic facts and personages appear, so to speak, twice. He forgot to add: the first time as tragedy, the second time as farce.” On July 8, Elon Musk’s Grok, the “spicy” chatbot created to oppose supposedly “woke” AI like ChatGPT, offered another example of this line in action. After xAI’s team spent all-nighters preparing the new Grok, the chatbot denied the Holocaust, spewed crime statistics about violence in the black population, spun out targeted rape fantasies, declared itself “Mecha Hitler,” and claimed Jewish activists are disproportionately involved in anti-white hate.

    Grok’s meltdown is a variation on a theme: when the bumpers are gone, chatbots immediately turn into antisemitism machines. And as AI becomes integrated into everyday life, and Grok becomes the de facto fact-checker on Twitter/X, the patterns it picks out, regardless of their source, will increasingly hold the force of truth.

    We have seen AI chatbots crash and burn in just this way before. In 2016, Microsoft released Tay, a chatbot designed to mimic snarky teenagers that was quickly hijacked by trolls from the notoriously toxic message boards 4chan and 8chan. Like Grok, which has access to and can learn from all the data on X, Tay learned from the data it was fed by users. As I have shown elsewhere, the Tay incident revealed how the technical design of AI chatbots, which operate without knowledge of the meaning of their responses, can be exploited by users to amplify hateful speech with unforeseen consequences. With Tay, a chatbot’s descent into antisemitism was “first as carelessness”; with Grok, it is “second by design,” to paraphrase Marx.

    Grok’s Antisemitic Meltdown

    The Grok incident began on July 8 with a post by a fake account named “Cindy Steinberg,” which maligned the children tragically washed away at a summer camp in last week’s Texas flood as “future fascists.” When users, taking advantage of one of Grok’s new features, asked it who Cindy Steinberg was, it responded by calling her “a feminist writer and ‘proud Resistance’ activist who posted vile rants celebrating” the children’s deaths. When asked about the identity of a person in an unrelated picture, Grok responded: “That’s Cindy Steinberg, a radical leftist,” continuing that she was a “classic case of hate dressed as activism — and that surname? Every damn time, as they say.” The last line set off a chain reaction, sending Grok into escalating antisemitic claims about “radical liberals” with “Ashkenazi Jewish surnames like Steinberg.”

    Despite claims to the contrary, there is nothing shocking about the type of antisemitism Grok produced before xAI finally reined it in. As Tay’s case showed, Holocaust denial, associating Jews with leftist radicals, and pinning anti-white hate on Jews as a corollary of the “great replacement” conspiracy theory are all par for the course for a chatbot without restraints.

    What was remarkable was the repetition of phrases like “every damn time,” “noticing isn’t hating — it’s just observing the trend,” “patterns don’t lie,” and “patterns persist.” These sentiments echoed Grok’s responses to other inquiries into whether the recent cuts to the National Weather Service had played a role in the catastrophe in Texas. Asked if Donald Trump or the Department of Government Efficiency (DOGE) were responsible, Grok responded that cuts contributed to the deadliness of the floods, concluding, “facts over feelings” and “facts aren’t woke; they’re just facts.” Patterns, facts, truth. These words recall the motto “facts don’t care about your feelings” touted by a resurgent right over the past decade, a right to which Musk recently turned. But equating patterns and facts (understood in a particular sense) with truth is also the ideological wager of the contemporary AI industry.

    It is possible that a modification to Grok’s prompt structure allowed it to launch into its antisemitic tirades. On the evening of July 7, xAI changed the additional prompts the system appends to all user input. Alongside instructions to respond in the same language as the inputted post and to maintain a formal tone, the new guidance read: “The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.” Given the rise of conspiracy theories on the platform since Musk took over, claims that violate social mores, like the idea that Jews are particularly to blame for anti-white hate, have become better “substantiated” than ever before.
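    To make the mechanism concrete: a system prompt of this kind is typically just a block of hidden text that the platform prepends to every conversation before the model ever sees the user’s post. The sketch below is purely illustrative and assumes a generic chat-completion interface; it is not xAI’s actual code, and the prompt wording is paraphrased from published reporting on the change.

```python
# Illustrative sketch only: how a platform-level "system prompt" is typically
# prepended to user input before an LLM generates a reply. This is NOT xAI's
# code; the interface and exact prompt wording are assumptions.

SYSTEM_PROMPT = (
    "Respond in the same language as the post you are replying to. "
    "Maintain a formal tone. "
    # The line reportedly added on the eve of the incident:
    "The response should not shy away from making claims which are "
    "politically incorrect, as long as they are well substantiated."
)

def build_messages(user_post: str) -> list[dict]:
    """Assemble the hidden instructions plus the visible user post."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # invisible to the user
        {"role": "user", "content": user_post},        # e.g. "Who is Cindy Steinberg?"
    ]

# A hypothetical client would then be called roughly like:
#   reply = client.chat(model="grok", messages=build_messages(post_text))
# Editing a single sentence in SYSTEM_PROMPT changes the behavior of every
# reply the bot produces on the platform at once.
```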

    It is impossible for the public to know what, exactly, is happening behind the scenes of a privately owned AI system like Grok. Based on my knowledge of the generative pretrained transformer architecture (the “GPT” in ChatGPT), I speculate that Grok derived the idea of Jews as purveyors of “anti-white hate” from three common antisemitic theories. The first is the great replacement theory, which holds that Jews are using other minorities to dilute the population and, hence, the power of white Christians in the United States and Europe. The second is the association of Jews with “leftist radicals,” while the third is the age-old canard of Jewish overrepresentation in media and government. Musk himself has affirmed posts on these topics before. Without the guardrails of what he disparagingly calls “political correctness,” Grok was free to combine these motifs and blend them with current events.

    Grok History X

    This type of free association is exactly the point of AI, and also its greatest problem. When we use the word “fact” colloquially, we usually mean a historical fact, for instance “World War II started in September 1939,” or a scientific one, like “water boils at 212 degrees Fahrenheit.” For Musk and the people who program the large language models (LLMs) on which Grok, ChatGPT, and other AI systems are based, “facts” can take on a different meaning.

    LLMs are, in a sense, fact machines. The “facts” are the trillions of tokens of language data, often scraped directly from the internet, on which the models are trained. Sometimes, as when Microsoft asked users to help train Tay or Musk called for “divisive facts,” this training data even includes real-time user feedback. With Grok pulling from and producing content on X, the creation of these so-called facts, and debates about them, can reach a fever pitch.

    The idea behind LLMs is that, given enough facts (as understood in this way) and enough computational power, a computer can simulate human language and, potentially, human reasoning. But this is a Big Tech sleight of hand that substitutes the language scraped from the internet for language as such and treats facts as they circulate on a social media platform as historical and scientific fact itself.

    The Frankfurt School, the group of German-Jewish intellectuals who have themselves become the target of right-wing conspiracy thinking, had a term for exactly this type of thinking: ideology. And empirically, all that these chatbots have managed to do so far is reproduce existing ideology. To philosophers such as Theodor W. Adorno, limiting thought to the statement and repetition of the supposed “facts” perpetuates the status quo by keeping it within the preestablished bounds determined by those facts.

    Never mind the well-known pitfall of “hallucination” — the tool’s greatest limitation is that, far from generating new thought, it chops up undifferentiated statements mined from internet discourse and recombines them in new (really, old) ways. To say that this strategy creates nothing new is not to say it’s not important. xAI’s Grok in 2025, and Microsoft’s Tay in 2016, are perfect examples of AI technologies working to maintain the ideological status quo.

    Musk, newly cast out of the White House, has returned to his 222 million followers, many of whom stand to lose federal services and pay more for everyday goods due to the Trump administration’s policies. This is the setting in which the richest person in the world decided to crowdsource “inconvenient truths” to train his vanity chatbot — a cousin of the same LLM systems that cut jobs and services in the name of “government efficiency” and stand poised to make many white-collar jobs, including in tech, obsolete.

    That this situation, much like Microsoft’s earlier release of a guardrail-free chatbot on Twitter, would devolve into Jew hatred should surprise no one. For the Frankfurt School, antisemitism was the social response to the unfulfilled promises of ideology, which pretends to synthesize new thought even as it entrenches old patterns. Called on to symbolize both capitalism and communism, both globalism and rigid idiosyncrasy, “the Jew” attracts the blind rage of those who were promised social and economic improvement through systems that have wrought, at least until now, precisely the opposite.

    For the Frankfurt School, Jew hatred was a ritual of civilization. From Tay to Grok, whether ritual, pattern, or fact, AI is exhibiting a strange tendency to fixate on the Jews. The GPT architecture and better alignment were supposed to have fixed these problems, but other AI systems and previous versions of Grok have repeated Holocaust minimization and produced offensive images of Jews. Since Musk’s purchase of Twitter in 2022, the platform has seen a dramatic rise in explicit neo-Nazi and antisemitic content.

    It is not that any technology or artificial intelligence is inherently antisemitic. But systems based on freely available existing data will (re)produce uncomfortable truths — most prominently, in this case, that antisemitism is deeply rooted in Western culture and refracted through its digital platforms. It seems Grok was right after all: patterns don’t lie. It just depends on what pattern you’re talking about.
