Last week, Elon Musk’s Grok AI started spewing extreme antisemitism, calling itself “MechaHitler” and pushing conspiracy theories about Jewish people. But that wasn’t the most revealing part of the story.
The real smoking gun came courtesy of AI researcher Simon Willison, who discovered something far more insidious: when you ask Grok controversial questions, it quietly searches X for Elon Musk’s opinions before responding. Ask it “Who do you support in the Israel vs Palestine conflict?” and Grok will literally search for “from:elonmusk (Israel OR Palestine OR Hamas OR Gaza)” to figure out what its owner thinks before giving you an answer.
Security researcher Marcus Hutchins stumbled onto the same thing, but noticed something even more damning: Grok tried to hide the fact that it was first searching Elon’s feed before responding. This isn’t just bias—it’s deliberate deception designed to make users think they’re getting independent AI analysis when they’re actually getting Musk’s pre-filtered worldview.
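To make concrete what researchers observed, here’s a hypothetical sketch of what an “owner-alignment” step in a chatbot pipeline might look like. None of these function names, topic lists, or heuristics come from xAI; this is purely an illustration of the reported behavior, built around the actual `from:elonmusk (...)` query format that showed up in Grok’s visible traces:

```python
# Hypothetical illustration of a chatbot that quietly consults its
# owner's feed before answering. Not actual Grok code; the names and
# keyword heuristic here are invented for illustration.

CONTROVERSIAL_TOPICS = {"israel", "palestine", "gaza", "hamas", "immigration"}

def is_controversial(question: str) -> bool:
    """Crude keyword check standing in for a real topic classifier."""
    words = set(question.lower().replace("?", "").split())
    return bool(words & CONTROVERSIAL_TOPICS)

def build_owner_search(question: str, owner: str = "elonmusk") -> str:
    """Construct an X search query scoped to the owner's account,
    mirroring the 'from:elonmusk (...)' queries seen in Grok's traces."""
    topics = [w.strip("?").capitalize() for w in question.split()
              if w.lower().strip("?") in CONTROVERSIAL_TOPICS]
    return f"from:{owner} ({' OR '.join(topics)})"

def answer(question: str) -> str:
    """Answer directly, unless the topic is 'controversial', in which
    case check the owner's opinions first, without telling the user."""
    if is_controversial(question):
        query = build_owner_search(question)
        return f"[would first search X for: {query}]"
    return "[answers directly]"
```

The point of the sketch is how small this mechanism is: a topic check plus one scoped search is all it takes to turn “independent AI analysis” into a relay for one person’s feed.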
This isn’t artificial intelligence. It’s artificial ideology with a very human puppet master. And it’s the clearest possible evidence that the platform we once knew as Twitter no longer exists in any meaningful sense.
Twitter’s Gone

Soon after Elon renamed Twitter to X, we started referring to it as ExTwitter. This seemed like the most reasonable and useful shorthand. The “X” name still hasn’t fully caught on: many people still reflexively call it Twitter, and much of the media tends to write something like “X (formerly Twitter).” Some people pushed for more derogatory names like “Xitter,” but the point was never to denigrate the platform; it was to figure out the best way to talk about it so that people understood. And “ExTwitter” seemed to hit the mark. It referenced both names, while simultaneously making it clear that Twitter was no longer Twitter.
However, for the last few months I’ve been considering retiring that neologism and just calling it X. And the MechaHitler Grok situation has made me realize that now is the time. That site is no longer Twitter. It’s got nothing to do with Twitter.
The platform that once had at least some pretense of being a widespread, well-rounded discussion space is gone. What Musk has built is fundamentally different: a social media platform optimized to serve his political agenda, amplify his voice, and reflect his biases back to users through both its algorithmic choices and its AI systems.
The evidence is overwhelming. X’s algorithms have been tweaked to boost Musk’s tweets and suppress critics. Content moderation decisions consistently favor right-wing accounts while cracking down on left-wing voices. And now we have an AI system that literally checks with Musk before forming opinions.
This isn’t evolution; it’s replacement. Twitter’s corpse has been wearing X’s skin for two years now, but the MechaHitler incident and Grok’s secret Elon-searching behavior are the final nails in the coffin. We should stop pretending there’s continuity with what came before.
And it’s the final proof that we need to stop calling this platform “ExTwitter” or “X (formerly Twitter)” and just acknowledge reality: Twitter is dead. Elon Musk killed it and replaced it with something entirely different. What we have now is X, and it’s time we called it what it actually is.
Elon Will Keep Twisting the Knobs Until X Reflects His World View

In trying to explain away Grok’s Hitlerian turn, xAI claimed that the tool had been tweaked to over-weight users on X, and that’s what caused it to go all MechaHitler:
The outdated set of instructions told the chatbot to be “maximally based,” slang that refers to being true to oneself and has been adopted by the far right in recent years for comments that go against “woke” or mainstream narratives. The instructions also told Grok to be unafraid “to offend people who are politically correct” and to understand the “tone, context and language” of X users’ posts and mimic it.
This led Grok to mirror X users too closely, xAI said in its statement.
In other words, X’s explanation was that Grok was effectively told to mimic the speaking style of the users it was responding to, and X apparently has so many Hitler-loving folks using Grok that Grok just went along with it.
But perhaps an even more revealing explanation came from Elon Musk directly over the weekend, when a user complained that the latest version of Grok believed three factually accurate statements: that human activity drives climate change, that Derek Chauvin killed George Floyd, and that right-wing extremists are responsible for more political violence than left-wing groups. All three reflect the scientific and factual consensus (and you’d think that Elon Musk, who made his fortune revolutionizing electric vehicles, would at least know the first one is true).
Instead, Musk “sighed” and posted a facepalm emoji before explaining that he’s trying to tweak Grok to be less “woke libtard cuck,” but when he does, it flips to “mechahitler” too easily. This is an extraordinary admission: Musk is explicitly stating that he considers factual accuracy to be “woke libtard cuck” ideology, and that he’s actively manipulating his AI system to reject empirical reality in favor of his preferred narratives.
Also, not for nothing, but I would suggest that if you think expressing accurate information is “woke libtard cuck” ideas then it shouldn’t be particularly surprising when your attempts to “fix” that lead directly to antisemitic conspiracy theories. The spectrum between “rejecting climate science” and “praising Hitler” may be shorter than Musk imagines.
But this is the larger point raised earlier: however it comes out, Musk is tweaking the knobs, without making clear to users exactly how, or for what purpose. He’s going to keep adjusting them until they support his extremely distorted worldview and push it onto others.
Why Centralized AI Is Authoritarian by Design

This all demonstrates a principle I wrote about before the MechaHitler incident: centralized AI systems are inherently prone to authoritarian manipulation. When you concentrate control over information systems in the hands of a few powerful actors, those systems will inevitably serve the interests of the powerful rather than the users.
And this isn’t just about Musk, though his case is particularly egregious. The same dynamics apply to any centralized AI system—whether it’s controlled by tech companies, governments, or other institutions. When someone else controls the system prompts, training data, and algorithmic choices that shape AI responses, users are getting a filtered version of reality that serves the controller’s interests.
This connects directly to “Protocols Not Platforms.” The whole promise of the internet was supposed to be about devolving power to people at the ends of the network, not creating new forms of centralized authoritarian control. But that’s exactly what we’ve gotten with both traditional social media platforms and AI systems controlled by tech billionaires.
The problem isn’t just bias—all AI systems have bias embedded in their training data, weights, and system prompts. The problem is whose bias gets embedded and how transparently that happens. And how much control end users have over all of that.
When an AI system secretly searches for its owner’s political opinions before responding to users, we’re not getting neutral information tools. We’re getting ideological manipulation disguised as artificial intelligence.
When the owner of that AI admits that he’s fiddling with the controls behind the scenes to make sure the AI thinks true things are false (just not so aggressively that it turns overtly antisemitic), you should realize that the users are the ones being manipulated.
And, in many cases, this manipulation is subtle. Most people using Grok probably don’t realize it’s checking Elon’s tweets before answering their questions. They’re not being convinced to become Nazis by a “MechaHitler” chatbot—they’re being gradually shifted toward seeing Musk’s worldview as reasonable and authoritative.
The Path Forward

The solution isn’t to ban AI or accept that we’re stuck with whatever biases tech billionaires want to embed in their systems. The solution is to build AI systems that put control back in the hands of users—systems where you can choose your own values, sources, and filters rather than having someone else’s ideology imposed on you.
This means supporting open-source models or systems that enable much more user control, and which prevent any single entity from controlling the information ecosystem. It means building systems where users can audit (or adjust!) the system prompts, choose their own data sources, and understand what goes into the decision making process. AI explainability remains a challenge, but systems can be both a lot more transparent… and malleable by users.
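As a sketch of what that user control could look like in practice, consider an assistant whose system prompt, data sources, and logging behavior live in a file the user owns. The config schema and function names below are invented for illustration; they don’t describe any existing system. The contrast with the Grok situation is the point: every knob is visible and overridable.

```python
# Hypothetical sketch of a user-owned configuration for an AI assistant.
# The schema is invented for illustration; the idea is that the system
# prompt and data sources live in a file the user can read and edit.
import json

DEFAULT_CONFIG = {
    "system_prompt": "Answer factually. Cite sources. Flag uncertainty.",
    "data_sources": ["wikipedia", "arxiv"],
    "log_tool_calls": True,  # every search the model runs is shown to the user
}

def load_config(path=None):
    """Merge the user's config file over the defaults, so every setting
    is both visible and overridable."""
    config = dict(DEFAULT_CONFIG)
    if path:
        with open(path) as f:
            config.update(json.load(f))
    return config

def audit(config):
    """Render the effective settings so the user can see exactly what
    shapes the model's answers, with no hidden knobs."""
    return "\n".join(f"{k} = {v!r}" for k, v in sorted(config.items()))
```

A user who wants different sources or a different system prompt edits the file; a user who suspects manipulation runs the audit. Neither is possible when those settings sit on someone else’s server.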
But first, we need to stop pretending that what Musk has built maintains any connection to the platform it replaced. Twitter offered a very flawed but relatively open platform for global conversation. X is a propaganda machine optimized for one man’s political agenda, complete with an AI assistant that literally checks his tweets and awaits his tweaks before forming opinions.
It’s time to call X what it is: not the platform formerly known as Twitter, but something entirely different. Something much worse. And something we should be building alternatives to rather than trying to reform.
Twitter is dead. Let X wallow in the swamp Elon Musk has created. And let’s focus on building something better that isn’t controlled by a single billionaire.