OpenAI CEO Sam Altman expressed concerns on Monday about the growing influence of bots on social media platforms.
He noted that the spread of automated accounts has blurred the line between human and machine-generated posts, making it harder to assess authenticity in online conversations.
Subreddit activity prompts doubts about authenticity

Altman’s comments followed his experience in the r/Claudecode subreddit, a forum dedicated to Anthropic’s Claude Code programming tool. The forum saw a surge of posts praising OpenAI’s rival tool, Codex, with many users claiming to have switched from Claude Code. The volume of near-identical messages prompted Altman to suspect bot-driven amplification.
One user joked, “Is it possible to switch to Codex without posting a topic on Reddit?”—highlighting how repetitive the posts had become. Altman admitted that while Codex adoption is real, the discourse “feels fake/bots” even when genuine growth exists.
Factors fueling the perception of fake engagement

Altman pointed to several dynamics shaping this environment, among them real users who have picked up the quirks of LLM-speak, as he outlined in the post quoted below.
He cited the rollout of GPT-5 as another case where online feedback seemed unusually negative, raising the possibility of manipulation.
Social platforms face record levels of bot activity

Altman’s comments come amid data showing that bots account for a majority of internet traffic. Imperva reported that more than half of online activity in 2024 was non-human. On X, the platform’s own AI assistant, Grok, estimated that hundreds of millions of bots were operating last year.
i have had the strangest experience reading this: i assume its all fake/bots, even though in this case i know codex growth is really strong and the trend here is real.
i think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely… https://t.co/9buqM3ZpKe
— Sam Altman (@sama) September 8, 2025
This aligns with Altman’s conclusion that “AI Twitter/AI Reddit feels very fake in a way it really didn’t a year or two ago.”
Speculation about OpenAI’s social ambitions

Some analysts view Altman’s remarks as a signal that OpenAI may be preparing its own social platform.
In April, The Verge reported that OpenAI was exploring such a project to rival X and Facebook. While no product has been announced, the idea raises questions about whether OpenAI could realistically build a bot-free network.
Research from the University of Amsterdam has shown that even bot-only networks devolve into echo chambers, amplifying their own biases. Altman also acknowledged the broader challenge, noting that large language models can hallucinate facts—a problem that persists regardless of whether posts come from humans or machines.