
Anthropic released four Super Bowl commercials on Wednesday taking aim at OpenAI, following OpenAI’s announcement that ads are coming to ChatGPT’s free tier. One ad depicts a chatbot whose advice shifts into promotions for fictitious products, prompting a response from OpenAI CEO Sam Altman.
The first commercial opens with the word “BETRAYAL” displayed prominently on screen. A man asks a chatbot, portrayed by a blonde woman and clearly meant to represent ChatGPT, for guidance on how to talk to his mother. The chatbot offers standard suggestions, such as starting by listening and proposing a nature walk, then pivots to advertising a nonexistent cougar-dating site named Golden Encounters. Anthropic closes the ad by stating that while advertisements are arriving in the AI sector, they will not appear in its own chatbot, Claude.
A second commercial shows a slight young man asking for advice on developing a six-pack. He gives the chatbot his height, age, and weight; in response, the bot recommends height-boosting insoles. The spots directly reference OpenAI’s recent decision to introduce advertisements on the free version of ChatGPT.
OpenAI made this announcement prior to the ads’ release, outlining plans to incorporate sponsored content into the platform. The commercials generated widespread media attention, with headlines describing them as Anthropic mocking, skewering, or dunking on OpenAI.
Sam Altman, CEO of OpenAI, reacted on X, formerly Twitter. He acknowledged laughing at the advertisements but went on to post a lengthy response in which he labeled Anthropic dishonest and authoritarian. Altman said the ad-supported tier is meant to keep ChatGPT free for millions of its users. ChatGPT remains the most widely used chatbot by a substantial margin.
First, the good part of the Anthropic ads: they are funny, and I laughed.
But I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won’t do exactly this; we would obviously never run ads in the way Anthropic…
— Sam Altman (@sama) February 4, 2026
Altman contested the portrayal of advertisements in Anthropic’s commercials, arguing that it falsely suggests ChatGPT would alter conversations to insert ads, potentially for inappropriate products. In his post, Altman wrote, “We would obviously never run ads in the way Anthropic depicts them. We are not stupid and we know our users would reject that.”
OpenAI has specified that upcoming ads will remain separate from interactions, be clearly labeled, and not influence any chat. The company plans to place them at the bottom of responses when a sponsored product or service relates to the ongoing conversation. As detailed in OpenAI’s blog post, “We plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.” This approach is OpenAI’s answer to the core premise of Anthropic’s ads, in which promotions emerge directly from user queries.
Altman extended his criticism to Anthropic’s business model. He claimed, “Anthropic serves an expensive product to rich people.” In contrast, he emphasized OpenAI’s commitment to delivering AI to billions unable to afford subscriptions. Claude, Anthropic’s chatbot, offers a free tier alongside paid plans: $17 per month for the standard Pro subscription, $100 for a higher tier, and $200 for the top level.
ChatGPT offers a comparable pricing structure: $0 for the free tier, $8 for ChatGPT Go, $20 for ChatGPT Plus, and $200 for the top-end ChatGPT Pro plan. These tiers show structural similarities between the two services despite Altman’s characterization.
Altman further accused Anthropic of seeking to control AI applications. He alleged that the company blocks access to Claude Code for certain organizations, including OpenAI itself, and imposes restrictions on permissible AI uses. Anthropic’s marketing has consistently highlighted responsible AI development since the company’s founding.
The company was founded by former OpenAI employees, including siblings Dario and Daniela Amodei, who left in part over concerns about AI safety. Both Anthropic and OpenAI maintain usage policies, AI guardrails, and safety protocols. OpenAI permits erotica generation in ChatGPT for verified adult users, whereas Anthropic prohibits it. Each company restricts specific content areas, such as certain mental health-related topics.
In his post, Altman escalated the rhetoric by describing Anthropic as authoritarian. He wrote, “One authoritarian company won’t get us there on its own, to say nothing of the other obvious risks. It is a dark path.” This exchange underscores tensions between the rivals amid their competition in the AI chatbot market.