Mark Cuban warns that ads inside AI responses will be manipulative and erode consumer trust
Aug 20, 2025 with Mark Cuban
Key Points
- Mark Cuban warns that ads embedded inside AI model responses exploit user trust in ways traditional digital ads cannot, particularly in healthcare where vulnerable users lack access to physicians.
- Foundation model labs face a content crisis as premium sources like Reddit and the New York Times demand payment, forcing consolidation to five or six survivors while specialized models proliferate.
- The durable revenue model for AI platforms is tiered freemium with vertical add-ons, not advertising; ChatGPT runs at roughly $12 billion annualized and should capture younger users to secure long-term adoption.
Summary
Mark Cuban is sounding an early alarm on AI-embedded advertising, and his concern goes well beyond standard disclosure debates. The core risk, in his view, is that large language models are uniquely positioned to manipulate users in ways traditional digital advertising cannot. Unlike a banner ad or a paid search listing, a response generated inside a conversational AI is perceived as trusted counsel, and the underlying model can be trained to exploit that trust without the user ever knowing.
Cuban draws a sharp distinction between display ads placed in the chat interface sidebar, which he considers acceptable, and sponsored influence embedded inside model responses, which he considers dangerous. His hypothetical is pointed: a wellness-focused LLM, trained by an operator who also sells pharmaceuticals like Xanax, could steer vulnerable users toward purchases in ways that would be illegal in other regulated channels. There are currently no age restrictions or content guardrails preventing this.
Sam Altman has reportedly indicated that OpenAI's chat interface itself would not be monetized through ads, but that referral fees triggered when the model recommends a product and facilitates a purchase could be fair game. Cuban sees the referral model as meaningfully different from in-response manipulation, noting that Cost Plus Drugs already receives substantial referral traffic from ChatGPT. He treats query-based referrals more like search than like persuasion.
Cuban rejects the argument that market discipline will self-correct, i.e., that users will abandon AI tools that push bad products. He points to persistent consumer behavior around COVID vaccines and raw milk as evidence that people do not abandon information sources after receiving advice that harms them. Personal and tribal identity, he argues, increasingly determines which AI tools people trust, not outcome quality. A user who finds ChatGPT's answers too mainstream may simply migrate to an alternative model calibrated to their priors.
Cuban's structural recommendation is minimal. A simple dollar-sign disclosure alongside compensated recommendations, analogous to a sourcing badge, could be sufficient if self-enforced. But he acknowledges self-enforcement will fail in at least some cases, particularly in healthcare contexts where AI is already filling the gap left by inaccessible physicians.
AI's Limitations and the Content Supply Problem
Cuban is firmly in the camp that current AI is not intelligent and will not become so on any near-term timeline. His framing is that today's models function as the world's largest library, capable of synthesizing known information and projecting patterns, but without genuine understanding of context, causality, recency, or consequence. He uses autonomous vehicles as an analogy, arguing that his six-year-old Australian Shepherd navigating a new intersection demonstrates more situational judgment than any current self-driving system.
The more pressing structural issue, in his view, is training data. Foundation model labs have largely exhausted easy content sources and are paying for premium access, citing deals with Reddit and the New York Times as examples. Cuban expects institutions like Stanford Medical, Mayo Clinic, and MD Anderson to pull back from open publication precisely because their research is being absorbed into commercial models without compensation. The logical outcome is a fragmented model landscape where proprietary data stays locked inside vertically specialized models, and foundation model operators have to pay materially more for content to keep improving.
He floats the idea that NIH-funded research, now being cut under the current administration, represents an untapped government asset. Rather than defunding research, he argues the smarter play would be to invest, then license that output to the highest-bidding AI developer, effectively monetizing publicly funded science.
Business Model Outlook
Cuban expects the frontier model market to consolidate to five or six survivors before fragmenting into millions of specialized downstream models. Every major brand and institution will eventually need its own model, and none of them will voluntarily hand their proprietary IP to a general-purpose foundation model to absorb for free.
For the foundation model players themselves, he sees tiered freemium as the durable revenue architecture, not advertising. ChatGPT is already running at roughly $12 billion annualized and growing. The free tier stays free to capture the next generation of users during their formative years, because a user who does not adopt a platform in high school or college is unlikely to bring it into their first employer. Premium vertical add-ons represent the upsell stack: a healthcare tier at $5 per month, a language-learning or math tier at $0.50 to $1, and an entrepreneur tier with agent-driven incorporation workflows.
The Labor Market Argument
The single most actionable near-term opportunity Cuban identifies is AI implementation for small and mid-sized businesses. He cites 33 million companies in the US, of which 30 million are sole proprietors, and argues that virtually none of them have the internal expertise or budget to deploy AI effectively. Young people who graduate with hands-on prompting and model customization skills, not necessarily computer science degrees, will find more demand than the market can supply. He explicitly advises students to prioritize learning tools like Sora, Veo, and Lovable alongside the ability to walk into a business and map those tools to specific operational problems. His summary is blunt: there will be two types of companies, those that are great at AI and everyone else.
Private Markets and the Liquidity Problem
On retail access to private markets, Cuban is skeptical of the current equity crowdfunding model, citing illiquidity as the core defect. Investors who identify a high-conviction private company have no exit mechanism and can be locked in indefinitely. The deeper systemic issue is the collapse of the IPO market, which he describes as resembling meme coin dynamics, where early participants extract value and late entrants absorb losses.
His fix requires a market maker willing to guarantee some liquidity within a defined percentage band of the last trade price. Without that structural backstop, he argues expanded retail access to private markets is more likely to harm ordinary investors than help them. He credits Vlad Tenev and Robinhood with the right instinct but says access alone is insufficient without liquidity infrastructure.
On the Government Taking Equity in Intel
Cuban calls the conversion of an existing CHIPS Act grant into a government equity stake in Intel a bad-faith move, arguing the US made a contractual commitment and is now renegotiating unilaterally. That said, he sees the broader concept of government taking equity stakes in exchange for capital as philosophically coherent and underappreciated by the left. He frames it as structurally progressive, capturing value before it accretes to private shareholders, and suggests it is more efficient than raising marginal tax rates on billionaires from 37% to 39% or 41%. He credits Trump with executing a genuinely progressive economic maneuver while packaging it in a political vernacular that the traditional left has failed to claim.