Interview

Groq COO Sunny Madra on the White House AI dinner, the American AI stack, and the Saudi inference cluster

Sep 8, 2025 with Sunny Madra

Key Points

  • The White House is positioning itself as a direct unblocking mechanism for AI companies on export controls, tariffs, and regulatory fines, with David Sacks named as a key contact.
  • Groq launched 14 data centers in 2025 and operates the largest inference cluster in the Middle East through a partnership with HUMAIN/Aramco Digital, serving Asia, Europe, and the Middle East.
  • Groq positions itself as the inference layer within the American AI stack alongside Nvidia, AMD, Qualcomm, and Broadcom, arguing that export controls should target full stacks rather than chips alone.

Summary

Sunny Madra, COO of Groq, joined from the All-In Summit to cover three threads: the White House AI dinner, Groq's positioning in the American AI stack, and the company's Saudi inference cluster.

White House dinner

Madra describes the dinner as substantive rather than ceremonial. The session moved from the Rose Garden to the Roosevelt Room after rain, with attendees getting time in the Oval Office and one-on-one moments with the president. The practical upshot, in Madra's telling, is that the administration is positioning itself as a direct unblocking mechanism for AI companies — specifically on export controls, tariffs, and regulatory fines — with David Sacks named as a key contact. The aggregate capex commitments discussed at the table ran into the trillions, with the administration framing downstream manufacturing and construction impact as a core rationale for its support.

The American AI Stack

Madra says Groq published its version of the American AI stack roughly two weeks before the dinner, timed to the Commerce Department's effort to define what the stack should look like. The document is structured in layers from compute at the base up through inference and application tooling. At the compute layer, Madra places Groq alongside Nvidia, AMD, Qualcomm, and Broadcom. His argument for the stack framing is essentially regulatory: a chip alone is too narrow a unit for export control decisions. Partners buying U.S. AI assets need the full stack or they end up with, as Madra puts it, very expensive paperweights. Groq is pitching itself as the purpose-built inference layer within that stack.

Saudi cluster and European expansion

Groq has launched 14 data centers in 2025. Its primary non-U.S. anchor is a cluster in Saudi Arabia, built in partnership with HUMAIN/Aramco Digital, which Groq describes as the largest inference cluster in the Middle East and, it believes, in Europe as well. The cluster serves Asia, Europe, and the Middle East, and Madra says demand has been strong enough that Groq is now actively expanding capacity there.

ASML / Mistral

On ASML's investment in Mistral, Madra offers a reading that goes beyond the national-champion narrative. His view is that ASML's real incentive is operational: embedding with a frontier model maker gives it a path to use AI for genuinely hard internal problems — new materials, lithography improvements, power consumption optimization — that it cannot solve at arm's length. The analogy he draws is to xAI's original thesis that companies should use AI to improve their own core products. Madra frames this class of deep industrial AI partnership as underappreciated today but increasingly where the value will concentrate.