Jay Parikh (Microsoft Core AI EVP): 1% of software ever written, the Hoover Dam analogy, and AI adoption pitfalls for Fortune 500 CEOs
Oct 28, 2025 with Jay Parikh
Key Points
- Microsoft's Jay Parikh argues that less than 1% of all software that will ever be written exists today, and that large language models could unlock a decade of code generation that dwarfs computing's first fifty years.
- Most Fortune 500 companies shouldn't train AI models on proprietary codebases; the real edge lies in commit history and pulling inference earlier into design conversations before work begins.
- GitHub's 180 million developers will expand to include product managers, designers, and marketers as AI lowers the cost and complexity of turning ideas into running software.
Summary
Jay Parikh, EVP of Core AI at Microsoft, argues that less than 1% of all software that will ever be written has been written so far. The Hoover Dam analogy is his frame: the accumulated intelligence sitting inside large language models is like 9.3 trillion gallons of water behind a dam, and unlocking it requires a prolific expansion of software creation over the next decade that dwarfs everything produced in computing's first fifty years.
The original Copilot insight is worth revisiting. The team built it to generate documentation, not code; engineers hate writing docs, and to the model, documentation and code are just different arrangements of the same characters. When they flipped it to code and introduced ghost text, the realization was immediate: the tool required no behavior change from the developer. You just started typing and suggestions appeared. Parikh traces everything since back to that moment, including the agentic advantage in software specifically: code can be executed and verified, which makes software a stronger domain for agents than most.
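The verifiability point can be made concrete with a toy harness: execute a candidate implementation (say, model-generated) against test cases derived from the spec, and accept it only if every case passes. This is an illustrative sketch, not anything from Copilot; the `solution` function name and the tests are assumptions for the example.

```python
# Toy illustration of why code is a verifiable domain for agents:
# a generated candidate can be run against tests, yielding an
# objective accept/reject signal rather than a subjective judgment.

def verify(candidate_source: str, test_cases) -> bool:
    """Exec the candidate and check it against (args, expected) pairs."""
    namespace = {}
    try:
        exec(candidate_source, namespace)   # define the candidate function
        fn = namespace["solution"]          # hypothetical agreed-upon entry point
        return all(fn(*args) == expected for args, expected in test_cases)
    except Exception:
        return False                        # any crash counts as failed verification

# A hypothetical "generated" candidate and its spec-derived tests.
candidate = "def solution(a, b):\n    return a + b\n"
tests = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]

print(verify(candidate, tests))  # True: the candidate passes all cases
```

A prose answer has no analogous check; the objective pass/fail signal is what makes agent loops in software unusually easy to supervise.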
Enterprise pre-training for coding agents
On whether large enterprises should train models on their proprietary codebases, Parikh is direct: most shouldn't bother. Companies insist their code is unique, but in practice it isn't, and fine-tuning on a standard Django or Java project produces only marginal improvements. The exception is legacy COBOL and mainframe code at real scale — Parikh says Microsoft has had conversations with companies about contributing 100 million lines of COBOL, where volume genuinely moves model quality. For everyone else, the more interesting signal sits in the commit history: why a choice was made, not just what the code says.
The upstream inference problem gets less attention than it should. Parikh's argument is that "garbage in, garbage out" undermines most multi-agent abundance strategies. Spinning off five parallel agent tasks only helps if the original specification was solid. His proposed fix is pulling inference earlier — into the Zoom call, the Linear ticket, the design conversation — so Copilot is already asking clarifying questions before a single line of work begins, rather than waiting to be invoked.
What Fortune 500 CEOs get wrong
Parikh's advice for enterprise AI transformation breaks into three parts. First, get specific about outcomes — "transform my business" is not a goal, and the right diagnostic is understanding whether the target is cost reduction or top-line growth. Second, stop reading and start building internal adoption; cultural risk tolerance determines how fast the organization can actually move. Third, raise ambition further than feels comfortable, because the models are improving faster than most organizations can track, and whatever target feels ambitious today almost certainly isn't.
On the gen AI versus classical ML question, Parikh flags a real failure mode: organizations treating gen AI as a universal hammer when mature, optimized machine learning systems would be faster, cheaper, and more accurate for certain tasks. The two disciplines are blurring, but not so much that the distinction stops mattering.
The developer label
Jared Palmer, who left Vercel to join Microsoft as VP of Product for Core AI and SVP of GitHub just 13 days before this conversation, makes the case that the underlying infrastructure of software (how packages are distributed, tested, and built) is changing far more slowly than the models sitting on top of it. That durability means human involvement in software creation persists for the foreseeable future, even as what people build becomes more ambitious. The analogy is the smartphone camera: professional skill stopped being the barrier to taking a photo, but photographers didn't disappear.
GitHub now has 180 million developers, with new accounts joining every second. Parikh's bet is that the definition of developer expands well beyond engineers — product managers, designers, marketers — as the cost and complexity of turning an idea into running software continues to fall. The internal team motto: more demos, less memos.