Naveen Rao's Unconventional AI raises $475M seed to build 1000x more efficient AI hardware from first principles
Dec 9, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Naveen Rao
Our next guest is in the Restream waiting room. First, let me tell you about adquick.com. Out-of-home advertising made easy and measurable. Plan, buy, and measure out of home [music] with precision. We have Naveen Rao, CEO of Unconventional AI, in the Restream waiting room. Naveen, welcome to the show.
Thank you. Thanks for having me. Let's jump right into an introduction on yourself and the companies you've built in the past, and then we'll get into Unconventional.
Yeah. So the first company I built in the AI space was called Nervana. It was actually the first AI chip company. That was back in 2014, before most people knew what machine learning was.
Pretty tough.
A little too early,
but you're back.
I think it was actually decent timing, to be honest with you. But I think I sold too early, to be honest. The second company moved more toward the software and algorithmic side; it was called MosaicML. We sold to Databricks back in 2023, and I led AI at Databricks up until just a couple of months ago, when I left to start Unconventional.
Incredible. So yeah, jump right into it. Talk about the vision. I feel like you guys hit the perfect amount of controversy yesterday, because people were latching on to [laughter] not a seed round at a $500 million valuation, but $500 million of capital invested at the seed round. It was honestly perfect. You want a little bit of spice on the timeline to get attention, and I'm sure there's been a ton of inbound interest, both from the candidate side and from future customers.
Yeah, I mean, you know, obviously that was somewhat intentional. We wanted to have a big, shocking moment. But I think it's actually pretty rational when you think about the opportunity ahead of us. What we're doing at Unconventional is rethinking how a computer works. The computers we use today, their fundamental abstractions have been around for nearly 80 years. It's kind of weird in the tech industry to think about something being around for 80 years, but it really is true. And now we've come to the point where we've pushed that paradigm as far as we can push it. Fundamental constraints around energy are now hitting us at the global level. If we keep scaling AI like we're doing, and I think AI is amazing, I use it all the time and I think the rest of the world is going to do that too, we can't get there. We're going to run out of energy to scale these things up, because it requires very power-hungry chips. So really we're saying: can we rethink the paradigm from the circuit level up and build something that's vastly more efficient, like a thousand times more efficient than what we've been building, and focus only on AI? We don't need to do accounting software and artillery calculations and all the things that traditional computers do. We'll let that stay in the realm of digital machines. But can we build something that's much more efficient as a substrate for AI? So the opportunity ahead of us, I think, is nearly infinite, and that's why the valuation can actually make some rational sense.
So what does your supply chain look like, or what do you think it will look like in the near future?
Yeah, I mean when we started this project there were two fundamental constraints I put on it. One is that we have to be able to solve the problem within 5 years
because this problem is going to hit us in three or four years and we need to have a solution ready.
Two, when we have a solution, it can't be a case of, I've solved the science problem, now what? We have to be able to manufacture it. So we have to have scalable manufacturing. We believe we can solve this problem in a big way within five years, in a way that actually leverages the silicon ecosystem.
So we do want to manufacture it with standard lithography techniques.
Interesting. So ASML could potentially be a partner in the next five years? They're not off the table?
Yeah. TSMC we're talking with already. You know, I was out in Taiwan just a few weeks ago for this purpose. And it doesn't mean that's the be-all end-all. What we're doing is really building a new set of abstractions. A computer is built on a digital abstraction. Can we move away from that, to something that is more amenable to using the fundamental dynamics of the substrate? If we can do that, we open up the world to a whole new set of potential substrates. Silicon is one of them, and maybe there will be more exotic things. I've talked to folks who are building 3D-printed circuits and all kinds of crazy stuff. So I think the world is going to be very rich in the next 20 years in terms of new capabilities. Right now, we've got to leverage what we have.
Yeah. I mean, in terms of leveraging what we have, I feel like there's been movement from, okay, Nvidia GPUs or bust, everyone's just doing that. Then we got some serious movement this year from AMD stepping it up. We got TPUs. Trainium's looking pretty good. These ASICs are doing well, and there's a whole crop of ASIC startups saying, hey, we're going to bake the transformer right onto the silicon, and they're going with TSMC. Can you help me understand how you're thinking about creating something that's higher performance but still leaves enough flexibility to actually work with whatever the next algorithmic paradigm looks like?
Yeah. I mean, one part of what we're doing is very deep co-design. So we're not saying, here's a transformer and it's immutable. We're not looking at it like that. The way you define the neural network itself is something we're going to change and actually link to the hardware. And if you look at biology, there is no difference between the definition of the neural network and the physical substrate. They're one and the same.
So the dynamics, the physics, of those neurons actually gives you the algorithmic richness that you have.
So we want to move more in that direction. It's not this abstraction on a digital machine built out of numerics. All the machines you highlighted just then, which all have a wonderful purpose, by the way, work on the same fundamental abstraction. They all use bits to represent numbers, and those numbers represent weights that are then manipulated in some way to build a neural network. We're actually talking about building those effective numerics on the physics.
Yeah.
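The distinction here, bits representing weights versus numerics built on the physics, can be sketched with a toy model. This is purely illustrative, using the standard analog-crossbar framing from in-memory computing, and says nothing about Unconventional's actual design: a digital machine computes a layer's output arithmetically from numbers stored as bits, while an analog crossbar gets the same dot product directly from Ohm's and Kirchhoff's laws, with weights encoded as device conductances.

```python
import numpy as np

# Digital abstraction: weights are numbers stored in bits, and the
# dot product is computed explicitly, one arithmetic operation at a time.
def digital_layer(weights, inputs):
    return weights @ inputs

# Physics-based sketch: weights are conductances G (siemens) of an analog
# crossbar. Applying input voltages V produces output currents I = G @ V
# "for free": Ohm's law multiplies (I = G * V per device) and Kirchhoff's
# current law accumulates (currents sum on each output wire).
def analog_crossbar(conductances, voltages):
    return conductances @ voltages  # same linear algebra, realized by the substrate

rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, size=(3, 4))  # weights / conductances
x = rng.uniform(0.0, 1.0, size=4)       # activations / input voltages

print(np.allclose(digital_layer(W, x), analog_crossbar(W, x)))
```

The two functions compute the same result; the point of the toy is that in the crossbar case the multiply-accumulate is performed by the device physics rather than by a digital arithmetic unit.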
On the fundamental properties. Do you have a view on, you know, we were reflecting on this Marc Benioff post earlier in the show. He was kind of making the argument that LLMs are going to commoditize, that it's like hard disks and you'll be able to move them around. There's not a lot of value, not a lot of lock-in, at the okay-I've-got-this-bag-of-weights layer. And I'm wondering if you have a view on how the foundation model landscape will evolve over the next five years, because I want to know what your customer mix looks like. But you can tie that together however you like.
Yeah. I mean, transformers are something that works really well today. We were able to scale them up and prove out that we can build synthetic systems that learn. I think that was enormous in the last several years, right? And actually, not just systems that can learn, but systems that can be useful.
So now it's about, okay, we can define what useful is. Now we have these quality metrics for what a token is. That has actually opened up a whole new world, in a sense, where I can say, well, as long as I can supply that quality, I don't really care if it's a transformer. I don't really care what it looks like. And if I can do so very cheaply and very fast, that's all I care about. So now we're really getting to an almost pure-play supply of intelligence. That's what we want. So if that's the abstraction, it's not about a transformer, it's about intelligence, and I can define that with some metrics. Okay, now I can supply that in new ways. And so that's the way we're looking at it. I'm not looking at the world today. I'm looking at the world in four or five years in terms of a product.
Yeah. Can you break down more specifically what you expect your initial customer cohorts to look like?
Yeah. I mean, the way we're going to see all of this expressed is really a much cheaper cost per token, like 1/500th the cost per token or something like that. And I think at first we're going to go after the data center as the fundamental rollout, but anyone who's using intelligence for an application at that point is probably going to have strict requirements on what the capabilities of the models are and on how their token quality is defined. When we have that metric, we can build toward it. We can say, okay, GPT-8 or whatever it is at that point gives me this; is there a way I can recapitulate that in the kind of neural network we're defining? And the answer should be yes. So we will have strict specs, in a way, to build toward, in terms of what that intelligence token really means.
And so really, our customer cohort will be anyone who's using AI as part of their application. And we want to supply that faster, cheaper, and eventually in a more ubiquitous way. Personally, if I really look out there, like 10 years, I think robotics are going to be huge. And I think robotics require us to solve this problem of bringing energy down drastically, to actually have more processing on the bot itself. That's how we're really going to open up this world of autonomy.
And so personally, I think that's where I get really excited. Maybe that's just the kid in me, excited about seeing that future, but I think that's where we're going to go. Yeah, I know you talked a little bit about the size of the seed round, but can you help me understand a little more? $475 million is a ton of money. Is there a bill you see yourself paying, where you're like, yeah, I'm going to spend $200 million on this? Or is it more that you'll just be operating at the level of, okay, we're burning a hundred million dollars a year on a lot of top engineers and AI scientists, it's just a big organization early, and you're jumping to growth-stage company scale as fast as possible?
Not really, no. I don't anticipate the company becoming huge in terms of people. Call it 80 to 100 people at steady state for a while.
Yeah. Uh but much like a frontier lab, like a frontier lab spends a bunch of money on GPUs to iterate.
Yep.
Like, training GPT-5 is maybe 30 million bucks, but you've got to follow a bunch of dead ends, right?
And you do that with GPU compute. Our version of that is actually building hardware. So we're going to be fabricating chips. We're going to be trying different things. It's very hard to model some of this stuff numerically, actually. We're going to do our best to model it, and scale up that modeling on GPUs. But at the end of the day, you've got to build it, you've got to test it, and you've got to see if the whole thing works end to end. And we're going to do that a whole bunch of times.
Yeah. Yeah.
How are you planning to actually use the existing Gen AI stack to accelerate your own development of an alternative to traditional GPUs?
Yeah, actually that's a great question. It's really interesting as we start clicking into the tools for designing hardware: they're still pretty far behind. Automation of digital logic has been done to some degree, almost like coding tools. But on the analog side, this is still relatively new. And we are now seeing companies saying, hey, I can explore the design space of different circuit architectures for you using AI. And we said, great, we want to either work with them or buy them, I don't know, whatever makes sense for us. But yeah, we are trying everything to use the latest techniques to accelerate our exploration of the space. That's the way we're looking at this: how fast can I iterate? How fast can I find working solutions? That's our main goal right now.
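The kind of design-space exploration described here can be sketched as a toy random search. This is an illustrative stand-in, not any vendor's tool: sample component values at random and keep whichever circuit best hits a target figure of merit, here the cutoff frequency of an RC low-pass filter.

```python
import math
import random

# Figure of merit for a toy analog circuit: the -3 dB cutoff frequency
# of a first-order RC low-pass filter, f_c = 1 / (2 * pi * R * C).
def cutoff_hz(r_ohms, c_farads):
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Random search over the design space: sample R and C log-uniformly
# across plausible component ranges and keep the best candidate.
def explore(target_hz, iters=20000, seed=0):
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(iters):
        r = 10 ** rng.uniform(2, 6)      # 100 ohm .. 1 Mohm
        c = 10 ** rng.uniform(-12, -6)   # 1 pF .. 1 uF
        err = abs(cutoff_hz(r, c) - target_hz)
        if err < best_err:
            best, best_err = (r, c), err
    return best, best_err

(r, c), err = explore(target_hz=1000.0)
print(f"R={r:.0f} ohm, C={c:.2e} F, cutoff={cutoff_hz(r, c):.1f} Hz")
```

Real tools in this space replace the blind random sampler with a learned model that proposes promising candidates, but the loop structure, propose, simulate, score, keep the best, is the same.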
Makes sense. What have you learned from racing that you've applied to company building? And what have you learned from company building that you apply to racing?
Oh boy. I think with racing, when you start getting into truly competitive events, you start to see that every tenth of a second matters. Like when you're coming into the pit lane, for instance, you have a pit lane speed limiter, right? I don't think people even know this, but when you come in, you hit this pit lane speed limiter, and once you drop below the pit lane speed limit and hit the throttle, it pegs you at the speed limit. You want to optimize that so you don't even lose a tenth when you come into the pit lane. You hit the brakes at a certain point, slow down, and arrive at pit lane speed. You don't want to slow down ahead of time and ease your way in.
Yeah.
Every tenth of a second matters, even in maybe a 24-hour race. And I think that's true with a company. Don't waste time. Every moment matters. You may make bad decisions; that's okay. Figure out how to back them out. Move as fast as you can. If you hired the wrong person, fix it. So I think the main thing I learned is that time is your biggest enemy, always.
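The pit-entry optimization reduces to simple kinematics. All numbers below are made up for illustration; real speeds, limits, and braking rates vary by series and car. Given a constant braking deceleration, there is a latest possible point to brake from approach speed down to the pit limit, and braking any earlier means traveling below the limit before the line, which is where the tenths go.

```python
# Latest braking point before the pit-lane line: from approach speed v0
# to the pit limit v_lim under constant deceleration a, the stopping
# relation v_lim**2 = v0**2 - 2*a*d gives d = (v0**2 - v_lim**2) / (2*a).
def latest_braking_distance_m(v0_ms, v_lim_ms, decel_ms2):
    return (v0_ms**2 - v_lim_ms**2) / (2.0 * decel_ms2)

v0 = 70.0     # ~252 km/h approach speed (hypothetical)
v_lim = 16.7  # ~60 km/h pit-lane speed limit (hypothetical)
a = 15.0      # ~1.5 g of braking (hypothetical)

d = latest_braking_distance_m(v0, v_lim, a)
print(f"brake no earlier than {d:.0f} m before the limit line")
```

With these made-up numbers the latest braking point is about 154 m out; braking 20 m earlier means covering those 20 m below full speed, and that is the tenth of a second Rao is talking about.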
Love it. Love it. Well, we normally ring the gong for $475 million, but we should also ring the gong for 265 points in the Ferrari Challenge.
There you [laughter] go.
Yes, I'll take it.
They're both massive accomplishments.
Well, we're big fans of the Ferrari challenge around here, so it's a big deal for us.
IMSA is where it's at. Really? That's the big stuff.
Oh, cool. Cool. Yeah.
Yeah. We were out in Thermal a couple of weeks ago, and it was the first time on track for both of us, and it was life-changing. I'm sure, as somebody who shares the hobby, you've had that feeling here and there. So anyway, congrats on the announcement and to the whole team. It sounds like an incredible place to go work right now, and I can't wait to have you back on.
And what a murderer's row: we've got Andreessen, Lightspeed, Sequoia, Lux, DCVC, Future Ventures, Jeff Bezos, Databricks, and many others. What a fantastic way to kick off a new business. Congratulations.
Awesome. Thanks.
Have a great rest of your day. We'll talk to you soon.
Goodbye. [applause]
Uh, let me tell you about wander.com.