Milan Lustig's Opt32 raised $5M to build full-stack compute infrastructure for autonomous robots and drones
Apr 23, 2026 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Milan Lustig
Thiel Fellowship Gigastream. We've seen some suits creep into the tech podcasting world. Job's not finished, but we're making progress. We'll go back to our Thiel Fellowship Gigastream with Milan from Opt32, developing compute infrastructure, from compilers to chips, for physical autonomy and local intelligence. Welcome to the show. How are you?
Good. How are you guys doing? Great to be here.
Thank you so much for hopping on. Please introduce yourself and the company.
Yeah, totally. So, I'm Milan. I'm one of the co-founders of Opt32,
and at Opt32 what we're trying to do is essentially build modern full-stack compute infrastructure for physical autonomy. Okay.
So more concretely, like you said, that's everything from software, so compilers, down to chips, right, custom accelerators to run on-device machine learning in things like robots, drones, cars, autonomous defense systems, and other autonomous vehicles.
That seems extremely, extremely, extremely broad. Are you going to narrow it down? Is there a beachhead where you'll find a particular niche, or is going broad the point, rather than picking a small niche market and dominating it?
I think it's more everything, all at once.
I'm super curious.
Yeah. So, I guess a couple things to touch on there. The first is that
the nice thing about building infrastructure, and compute infrastructure in particular, is that it generalizes fairly well to many different applications. You can use the same chip and the same software to run ML in a robot as you could in an autonomous vehicle. Yeah. I do think we are taking a sort of an entry approach into the market, particularly from the robotics side. Sure.
And from our side, we are starting at the software layer and then gradually working our way down the stack towards hardware.
Okay. And in terms of, so if you're starting with the software layer, walk me through the infrastructure stack that you might be buying off the shelf. Are you buying Nvidia GPUs? When I think of previous autonomy stacks, I think of Tesla being very vertically integrated, and then Nvidia and a few other partners, Mobileye for instance, going around to the rest of the automakers and the OEMs and plugging in with a little bit more flexibility, but those companies that provide that stack aren't fully integrated. So I see the opportunity, but I'm curious how you're solving the short term, where even if you wanted to design custom silicon today, it's probably not going to ship to you within a year, right?
Yeah, certainly. So what we do right now is essentially build a fully automated model optimization platform. Say you're a robotics company deploying some perception model on your robots: you very often need extremely low latency, and you have compute constraints, in that you can't just toss a massive server-scale GPU on a robot that runs off, say, a battery. So what we do is work with these companies to get their models running faster, or to let them fit more intelligent models on cheaper hardware, things like that. Right now our software layer is pretty much hardware agnostic, so it could in theory run on an Nvidia GPU or even, I don't know, a Qualcomm accelerator or something like that. But primarily we do target Nvidia GPUs as our back end. Okay.

What are you excited about? Robots that are too small to house an Nvidia GPU? I'm thinking of the Matic robotic sweeper, or the Roomba. I've been super excited about the potential; the Roomba had such wide deployment, it really did cross the chasm in terms of robotics in the home. And I'm wondering if you're excited about, or optimistic about, a shorter timeline to those more incremental steps in robotic deployment, versus, you know, we've heard a lot of pitches for the straight shot to humanoid AGI, and I think that's coming, but is there a crawl-walk-run that the technology sector will do here?
Yeah, I definitely agree with you there. I think we're very excited about the gradual deployment of robots and autonomy into our everyday lives over the next few years. I do very strongly agree that as we get, maybe jumping three years into the future, we'll see a lot more autonomy in consumer life. That could be something like a Roomba, could be something like a cooking robot, could be specific duties like street cleaning. Another area I'm particularly excited about is autonomy in manufacturing. I think it can do a great job at augmenting human workers and helping to bridge this gap we're seeing, where there aren't necessarily enough skilled, say, welders to fulfill all the manufacturing demand.

Is there enough data in manufacturing to do anything at scale with machine learning, or is there enough, you know, transfer learning from the big models
today? Maybe not. Over the next few years, I would hope so. But we don't concern ourselves with the model layer; we try to be everything after that. We don't build our own models, we don't train models. We work on running models, essentially.
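The optimization work Milan describes, fitting models under tight latency and power budgets rather than training them, often leans on techniques like low-bit quantization. Below is a toy sketch of symmetric int8 weight quantization; the function names and values are purely illustrative and not Opt32's actual platform or API.

```python
# Toy sketch of symmetric int8 weight quantization, a standard trick for
# shrinking a model to fit on-device compute budgets. Illustrative only.

def quantize_int8(weights):
    """Map float weights to int8 values plus a per-tensor scale."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0           # symmetric int8 range is [-127, 127]
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.004, 0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored weight is within half a quantization step of the original.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```

Real pipelines do this per-channel with calibration data, but the trade-off is the same: 4x smaller weights and cheaper integer arithmetic in exchange for a bounded rounding error.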
Sure. Sure. Jordy.
Very cool. What were you doing before this?

I was a freshman at Harvard studying CS and philosophy.
That's cool.
Why did you get into Harvard? Other than that you seem like a smart guy.
I have no idea. I don't know. I spent most of high school doing CS research. I started writing compilers back when I was a freshman, and worked in a bunch of different university research labs, all on compilers, computer architecture, programming languages, ML acceleration.
Yeah.
Yeah. What do you make of the decline in computer science as a major across universities? I've seen some stats that show a pretty steep drop-off, and yet
he's contributing to it, John.
No, no, I mean, you're still a CS major, effectively. But it feels like the fear around "don't major in CS" is very much "don't major in CS and then try to go get a front-end engineering job that's just writing code." But if you're majoring in CS, that still might be the best path into working in robotics, working on infrastructure, doing a lot of different things. So how have you processed the value? Do you feel like the CS that you've learned is less relevant today?

Yeah, that's what I was going to say. I think there's going to need to be a reallocation, so to speak, of talent and focus within the field of CS, where as we see artificial intelligence capabilities advance, we see a move up layers of abstraction, to where skills like system architecting and higher-level theory are going to become more and more important, and these very low-level implementation details, like you said, being a front-end developer, don't provide as much value. So I don't necessarily think CS as a whole is dropping off so much as it's reallocating towards those higher levels of abstraction.
Yeah. How big is the team? How are you thinking about the capital intensity of this business? When I hear custom silicon, I'm hearing hundreds of millions of dollars to do really anything interesting. We've had some previous Thiel Fellows from Etched on the show, or I've talked to them and done podcasts with them. And it feels like this can be extremely capital intensive, or you can find partnerships and do something much more on the software and infrastructure side that might be less capital intensive. How are you thinking about it?
Yeah. So right now the team is just the three of us co-founders, me and my two high school best friends, all technical. We are hiring and hoping to grow pretty quickly. In terms of capital intensity, the really great thing about building full-stack compute infrastructure is that we can actually sell our software independently of the hardware, and we're already doing that, working with some early design partners. So we can get revenue, and we can do that quickly. Software development costs are relatively cheap, and the cost of operating our software is essentially zero, so any revenue from that is pretty much pure profit. In terms of hardware development, I think there's a great path towards gradually working up to a full production run of a custom ASIC. In particular, what we're going to do is build single-board computers, so PCBs around some existing accelerator chips, and build the entire software layer there. Then we'll move on to maybe implementing some chip architectures on FPGAs. And from there, you could do a smaller, not-super-advanced process node prototype tape-out run. And then say one to two years down the line, once we've raised a lot more capital, that's when you go for the big advanced process node full production tape-out.
Yeah.
Are you working with Victor and Kavala yet? The chat wants to know.
Or are you competing? Bitter rivals?
No, they'd be natural partners to some degree.
Yes. Yes. I think we'll definitely be working with Kavala at some point in the future. Right now we're mostly focused on internal technical work, but
shortly.
Why, especially in the manufacturing sector, is there not a lot of energy being devoted to, not teleoperation, but just offloading compute? I guess "thin client for robotics" would be the term. You have a server room with an NVL72 rack on site, and then you're doing inference in the IT cabinet or closet, or the local data center, effectively. And then your robot can be much lighter; it barely needs anything, it just has a camera that's feeding stuff back. And yes, there's maybe a little bit of latency, but you're talking about the speed of light across high-bandwidth Wi-Fi. It seems like that might be an interesting solution. Is that happening? Is that one of many strategies, or is that a dead end?
Yeah, I think it's a split, right? There are definitely use cases where you can do that, and there are other use cases where you might have some sort of network constraint or some extreme latency constraint where you can't. And some things that we work on actually involve splitting the workload across both: part goes through a cloud server call, and the more latency-sensitive part runs on device.
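The hybrid split Milan describes can be sketched as a simple placement policy: pipeline stages with tight latency budgets stay on-device, while stages that can tolerate a network round trip are offloaded to the cloud. The stage names, budgets, and thresholds below are made up for illustration, not anything from Opt32.

```python
# Illustrative sketch of a cloud/edge placement policy for an
# autonomy pipeline. Numbers and stage names are hypothetical.

EDGE_BUDGET_MS = 20  # stages that must respond faster than this stay on-device

def place_stage(stage_name, latency_budget_ms, network_rtt_ms):
    """Run latency-critical stages on-device; offload the rest to the cloud."""
    if latency_budget_ms <= EDGE_BUDGET_MS or network_rtt_ms >= latency_budget_ms:
        return "edge"
    return "cloud"

pipeline = [
    ("obstacle_detection", 10),   # must react within 10 ms, so it stays local
    ("semantic_mapping", 500),    # tolerant of a round trip
    ("route_planning", 200),
]
placement = {name: place_stage(name, budget, network_rtt_ms=40)
             for name, budget in pipeline}
```

With a 40 ms Wi-Fi round trip, obstacle detection lands on the edge while mapping and planning go to the cloud, matching the split-workload idea in the answer above.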
Yeah, that makes a ton of sense. Have you guys raised money already, outside of the fellowship, obviously? The fellowship is for you, but what...
Yeah, we raised our seed round about two months ago. We raised $5 million, co-led by BoxGroup and by venture.
BoxGroup would get into this one.
We love him. Great pickup, and great to meet you. I'm sure he'll be back on.
Great to meet you guys as well.

Thanks so much for coming on the show. We'll talk to you soon. Have a good day.
Thank you.
Goodbye.
Up next, we have Gayen Me from Standard Intelligence, focused on
not to be confused with non-standard super intelligence.
Non-standard unintelligence. You don't want to be unintelligent.

Building aligned general learners. Gayen Me joins us on the show. And I believe