SiFive raises $400M Series G to build RISC-V CPUs for AI data centers, with Nvidia participating

Apr 13, 2026 · Full transcript · This transcript is auto-generated and may contain errors.

Featuring Krste Asanović

Up next, we have the chief architect from SiFive announcing a massive fundraise to scale RISC-V CPUs and AI intellectual property for data centers.


I think our next guest is in the waiting room. Let's bring him in.

How are you doing?

Hi. Doing okay. You hear me fine?

Yeah, we can hear you fine. Thanks so much for taking the time. Uh, since it's your first time on the show, I'd love an introduction on yourself and the company.

Uh, so, um, yeah, I'm Krste Asanović. So, uh, I was a professor at UC Berkeley. Um, we developed RISC-V, which is an open standard instruction set.

Yeah.

Then, uh, with a couple of my, uh, graduate students, we set up a company, SiFive, to commercialize RISC-V.

Okay.

And, uh, we just completed our Series G round, um, to go work on, uh, data center CPUs.

That's amazing. Uh how long have you been actually working on the company? I mean, series G, it seems like there's been a whole sequence of events. Can you take me through a few of the milestones that unlocked growth and sort of where the company is today?

Yeah. So, we founded the company in 2015, so it's been almost 11 years now. Um,

an overnight success. When we, uh, first started the company, we thought we'd be doing custom silicon, but then when we started talking to customers, you know, getting out there and into the trenches and talking to customers, they all wanted RISC-V IP, like designs they could use in their own silicon.

Yeah.

And so we pivoted a little at that point and started making IP for them.

And you know, RISC-V, the ecosystem has grown over that 10-year period. And so, initially, a lot of our designs were smaller, lower-end embedded processors. Mhm.

And then, uh, over the years, you know, going through the various rounds of funding, we grew the product portfolio. Uh, we added more higher-end processors, and processors specialized for doing AI kinds of tasks as well, what we call our Intelligence line.

And then this last, um, funding round really signals we're getting back to very high performance CPUs that sit next to GPUs in data centers.

Yeah,

that's the... uh, like, help me understand if I have this correct. Uh, you know, AI is very GPU-intensive, but the GPUs need to be fed with data that needs to be processed through a CPU. If you're doing some sort of reinforcement learning environment, you might need to spin up a piece of software, and that requires a CPU. And so even though we are in a GPU crunch, we are also in a CPU crunch. And so more and more companies and hyperscalers are developing their own CPUs, and, uh, GPU companies like Nvidia are also doing this, where they have a CPU that is designed to work directly with the GPU to make sure that the workloads are as efficient as possible. And so you, uh, you're able to license your intellectual property to make those CPUs more efficient, more effective. Is that roughly correct?

Yeah, that's roughly correct. So one way to think about it is: AI, the last few years, has been focused on building those models, getting that working. Yeah.

Now they're being applied at massive scale. Yeah.

And, you know, if you have an AI coder, it needs to compile. If it's going 30 times faster than the human, you need to compile stuff 30 times faster. So it's putting that load on the regular compute.

Yeah.

Uh, you know, and that can be the bottleneck. Not the GPU side. It's, you know, getting the work done on the CPU side as well that can be the bottleneck.

So, uh, it sounds like you're not fabbing chips yourself. So, uh, the raise seems very significant. It's large. Uh, is this mostly to hire talented, uh, researchers to advance the designs? Like, what is the use of funds?

Yeah. So the very high performance processors take a significant engineering investment.

Um, you know, very large teams working for a long time. You're working at the very edge of high-performance core design. We're working at, you know, the uppermost tier there.

And so it's very expensive. You know, you need the talent, you need a lot of work, a lot of modeling, a lot of, uh, development. So it's quite a labor-intensive process to get those designs. And then this is why an IP company makes sense. A lot of companies are focused on the more system aspects; they just want a very high performance CPU they can drop in. You know, for example, you may design an airplane, but you get the engines from Rolls-Royce.

Yeah.

Similar kind of model here. Like you want to design a high performance system,

but you want to get your CPUs from a good source like SiFive.

So, uh, those CPU companies that, once they license your technology, will actually go and produce the chips with some fab. Uh, are you forward-deploying folks into their organization? Is there some sort of... do they just come to you and want, uh, like, a design document and then they're all good? Or how collaborative is that process?

No. So that's part of this, you know: some of the folks involved in the financing are some of these lead customers as well, and partners. They view it as a collaborative development. We have to work way ahead of time to figure out, like, I'll use the airplane analogy, do you have the right kind of engine for the kinds of planes you want to build? You don't just, you know, point at the catalog and say, pick one of these three we built previously. We have to understand where the customers are going, what the needs are. And one of the benefits of SiFive technology is we make our cores quite customizable. So for different customers, we can adapt and configure them to their needs, to their workloads. And so we have to work with them ahead of time to understand what they're going to try to do, so we can plan our products appropriately.

How are the design tradeoffs changing around custom CPUs, uh, and just custom silicon in general? Is it just all about performance versus, uh, you know, like, flops per watt? Or the price to produce the actual chips? Like, what are the key, uh, levers that companies are most interested in pulling these days?

Well, you know, if you look at the overall picture, it's just classic business ROI. Like, if I make this silicon, am I going to make more money by saving on, you know, cost of ownership, power, whatever those other things are? And also, can I offer a capability lead that brings me customers?

Yeah,

that's going to increase my top line. So, it's just a classic business decision of buy versus build. And what you'll see is the big companies will be buying some chips. They'll be designing their own chips.

Depends on the application and domain. In each case, they're making their own, you know, ROI judgment on what's the right thing to do. Buy or build.

How do you think about, uh, depreciation, or just the lifetime of a chip? There was a big discussion over whether GPUs will burn out over six years, and, uh, at the same time a lot of people have computers with one CPU that they've been using happily for 20 years. I'm wondering if you're seeing a trend or a change in, uh, the lifetime of CPUs that are going to be used for AI workloads in data centers, running very aggressively, probably 24/7, for years. Are we seeing, uh, these chips burn out faster, or is that a sort of surmountable hurdle?

Well, there is a, there's one technical problem, which is the chips are literally burning out faster, just because at these finer geometries, um, you know, wires move and melt, and so we're dealing with aging failures like we haven't before. But, um, on the business side of things, I think companies are trading off a bunch of things. One is it's hard to get new silicon.

You see all the shortages. The fabs can only make so much silicon. AI is sucking up all the capacity in terms of new production.

But the incentive to build new silicon is that I have a limited power budget. So if I want to do more, I can't just get more power from PG&E or whoever. I need to go make more efficient systems. So sometimes I'll swap out those racks for a new rack that's 2x, 3x more power efficient; I can do more with the same power budget. So there's sort of a capability cap on some of these companies. They just, you know, need to replace the silicon if they want to grow their capabilities.

Yeah. Zooming out, do you think that the tech community, the AI community, is doing a good job of using all the different process nodes effectively? There's been so much focus on the leading edge: TSMC, 2 nanometer, 3 nanometer, the really advanced nodes. Uh, there is a lot of lagging-edge capacity out there. At least it felt like there was. Do you think that there's low-hanging fruit there that we might see the AI industry or the tech community start figuring out a way to get more out of in the future?

Well, I think for the large data centers, probably not. I think, again, with these power constraints, you want the most advanced technology. However, as AI gets pushed out and permeates all these applications out in the real world, I think those trailing nodes will be used for, you know, intelligent doorbells, you know, robots. There's lots of places where, you know, they're good enough. And also, um, there are some applications where you're interfacing with higher voltages, or where you're working with nonvolatile memories that are not available in advanced nodes. And so those older technologies definitely have a place, but probably more in the edge AI space.

Tell us about the round.

There you go. Tell us about the round. How much did you raise?

So we raised $400 million.

Uh, who, who participated?

The lead, um, [inaudible] was the lead. We had some other, you know, notable names, including Nvidia, participating in the funding.

Yeah. Very cool.

That's great. Well, thank you so much for taking the time to come and explain it to us. We appreciate you and good