Tim Fist on the Nvidia-China H20 chip saga, Middle East AI deals, and why Washington is dangerously behind on AI
May 16, 2025 · Full transcript · This transcript is auto-generated and may contain errors.
Featuring Tim Fist
We made it. We made it. Thank you so much for bearing with us. That was so weird. You know, it's not the first time a nation state has tried to take down TBPN. We start talking, we get spicy, and all of a sudden the connection gets fuzzy. It might happen again. We don't know.
Yeah, Tencent was pretty upset at the H20 allegations. So, yeah, prime suspect number one. Take us through the allegations again, break down what you guys published, and then we'll go through some of the reaction and the fallout from the piece. Yeah.
So, God, so much has happened since this was in the news. It feels like there have been five different things in export controls since then. But yeah, basically this was the H20 inference chip from Nvidia, which some might remember was designed to be compliant with the export controls.
So this is the chip they sold into China in 2024, reportedly about 1 million of them.
And yeah, there was a lot of outcry at the time for the Bureau of Industry and Security, which is the part of government that administers and enforces export controls, to do something about this. Because 2024 was the period where everyone realized that inference compute was perhaps the most important strategic input to frontier AI development, with test-time compute scaling, reinforcement learning, and synthetic data generation being the key things. And earlier this year there was reporting that Nvidia had a huge number of additional sales planned to big Chinese companies, and a bunch of people, including us, said: hey, is this really what we want to be doing? Do we want to allow China to get access to millions more of these chips? And yes, the government ended up taking action on this and issued some guidance basically saying, hey, no, you can't make these sales. Nvidia reportedly was left holding the bag to the tune of about $5.5 billion.
Which for Nvidia is a bag for ants. It's barely a flesh wound. Well, they made up for it with deals in Saudi Arabia, right? Yeah, indeed. Yeah. So, that's the new thing, right?
It's the administration walking back the diffusion rule, this sweeping framework, and then the series of deals that have been announced over in the Gulf. Yeah. And what's your reaction to the diffusion rule and the deals in Saudi Arabia? Is this a step forward? Are you excited about this? Is this positive news?
Yeah. So, it really depends on the details of all this. On the deals, I guess fundamentally: what do we want? We want the US AI tech stack to win the global competition against China.
And I think a big part of that is locking in these early adopters and big spenders, especially like the United Arab Emirates.
But I think we need to be thinking about how to structure deals like this to get the US the outcomes it wants, which is US tech diffusion and the Chinese tech stack locked out, in a way that we couldn't manage with 5G; China sort of won the 5G battle globally.
And then appropriate national security guardrails in place. And it's pretty unclear what the trade-offs that have been made for this deal are.
So I think the high-level specs we've gotten are a 5-gigawatt AI data center campus to be deployed over some time period in Abu Dhabi, and then reports of 500,000 chips per year to be exported, with maybe four-fifths of those for US firms who are building data centers over there and one-fifth for G42, this big tech conglomerate in the UAE. And depending on how quickly that all happens, and whether they're referring to 100,000 of today's chips or 100,000 of chips in, you know, five years' time, this could be the difference between 1x and 100x in terms of the compute differential.
So yeah, I think that really makes the difference. And I think the key question here is: do we want an AI lab in an authoritarian country to have the biggest clusters, or close to the biggest clusters, in the world?
And this is a country that does collaborate with China in areas like drones and 5G and military technologies, so you need to be careful about giving them access to frontier-scale compute here. Yeah. Yeah.
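The "1x to 100x" point above can be sketched with a toy calculation. Everything here is an illustrative assumption for the sketch: the 1.4x-per-year per-chip performance growth and the deployment rates are made up, not figures from the reported deal.

```python
# Toy sketch of the compute differential between a slow rollout of today's
# chips and a full-rate rollout of improving future chips.
# All numbers are illustrative assumptions, not figures from the deal.

def effective_compute(chips_per_year: float, years: int,
                      perf_growth_per_year: float = 1.4) -> float:
    """Total chip-performance units delivered, if per-chip performance
    compounds at perf_growth_per_year (an assumed rate)."""
    return sum(chips_per_year * perf_growth_per_year ** y for y in range(years))

slow_start = effective_compute(100_000, years=1)   # one year of today's chips
full_rate = effective_compute(500_000, years=5)    # five years at full rate
print(f"differential: {full_rate / slow_start:.0f}x")  # prints "differential: 55x"
```

Varying the assumed growth rate and deployment pace moves that ratio across roughly the 1x to 100x range the speaker describes.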
So you talk about capital flows from the UAE and Saudi into Chinese AI. I'm curious if you have insight: how much investment activity is there? I don't have stats on this. I suppose the interesting thing to say here is, you guys might remember, there was this $1.5 billion deal between Microsoft and G42 a couple of years ago that sparked a lot of this stuff.
So this was essentially Microsoft making an investment into G42 and starting to build data centers in collaboration with them. The Department of Commerce got really involved in this, and one of their requirements was for G42 to divest from Chinese companies in the AI stack. G42 is this huge tech conglomerate funded through the nation's sovereign wealth fund, which is the second largest in the world, so pretty serious money; we're talking trillions of dollars. And reportedly they had a bunch of passive investments in Chinese companies, including hyperscalers and AI companies, and reportedly G42 did divest from those companies, but then basically moved the investments over to another fund called Lunate that was also owned by this big sprawling conglomerate that's ultimately funded by the sovereign wealth fund. So big question marks about how much they're actually decoupling from China, and how costly this is to them: are they actually burning bridges that are hard to reverse? Yeah, there's some element of, we have this smooth gradient of friendship with different countries. Obviously there's the Five Eyes, the closest allies.
We sell nuclear submarines to some countries. But then there are countries that are more jump balls and could go either way.
As the diffusion rule goes away, do we need firmer rules around, hey, we're willing to sell to you, but don't immediately set up a reseller and start passing these on to China? Because you could imagine that happening if the natural economic forces take hold.
If I was in some country that was just about to get 500,000 chips, it's pretty easy to just immediately start reselling these and print money, basically. Yeah. And there are kind of two ways to do that, right? One is you can just on-sell the chips, so kind of smuggle them into China.
And the other way that's pretty straightforward is set up your own data center and rent it out as cloud computing. Exactly. So yeah, I think that's what you're referring to. And currently there aren't great guardrails around either of those things.
And I think the administration wants to set up better versions of this. But for countries that are really willing to buy hundreds of thousands to millions of chips, you want these structured deals that have these guardrails in place.
And I think the Trump administration is really well positioned to strike these kinds of smart, bespoke deals that get us the best possible outcome across each of these dimensions. And hopefully that's what they're pursuing. Yeah.
Can we talk about the news today, or maybe it came out late yesterday, that Nvidia is set to open a research center in Shanghai, maintaining a foothold in China? Basically they're opening this R&D center almost as an olive branch to China, is kind of how I would describe it.
They say they're going to use it to understand Chinese customer demands and design US-compliant products. And they're basically doing this to navigate export controls and compete with companies like Huawei. To me, there's so much to unpack here.
I'd love your initial take, and then I want to maybe move more high level. To me, this signals that maybe Jensen doesn't take the national security concerns about an AI war as seriously as even some of the US foundation model labs talk about it.
Yeah, I'd say that's an accurate assessment.
And you know, I think it's really hard to design sanctions, export controls, these kinds of things, in a way that can't easily be escaped. There's the letter of the rule, but there's also the spirit of the rule, which is: hey, we're worried about China beating the US in the most strategically important technology of this century.
And we don't want you selling to them. But then you can do all these additional things to actually be cooperating behind the scenes, like design collaborations or whatever.
And to be fair to Nvidia, I think they'd say: look, if the speed limit is 60 and we're going 55, we're not breaking the law. We should be allowed to do this.
And I think, from their strategic perspective, a lot of their customers are these big US hyperscalers, right? And all of these hyperscalers are developing their own custom silicon. We know Google has their TPU that they use for both training and inference.
We know Amazon has Trainium and Inferentia that they're also using for training and inference. Microsoft is developing their own stack. So a huge source of revenue for Nvidia is potentially at risk.
So it kind of makes sense for them to want to diversify to other parts of the world and not just sell to US hyperscalers. They're certainly in a difficult position overall.
And yeah, what you think is right here really depends on how much you buy the argument that, over the next five years, AI as a technology will reshape the global balance of economic and military power and will be a really big deal. Yeah.
It seems like the speed limit is getting lower and lower, though. We went from the H100 to the H20. Now Nvidia is preparing to release a modified version of the H20 chip for the Chinese market, after your piece and the changes to the H20 restrictions.
Is there a point where the restrictions are so onerous that no one wants to buy a car that goes 9 miles an hour? At a certain point, Nvidia will just lose the market share because Huawei Ascend chips will just be outperforming. Are you tracking any of that?
Yeah, so here it becomes an interesting conversation.
I think the crux of the matter is: if Huawei has better chips than Nvidia in the Chinese market and is able to capture that market, how bad is that? Should we set export controls such that Nvidia is always slightly ahead of Huawei, for example, and raise those limits over time? And here I think there are two dimensions to it. One is that in AI the quality of the chips matters, so how individually performant each chip is. But the quantity also really matters: you can have a million chips, each of which is half as good, and they substitute for 500,000 chips, for example. This is a crude approximation, but you want both the best chips and as many of them as possible. And where we're trying to squeeze China, we being the US government, is across this whole supply chain: not just on being able to procure chips directly, but also on being able to manufacture their own, so the semiconductor manufacturing equipment and the fabs and everything there. And so I think the goal of the US government really has to be to restrict the quantity of AI chips that China, so SMIC and Huawei as the key firms here, is able to produce.
And so by this logic, even if Nvidia has a chip that's only slightly better or slightly worse than Huawei's, you still might want to restrict it, because you would prefer them not to have access to 10 times as many chips as they otherwise would have.
You don't want them to have access to, essentially, TSMC's production capacity, being able to make many, many millions more chips than they could otherwise produce. So these are hard trade-offs to make. I think where it really matters is in foreign availability.
Where Huawei is accessing foreign markets and outperforming US chips, that's really bad, and we should make sure that's not the case, that it's US chips being used globally in countries that aren't China. Yeah.
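The quality-versus-quantity trade-off described above reduces to simple arithmetic: aggregate compute is roughly chip count times per-chip performance, so a million chips that are each half as good substitute for 500,000 full chips. A minimal sketch, with illustrative numbers rather than real chip specs:

```python
# Crude aggregate-compute approximation from the conversation above:
# total compute scales with both chip count and per-chip performance.

def total_compute(num_chips: int, perf_per_chip: float) -> float:
    """Aggregate compute as count times per-chip performance (a crude model)."""
    return num_chips * perf_per_chip

frontier_fleet = total_compute(500_000, perf_per_chip=1.0)      # fewer, better chips
downbinned_fleet = total_compute(1_000_000, perf_per_chip=0.5)  # more, weaker chips
print(frontier_fleet == downbinned_fleet)  # True: same aggregate compute
```

This is why restricting quantity matters even when per-chip performance caps are in place.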
Do you think Huawei will ever go public, or do they want people to know as little about their business as possible? Yeah, I'm not sure. Depends what the CCP wants. It's a notoriously opaque organization. Yeah. Interesting. What is your take on open-source AI?
We were talking to Aaron Ginn about this idea that, if America does not provide a state-of-the-art open-source stack to countries that want to build their own AI products on top of fine-tuned or post-trained LLMs that meet their definition of free speech or their ideals or morals...
...the stack by default will be Huawei Ascend, DeepSeek, Manus. Yeah, I totally buy this. I think that having the best open-source models in the world be American is really important, for a similar reason as the chip stack and the data centers and the cloud services.
I think there's a question about how sticky this ecosystem actually is. You talk to people who say, yeah, we really need to lock in the tech stack globally; American open-source models need to be the rails that the world runs on.
But then if you look at how AI developers actually work, they are very happy to switch between different base models for their application depending on which happens to be the best.
You look at the revenue of the frontier labs: when they have the best model it's up here, and when they don't it's down here. Everyone is switching every day depending on who has the best model.
So yeah, you want to diffuse American open source as widely as possible, but what about it makes it sticky? My hypothesis would be that it's probably the security and reliability side of things.
Everyone's worried about the fact that you can insert backdoors to create sleeper agents in open-source models, and there's no way to actually detect these. I think the US wins if it can prove that it has the most trustworthy and reliable models over China.
Similar to how US cloud computing companies compete with Huawei and Alibaba Cloud: often Huawei and Alibaba come to market with a cheaper option, but the US is just more trustworthy in terms of data privacy, security, etc.
So I think trying to figure out those technical problems around security, reliability, and interpretability, and proving that US models win across those dimensions, is probably the way to make open-source models from the US more sticky.
Or just be way better, which is where most of the effort is currently going, and I definitely support those efforts as well: building out more domestic compute, for example, and finding more training data sets. Yeah.
How do you think about the dynamic between the importance of the application layer versus the foundation model layer? We've seen efforts on the foundation model layer at the national level all over the place, but let's use Mistral as an example. Mistral has a consumer app called Le Chat.
It's a direct competitor to ChatGPT, and I think that's great for the French, and they could potentially have a fine-tuned model that meets their standards and guidelines. But if, at the end of the day, 90% of French consumers are using Google and 90% of them use OpenAI, well then all of that is kind of worthless. And sure, maybe Mistral will be cheaper in the enterprise and be implemented in French businesses or European businesses.
But in terms of control of the population, that feels like the diffusion of American ideals in Europe, which doesn't seem extremely controversial because American and European ideals are pretty similar, but you can see how this would play out in other countries. Yeah, totally.
And the economics are pretty brutal here, right? We're in a regime where the amount of compute being used to train a model goes up 5x every single year. We're moving from roughly $100 million to train a model last year to rapidly approaching the billions.
If you're a company committing to this and you're only slightly better or worse than one of these huge tech companies, you can't keep sustaining this over time. You have to check out eventually. Yeah.
And we've seen that with a lot of the early-stage foundation model companies that raised, trained something, but were never on the frontier, and now they're falling out of favor, more or less. Yeah, totally. Interesting.
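The "5x every single year" claim above compounds brutally. A quick projection, assuming training cost scales in proportion to compute; the $100 million baseline is the ballpark figure from the conversation, and the cost-proportionality is an assumption for illustration:

```python
# Rough projection of frontier training-run cost under 5x/year compute
# growth, assuming cost scales with compute (an assumption, not a fact).

base_cost = 100e6   # ~$100M frontier training run as the starting point
growth = 5          # compute (and, assumed, cost) multiplier per year

for year in range(1, 4):
    cost = base_cost * growth ** year
    print(f"year +{year}: ${cost / 1e9:.1f}B")
# prints:
# year +1: $0.5B
# year +2: $2.5B
# year +3: $12.5B
```

Three years of that growth turns a $100M run into a $12.5B one, which is why second-tier labs "have to check out eventually."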
Has there been any conversation on the ground in DC, or have you heard any chatter, around the Manus investment that Benchmark made? It definitely kicked the hornets' nest a little bit, the hornets' nest of American dynamism and Delian Asparouhov. But to give Benchmark a little bit of credit: if you're going to be mad at Benchmark for making an investment in Manus, you also, in some ways, have to be mad at Jensen for setting up a research and design, you know, R&D center in Shanghai right now, explicitly to work on developing products for the Chinese AI ecosystem and, in some ways, for the CCP directly.
Yeah. It's a bit hard to evaluate. I think one of the US advantages is very deep capital markets, right?
And so being able to exert financial control over setups overseas by acquiring stakes is potentially a way to have a better, fairer, more aligned global system overall. I think this is hard when it comes to Chinese companies.
But does America benefit from having American capital allocators in ByteDance? I don't think we have much influence or control over what ByteDance does. Yeah. Exactly.
So I think for big companies, especially those that are part of this Chinese state military-industrial complex, this is a pretty poor prospect.
This is why the Treasury Department has outbound investment restrictions in a bunch of different industries, which have been expanded to AI over time, relatively slowly though.
So one thing they've been working on is trying to figure out whether to apply these outbound investment restrictions more solidly to commercial AI developers and commercial AI cluster operators.
So, for investments coming from VCs, actually restricting investments of those kinds into China. And this is complicated by the fact that due diligence is really hard.
Let's say you're investing in an application developer who's doing something pretty innocuous, like automated code agents or search or something along those lines. You don't have any control over whether they're going to work with the Chinese military in the future.
Are they going to be an instrument of state-backed surveillance over the local population? They're not going to tell you that, and they might not have plans to do that, but they can certainly be compelled to do it in the future. So the due diligence question is super hard.
Do you think regulators or the American government need to be thinking about the pre-training scaling law potentially not holding, or reaching some sort of diminishing marginal return? We've seen the data from GPT-4.5. It feels like GPT-5 might not just be 100x bigger than GPT-4.5.
It might actually be some sort of mixture of experts, different models, and more of almost a product challenge than just a scaling, get-more-chips challenge. At the same time, OpenAI is also investing in Stargate, and there's still a drumbeat of ever-larger data centers in the United States.
But the overall tone of AI research labs in America seems to have shifted away from just the ever-bigger transformer.
And there seems to be somewhat of a resignation to the idea that there might be more challenges on the path to ASI than merely scaling up the architecture we have right now. Yeah, I think there are a few things here.
One is that, you know, obviously these companies are still making huge investments in clusters and energy to get to the next order of magnitude of scale.
So there's some level of financial buy-in to the idea that pre-training scaling will continue, but also a recognition that we've got this other scaling law, the test-time compute scaling law, and now reinforcement learning as a paradigm that seems to really be working for language models, where a lot of companies are starting to put more of their overall compute resources.
So the rough balance now seems to be around 80/20 between pre-training compute and post-training. And as companies build out these clusters and the energy sources to support them, the question is where they're going to balance the compute allocation. The calculus seems to be that pre-training compute is relatively less promising going forward, and putting more of your resources, in a relative sense, into RL, which is very much at the early stages of scaling up, is the better strategy at the moment. Yeah.
Yeah, I mean, the other side of this is, even beyond post-training and RL, as reasoning tokens and test-time compute increase inference costs, maybe the real way to get a GDP boost or competitive advantage out of AI is just to make sure that there is an H100 or equivalent for every member of your society, every citizen, because everyone will need to be inferencing a very large model at a very high level, basically constantly.
And so even if you've trained the greatest model, if you can't have every single one of your citizens constantly inferencing it all day long, you're not going to see the benefits. I need my AI companion constantly on. I mean, we do need codegen, and we need research, and we need answers.
If we're timing out, codegen and deep research are going to be competing with the AI companions for inference. Yeah, probably.
But just to push back on maybe the hypothesis underlying that: I am very confused about where most of the compute is going to be spent and how that is going to be distributed across people.
I can easily imagine a world, like in two years, where most people in the world still aren't really using simple tools like ChatGPT (my grandma has still never heard of it), but at the same time you have some companies at the frontier who are deploying millions to billions of agents internally.
So you have this really unequally distributed use of compute overall. So yeah, I kind of buy the idea that there'll be massively uneven usage of these kinds of resources, really concentrated in particular countries and within particular companies. Yes.
Agreed on the sum total of the importance of inference, and the compute allocated and available for inference, but potentially disagree on the distribution of that. And now that you hash that out, I think it makes a lot of sense. You already see this in the prosumer versus consumer market.
I'm probably kicking off three to five deep research reports, using a lot of tokens. I'm getting my $200 worth. And there are a lot of people that just land on it every once in a while to toy around with it. That makes a ton of sense. What's next, Jordy? What should we talk about?
Have you guys had success explaining the potential for AI, you know, AI's potential progress in the next few years, in Washington broadly? Do you feel like lawmakers fully understand the potential? Right, nobody has a crystal ball.
We can't predict the future, but if you extrapolate trends and even just use the products today, do you feel like Washington is pricing it in, or are people going to be extremely surprised in the next few years?
Yeah, my bet is 100% extremely surprised. I think I'm consistently disappointed with how little AGI-pilled people in DC are, even with current capabilities and where the trend is obviously going.
I think there's a lot that just isn't being priced in about how weird the world is going to be in five years' time.
There's also, I think, a sense in which there's a real loss here. If you look at technologies like the internet, which, as you probably know, came from this DARPA-funded project, as well as early genomics with the Human Genome Project, these were technologies where the government saw what was coming and took a really active role in shaping the development of the technology through basic R&D.
So with ARPANET, for example, the focus was really on creating a resilient network system that could survive a nuclear bomb.
But they were also able to take that secure network infrastructure and apply the notion of openness and freedom of information to create a scaled global network that really represented American values and was very secure.
We don't see an equivalent level of basic R&D investment in the United States around AI.
And that would be really cool to see, because right now industry is focused on where the money is, which is B2B SaaS apps and chatbots. But there's a huge space of problems, if you accept that over the next few years we'll have these incredible new AI capabilities.
There are all these massively important societal and scientific problems in areas like materials discovery, drug discovery, etc., where the government could be placing essentially huge bets that could pay off within a few years.
And we're kind of not doing that, because in DC, at least, we're failing to see where the future is going and therefore what the role for this kind of basic R&D is.
So yeah, there's a congressional coalition that's just started, called the American Science Acceleration Project, which we're really excited about and trying to build hype around.
I think they have the right idea around this, but there's a deficit of this kind of thinking in DC at the moment. Last question for me. How has Meta's reputation changed in DC over the last few years? Famously, Zuck goes to Capitol Hill.
They don't even understand how his business model works. He says, "Senator, we sell ads." He was castigated for being too left, then too right, and there were a lot of political hot-button issues around the Facebook app and the type of content it was serving.
Obviously, a big vibe shift there, but now it seems like Meta is increasingly a very important tool in the American foreign diplomacy AI tool chest with Llama.
Llama is, of course, their open-source model. There's Defense Llama now; the DoD has partnered with Meta on this. And yet today we learned that their Behemoth model is struggling to improve capabilities; they're facing setbacks. Has the tune changed in DC to say, hey, Zuck, sell more ads, please, do more stuff, we've got to get you pumping, because you're a national champion now?
Yeah, I'd say, by and large, there are different factions in DC who care about different things. Obviously Meta is going through this big antitrust case at the moment as well, or has been going through it. On the AI side, I guess it's interesting as well.
Meta is very much seen as the darling of open source for the US. I think it's kind of awkward for a lot of Meta fans to see the latest batch of models and find them really not that impressive.
And also potentially a bit of gaming with leaderboards and what models they're releasing there.
Obviously they've executed the strategic pivot to appeal to the current administration, getting rid of fact-checking and bringing in the community-notes-type approach, and that seems to be pretty effective. But yeah, I think they've certainly got a lot of backers here.
And on the open-source approach, it's really good to have a really well-capitalized company pursuing the strategy overall. But yeah, I think they're copping a bit of heat for not actually delivering on the promise to some extent.
And there's also another faction that's pretty worried about open-source models with capabilities that could be significantly misused, especially in the cyber domain, being freely available to the whole world. But that's less of a concern while they're behind.
Yeah, it was very interesting, in the press release around this, in the management of the news that they were delaying the rollout: they didn't say, "Oh, it's too dangerous to release." They could have easily said that.
That's always an easy out for the AI labs: "It's just too good. Trust us. We can't trust you with it. We're doing more safety research." Instead, it seems like there's a little bit of an admission that it's just not at the level they want it to be.
Tim, this might be out of scope for you, but do you think that AI is being used in nefarious ways, in the context of social engineering attacks, at a greater scale than people realize today? This was top of mind just due to the Coinbase news this week, the leak that they had.
And I think people broadly, anecdotally, reported that they felt like there was a huge uptick in inbound calls and social engineering attacks.
My question is: AI, when you're using tools like Sesame (Sesame is obviously state-of-the-art) and even some of these lower-latency video models, should already be capable of better social engineering attacks than, you know, a PhD-level person globally, even if English wasn't their first language. So anyway, I'm curious if you have any insights on cybersecurity and social engineering, or potential regulation around it.
Yeah, no insights into the true extent. I will note that it's surprising to me that we don't seem to see more of this. My impression is that from GPT-3.5 onwards, we've had LLMs that can produce better, more convincing text than the median phishing email, which, as you know, is often pretty poorly written.
Maybe that's deliberate, but it's kind of weird to me that we haven't seen mass spear-phishing campaigns of the kind that have been possible for several years now, or at least they haven't been widely reported. Maybe existing filters and defensive approaches work.
Maybe we shouldn't expect there'd actually be that many people trying to pick this low-hanging fruit; they're not technically sophisticated enough. Or maybe it's happening and not being reported on.
Yeah, I think there's potential that it's happening, but it's happening to a demographic that doesn't even know it's happening, right? Like, I don't pick up random calls. I'm just not going to answer a call from a number I don't know. Maybe the last time I called you, it was actually a bot.
Yeah, that's true. That's true, John. You do pick up non-random calls, but that's the point of them. Yeah. I mean, the same thing happened with the crypto stuff.
A lot of the victims of cryptocurrency scams weren't profiled in the New York Times, and so we just didn't really hear about them, because the people with the loudest microphones didn't fall for the scams. Yeah. Tricky. Anyway, thank you so much for joining, Tim. This was fantastic.
I'm glad we survived the attacks from the nation-state actors that don't want us streaming. We kept the stream together. We really appreciate it. We'd love to have you back. This was fantastic. Yeah. Thanks for coming on. We'll talk to you soon. Have a great weekend.
Next up, we're bringing on Chris Best from Substack. But first, let me tell you about AdQuick. AdQuick.com: out-of-home advertising made easy and measurable. Say goodbye to the headaches of out-of-home advertising. AdQuick combines technology,
out-of-home expertise, and data to enable efficient, seamless ad buying across the globe. And we should also talk about Wander. Find your happy place. Find your happy place.
Book a Wander with inspiring views, hotel-grade amenities, dreamy beds, top-tier cleaning, and 24/7 concierge service. It's a vacation home, but better. Anyway, Airbnb is getting distracted. Wander's doubling down. Wander's doubling down. It's a knock-down, drag-out fight in the rental