NVIDIA’s CEO, Jensen Huang, may have given his chip team a ‘Christmas’ gift that no one expected: Team Green has reportedly entered into an agreement with Groq, a company that builds specialized AI hardware. And these aren’t ordinary chips; they could be NVIDIA’s gateway to dominating inference-class workloads.
To understand why this is a ‘Masterclass,’ we need to examine two distinct battlefronts: the regulatory loophole Jensen has just leveraged, and the hardware dominance he has secured.
It Looks Like an Acquisition. It Smells Like an Acquisition. But On Paper, It’s Just a ‘Non-Exclusive’ Arrangement
CNBC was the first to report on this development, claiming that NVIDIA was “buying” Groq Inc. in a mega $20 billion deal, which would mark the biggest acquisition of Jensen’s tenure. The report set off a firestorm in the industry: some suggested that regulatory investigations would hinder the move, while others declared it the end of Groq. Later, however, Groq officially released a statement on its website saying it had entered into a “non-exclusive licensing agreement” with NVIDIA, granting the AI giant access to its inference technology.
We plan to integrate Groq’s low-latency processors into the NVIDIA AI factory architecture, extending the platform to serve an even broader range of AI inference and real-time workloads. While we are adding talented employees to our ranks and licensing Groq’s IP, we are not acquiring Groq as a company.
– NVIDIA CEO Jensen Huang, in an internal email
Therefore, the perception of a merger was, at least on paper, nullified by Groq’s statement. Still, the sequence of events seems quite interesting to me, especially since the only thing separating this deal from a full-scale acquisition is the label it carries in official disclosures.
This is a classic “Reverse Acqui-hire” move from NVIDIA. For anyone unfamiliar with the term, it comes from Microsoft’s playbook: back in 2024, the tech giant announced a deal with Inflection AI worth $653 million, which saw the likes of Mustafa Suleyman and Karén Simonyan join Microsoft and spearhead the firm’s AI strategy.

A reverse acqui-hire means a company hires the key talent from a startup while leaving behind a “bare-minimum” corporate structure, which ultimately prevents the move from being classified as a merger. It appears that Jensen managed to execute something similar to stay clear of an FTC investigation: by framing the Groq deal as a “non-exclusive licensing agreement,” NVIDIA essentially falls outside the scope of the Hart-Scott-Rodino (HSR) Act. Interestingly, Groq mentions that GroqCloud will continue to operate, but only as a ‘bare structure’.
In effect, NVIDIA acquired Groq’s talent and IP for a reported $20 billion while escaping regulatory scrutiny, which allowed it to execute the deal in a matter of days. And the hardware NVIDIA now has access to is the more interesting part of the NVIDIA-Groq deal.
Groq’s LPU Architecture & Why It Could Be the Missing Piece for NVIDIA to Dominate Inference-Class Workloads
This is the segment I am most excited to discuss, as Groq has a hardware ecosystem in place that could replicate NVIDIA’s success from the training era, and I’ll justify this claim ahead. The AI industry’s compute demand has evolved dramatically in the past few months. While companies like OpenAI, Meta, Google, and others are engaged in training frontier models, they are also looking to build out a robust inference stack, as that’s where most hyperscalers earn their money.
When Google announced its Ironwood TPUs, the industry hyped them as an inference-focused option, and the ASICs were touted as a replacement for NVIDIA, mainly on the claim that Jensen had yet to offer a solution that dominated inference throughput. There is the Rubin CPX, but I’ll discuss that later. With inference, compute demand changes dramatically: training prioritizes throughput over latency and features high arithmetic intensity, which is why modern-day accelerators are beefed up with HBM and massive tensor cores. Inference, by contrast, is latency-sensitive, which is exactly the regime Groq’s low-latency LPU architecture targets.
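The throughput-versus-latency tension above can be sketched with a toy model. Note that the numbers and the cost model here are illustrative assumptions, not measurements of any real accelerator: we assume each decode step pays a fixed overhead plus a per-token cost, so batching amortizes the overhead (throughput rises) while every request waits for the whole batch (latency rises).

```python
# Toy model of one decode step on a hypothetical accelerator.
# STEP_OVERHEAD_MS and PER_TOKEN_MS are made-up numbers for illustration only.
STEP_OVERHEAD_MS = 2.0   # assumed fixed cost per forward step (kernel launch, sync, etc.)
PER_TOKEN_MS = 0.05      # assumed incremental compute cost per token in the batch

def step_metrics(batch_size: int) -> tuple[float, float]:
    """Return (per-request latency in ms, aggregate throughput in tokens/s)
    for a single decode step at the given batch size."""
    latency_ms = STEP_OVERHEAD_MS + PER_TOKEN_MS * batch_size
    throughput_tok_s = batch_size / (latency_ms / 1000.0)
    return latency_ms, throughput_tok_s

if __name__ == "__main__":
    for bs in (1, 8, 64, 512):
        lat, tput = step_metrics(bs)
        print(f"batch={bs:>4}  latency={lat:6.2f} ms  throughput={tput:10.0f} tok/s")
```

Under these assumed costs, growing the batch from 1 to 512 multiplies aggregate throughput many times over, but per-request latency climbs with it; a training pipeline happily takes that trade, while a real-time inference service often cannot.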