Tests were carried out by data center operator Aquatron in partnership with the Los Alamos National Laboratory
Independent tests carried out by data center operator Aquatron in partnership with the Los Alamos National Laboratory (LANL) have found that Nvidia’s H200 chips outperformed Intel’s Gaudi 3 accelerator by a factor of at least nine across three test cases, DCD can exclusively report.
This contradicts claims made by Intel in its July 2024 sales brochure, in which the company projected that its Gaudi 3 AI accelerator would offer 1.5x faster inferencing than Nvidia’s H200 when running common large language models.

Aquatron and LANL used brand new Supermicro servers, each containing eight Nvidia H200 or Intel Gaudi 3 chips, to run what they claim to be the world’s first FP16 precision Llama 3.1 405B data validation test for the two chipsets.
Across the three test cases – AI research, history, and coding scenarios – the H200 delivered a consistent 9-9.5x speedup, maintaining output generation of ~25-26 tokens per second regardless of output length. By comparison, Gaudi 3 managed only ~2.7 tokens per second.
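For context, the quoted speedup follows directly from those throughput figures; the short Python sketch below simply reproduces the arithmetic and is not part of the test harness itself:

    # Illustrative arithmetic only, using the approximate throughput figures
    # reported above; not the benchmark code used by Aquatron and LANL.
    h200_tokens_per_sec = 25.5    # midpoint of the reported ~25-26 tokens/s
    gaudi3_tokens_per_sec = 2.7   # reported Gaudi 3 throughput
    speedup = h200_tokens_per_sec / gaudi3_tokens_per_sec
    print(f"H200 vs Gaudi 3 speedup: ~{speedup:.1f}x")  # ~9.4x, within the quoted 9-9.5x range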
Aquatron and LANL also found that, while both accelerators show performance degradation with longer outputs, Nvidia’s H200 scales more efficiently, maintaining its advantage as output length increases.
Consequently, Aquatron concluded that in production deployments Nvidia’s H200 could potentially handle 9x more inference requests at the same latency, or reduce inference costs by a similar factor, noting that the consistent performance gap across the three tests suggests architectural advantages in the H200 rather than task-specific optimizations.
The consistent advantage across different workloads also suggests the H200 benefits from higher memory bandwidth and more efficient compute units, Aquatron added.
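As a rough illustration of that conclusion, the sketch below converts per-second throughput into a cost per request; the server-hour cost and request length are hypothetical placeholders rather than figures from the test:

    # Hypothetical back-of-the-envelope sketch: at roughly 9x the throughput,
    # the same server hour serves roughly 9x the requests, so cost per request
    # falls by about the same factor. The inputs below are assumptions.
    server_hour_cost = 10.0      # assumed cost of one eight-accelerator server hour (US$)
    tokens_per_request = 500     # assumed average output length per request

    def cost_per_request(tokens_per_sec):
        requests_per_hour = tokens_per_sec * 3600 / tokens_per_request
        return server_hour_cost / requests_per_hour

    print(f"H200-class (~25.5 tokens/s): ${cost_per_request(25.5):.4f} per request")
    print(f"Gaudi 3-class (~2.7 tokens/s): ${cost_per_request(2.7):.4f} per request")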
Intel first unveiled its Gaudi 3 AI accelerator in April 2024. However, on the company’s Q3 2024 earnings call, former CEO Pat Gelsinger told analysts that uptake of Gaudi had been slower than anticipated and that Intel would not meet its target of $500 million in revenue for Gaudi in 2024.
DCD has reached out to Intel for comment regarding the results published above.
This is not the first time Aquatron has deployed Intel’s Gaudi 3 chips. CEO and founder Michael Kim says the company previously developed the world’s most compact 64-node Gaudi 3 cluster, a claim he says Intel has confirmed.
Founded in 2023 by Kim, Aquatron says its aim is to “build AGI-ready data centers for our sovereign clients” and claims to have already deployed 560MW+ across four continents via the Aquatron Consortium, although there is little information available online about the company or its data centers.
The company has also developed the Aquatron POD, described by the company as an “all-in-one, plug-and-play system purpose-built for AGI inference and reasoning, featuring a record-low PUE of 1.051 and full-stack control from power to compute, all the way to 98 percent carbon capture.”
In November 2024, Korean IT service management firm ITCen Group signed a memorandum of understanding with Aquatron to build an AI-optimized data center in South Korea.