The best gaming GPUs from team green.
Founded in 1993, Nvidia has seldom known defeat. It has bankrupted or forced many of its competitors out of the market, largely thanks to high-quality products. It makes many of the best graphics cards, and it has been a primary pusher of the hardware enabling deep learning and AI. Still, there are shortcomings, like the high prices of its latest RTX 40-series cards and some questionable features like Frame Generation.
But let’s step out of the somewhat depressing present and look back at Nvidia’s exciting past. It’s extraordinary how many great GPUs Nvidia has made over the 24 years since the GeForce brand was established, and most of them were able to go head-to-head with AMD’s best. Here are — in our opinion — Nvidia’s five best-ever gaming GPUs, counting down from five to one, and considering both individual cards and their families as a whole.
5 — GeForce RTX 3060 (12GB)

On paper, the RTX 30-series sounded pretty good in 2020. It featured a more fleshed-out architecture with improved ray tracing and tensor cores, it offered tons more raw performance, and it even returned to an attractive pricing structure. In practice, retail pricing was nowhere near where it should have been. It was hard to find a 30-series GPU at anything close to MSRP — or any graphics card of the time, for that matter.
Nevertheless, in the months following the launch of the 30-series in late 2020, Nvidia continued to add new models to the lineup, working its way down the performance stack. The midrange was particularly important, as the RTX 2060 and 2060 Super from the prior RTX 20-series weren’t exactly amazing follow-ups to the GTX 1060. Given that anyone with a pulse would buy a GPU in 2021, Nvidia didn’t really have to put much effort into a new GPU, but it surprised everyone with the RTX 3060.
Nvidia doesn’t always nail its midrange offerings, but the RTX 3060 was a great exception. It was a significant step up from the RTX 2060 in performance, and most notably it had double the VRAM at 12GB. That was more VRAM than even the regular RTX 3080 with its 10GB (though there was a 3080 12GB that wasn’t really available). Such a big improvement gen-on-gen was pretty remarkable for Nvidia. (We should also note that we’re not including the gimped RTX 3060 8GB in this discussion.)

What was even more remarkable was how AMD’s competing midrange cards, which were normally quite potent, weren’t all that powerful. The RX 6600 and 6600 XT had decent horsepower but only 8GB of VRAM, had to rely on inferior FSR 1.0 upscaling instead of DLSS, and came out months later. AMD competed pretty well against the rest of the 30-series and usually had a VRAM capacity advantage, but the Navi 23 cards were the exception.
Of course, the GPU shortage was a thing, and the RTX 3060 wasn’t immune — even with Nvidia’s first (and famously self-defeated) attempt at locking out Ethereum mining. The shortage eventually subsided in 2022, which made the 3060 one of the more affordable GPUs, though it never quite reached its $329 MSRP, while the 6600 and 6600 XT both fell well below $300 in time. Additionally, FSR 2.0 offered quality and performance improvements that made it more competitive with DLSS, further reducing the 3060’s advantage.
Still, the RTX 3060 stands as one of Nvidia’s best midrange GPUs ever. Beyond pricing and availability issues, it was a great card that certainly gave AMD a run for its money. It was also uniquely good among the rest of the 30-series, which was almost always way too expensive and/or paired with not nearly enough VRAM to make much sense. It’s a pity the RTX 4060 threw most of that progress away.
4 — GeForce GTX 680

It’s rare for Nvidia to make a serious mistake, but one of its worst was the Fermi architecture. First featured in the GTX 480 in mid-2010, Fermi was not what Nvidia needed: it offered only a modest performance boost over the 200-series while consuming tons of power. Things were so bad that Nvidia rushed out a second version of Fermi and the GTX 500-series before 2010 ended, which thankfully resulted in a more efficient product.
Fermi doubtlessly caused Nvidia to do a little soul searching, and the company rethought its traditional strategy. For most of the 2000s, Nvidia lagged behind Radeon (owned first by ATI and then AMD) when it came to nodes. While newer nodes offered better efficiency, performance, and density, they were also much more expensive to use, and there were often “teething pains.” By the mid 2000s, Nvidia’s main strategy was to make big GPUs on older nodes, which often was enough to put GeForce in first place.
The experience with Fermi was so traumatic that Nvidia decided it would get to the 28nm node right alongside AMD in early 2012. Kepler, Nvidia’s first 28nm GPU, was a very different design from Fermi and prior Nvidia architectures. It used the latest process, its biggest version was relatively lean at just under 300mm2, and it offered great efficiency. The contest between the rival flagships from Nvidia and AMD was set to be very different in 2012.
Although AMD fired the first shot with its HD 7970, Nvidia countered three months later with its Kepler-powered GTX 680. Not only was the 680 faster than the 7970, it was more efficient and smaller, which were the very areas where AMD excelled with the HD 4000- and 5000-series GPUs. Granted, Nvidia only had a thin lead in these metrics, but it was rare — maybe even unprecedented — that Nvidia was ahead in all three.
Nvidia didn’t keep the performance crown for long, as the HD 7970 GHz Edition and better-performing AMD drivers arrived, but Nvidia still held the edge in power efficiency and area efficiency. Kepler continued to give AMD trouble, as a second revision powered the GTX 700-series and forced the launch of a very hot and power-hungry Radeon R9 290X. True, the R9 290X did beat the GTX 780, but it was very Fermi-like, and the GTX 780 Ti took back the crown anyway.
Although not particularly well-remembered today, the GTX 680 probably should be. Nvidia achieved a very impressive improvement over the disappointing Fermi architecture using AMD’s own playbook. That lack of recognition might be because it got overshadowed by a later GPU that pulled off the same trick even better.
3 — GeForce GTX 980

From the emergence of the modern Nvidia versus AMD/ATI rivalry in the early 2000s to the early 2010s, both GeForce and Radeon traded blows generation after generation. Sure, Nvidia won most of the time, but usually ATI (and later AMD) wasn’t far behind; the only time one side was totally beaten was when ATI’s Radeon 9700 Pro decimated Nvidia’s GeForce 4 Ti 4600. However, Nvidia came pretty close to replicating this scenario a couple of times.
By the mid-2010s, the stars must have aligned for Nvidia. Semiconductor foundries around the world were having serious issues getting beyond the 28nm node, including Nvidia’s and AMD’s GPU manufacturing partner TSMC. This meant that Nvidia could get comfortable with its old strategy of making big GPUs on old nodes without worrying about AMD countering with a brand-new node. Additionally, as AMD was risking bankruptcy, it effectively had no resources to compete with a wealthy Nvidia.
These two factors coinciding made for a perfect storm. Nvidia had already done a very respectable job with the Kepler architecture in the GTX 600- and 700-series, but the brand-new Maxwell architecture for the GTX 900-series (and the GTX 750 Ti) was something else. It squeezed even more performance, power efficiency, and density out of the aging 28nm node.

The flagship GTX 980 wiped the floor with both AMD’s R9 290X and Nvidia’s own last-gen GTX 780 Ti. Like the 680, the GTX 980 was faster, more efficient, and smaller than its rivals, but this time the leads were absolutely massive. The 980 was nearly twice as efficient as the 290X, performed about 15% faster, and shaved nearly 40mm2 off the die area. Compared to the 780 Ti, the 980 was almost 40% more efficient, about 10% faster, and had a die over 160mm2 smaller.
This wasn’t a victory quite on the level of the Radeon 9700 Pro, but it was massive all the same. It was essentially on par with what AMD did to Nvidia with the HD 5870. Except, instead of responding with a bad GPU, AMD had nothing to throw back at Nvidia. All AMD could do in 2014 was hang on for dear life with its aging Radeon 200-series.
In 2015, AMD tried its best to compete again, but only at the high-end. It decided to refresh the Radeon 200-series as the Radeon 300-series from the low-end to the upper midrange, and then use its brand-new Fury lineup for the top-end. Nvidia, however, had an even bigger Maxwell GPU waiting in the wings to cut off AMD’s hopeful R9 Fury X, and the GTX 980 Ti did exactly that. With 6GB of memory, the 980 Ti became the obvious choice over the 4GB-equipped Fury X (which was actually a decent card otherwise).
The GTX 900-series was a great victory for Nvidia, but it permanently altered the landscape of gaming graphics cards. The Fury X was AMD’s last competitive flagship until the RX 6900 XT in 2020, largely because AMD stopped making flagships every generation. AMD is back to regularly making flagship GPUs (knock on wood), but Maxwell mauled Radeon so badly that it didn’t recover for many years.
2 — GeForce 8800 GTX

The early 2000s saw the emergence of modern graphics cards as both Nvidia and ATI made progress in crucial areas. Nvidia’s GeForce 256 introduced hardware-accelerated transform and lighting, while ATI’s Radeon 9700 Pro revealed that GPUs should pack more computational hardware and could be really big. When Nvidia took a big loss at the hands of the 9700 Pro in 2002, it really took that lesson to heart and began making bigger and better GPUs.
Although ATI had started the arms race, Nvidia was dead set on winning it. Both Nvidia and ATI had made GPUs as large as 300mm2 or so by late 2006, but Nvidia’s Tesla architecture went up to nearly 500mm2 with the flagship G80 chip. Today, that’s a pretty typical size for a flagship GPU, but back then it was unprecedented.
Tesla debuted with the GeForce 8800 GTX in late 2006, and it delivered a blow to ATI not far off what the Radeon 9700 Pro did to Nvidia just four years prior. Size was the deciding factor between the 8800 GTX and ATI’s flagship Radeon X1950 XTX, which was almost 150mm2 smaller. The 8800 GTX was super fast, as well as pretty power hungry for the time, so you also have the 8800 GTX to thank for normalizing GPUs with 150+ watt TDPs — even if that seems pretty quaint nowadays.
Although ATI was the one to invent the BFGPU, it couldn’t keep up with the 8800 GTX. The HD 2000-series, which only got as large as 420mm2, couldn’t catch up to the G80 chip and Tesla architecture. ATI instead changed tactics and began to focus on making smaller, more efficient GPUs with greater performance density. The HD 3000-series flagship HD 3870 was surprisingly small at just under 200mm2, and the following HD 4000- and 5000-series would follow with similarly small die sizes.
More recently, Nvidia tends to follow up powerful GPUs with even more powerful GPUs to remind AMD who’s boss, but back then Nvidia wasn’t quite like that. The Tesla architecture was so good that Nvidia decided to use it again for the GeForce 9000-series, which was pretty much the GeForce 8000-series with a slight performance bump. Granted, the 9800 GTX was almost half the price of the 8800 GTX, but it still made for a boring GPU.
Although the 8800 GTX is pretty old now, it’s remarkable how modern it is in many ways. It had a die size consistent with today’s high-end GPUs, it used a cooler with aluminum fins, and it had two 6-pin power connectors. It only supported up to DirectX 10, which didn’t really go anywhere, so it can’t really be used for modern gaming, but otherwise it’s very recognizable as a modern GPU.
https://www.tomshardware.com/features/best-nvidia-gpus-of-all-time

