When we set out to roll up our collective sleeves and learn what we could about NVIDIA's new
Turing GPU architecture and the new features and capabilities that power the
GeForce RTX series graphics cards, we knew there would be a lot of ground to cover. After all, it has been over two years -- May 2016 to be exact -- since
NVIDIA launched a new GPU architecture (Pascal). That's what you'd call an eternity with respect to cutting-edge computing technology. And frankly, with the lack of serious competition at the high end from its rival AMD, what would be the motivation?
Instead, it appears that NVIDIA took its time, invested in a few key areas like
machine learning, and re-engineered, and in some cases re-invented, 3D graphics and game rendering technologies to lay the foundation for future advancements, beyond just the standard kickers that come from newer manufacturing processes, beefier GPU engines, and a higher-speed memory interface.
So yes, NVIDIA's Turing has hundreds more shader cores on the high end and, in some configurations, goosed-up GPU clocks, a much higher-bandwidth GDDR6 memory interface, and a tuned and optimized architecture with respect to cache resources and the GPU instruction pipeline. As a result, you can expect Turing to be notably faster in legacy games that do not specifically take advantage of its new on-board
Ray Tracing engines or
Tensor cores. In addition, NVIDIA has tuned and augmented Turing's shader pipeline so that game developers can make more efficient use of shader resources, achieving higher performance and greater visual fidelity in both existing and future titles.
Performance? To Be Continued...
All of these upgrades and enhancements alone should equate to significantly better general performance for Turing versus NVIDIA's previous-generation Pascal architecture. However, when you factor in what NVIDIA has brought to the table in terms of new, emerging graphics technologies, like Turing's RT cores for real-time Ray Tracing in games and its Tensor cores for
DLSS Anti-Aliasing and better visuals via inferencing and machine learning, you have an impressive amount of true innovation that should, by all estimations, offer major advancements in 3D graphics for gaming and
pro-graphics workstation applications alike.
That's where we stand today, though in a few short days you'll hear from us again with specific performance details for NVIDIA's GeForce RTX 2080 and GeForce RTX 2080 Ti graphics cards (so stay tuned). Beyond that, there's the question of industry adoption for NVIDIA's new Ray Tracing and Tensor core technologies and their associated feature sets. How quickly will we see these features in AAA-title games? We've already heard of titles like
Shadow Of The Tomb Raider,
Battlefield V, Metro Exodus and
MechWarrior 5 (among others) and their support for NVIDIA RTX Ray Tracing or DLSS, so it appears NVIDIA is off to a good start.
How this plays out in the graphics landscape moving forward versus AMD (and maybe Intel in a couple of years) remains to be seen, but for now, NVIDIA is well-positioned to lead the graphics charge with innovative, ground-breaking technologies that will bolster both performance and image quality for more immersive, next-generation gaming. And as we've mentioned, stay tuned here for more on Turing's performance in the days ahead...