UPSCALING SHOWDOWN
DLSS, FSR, & XESS
Boosting frame rates via AI and clever programming
NVIDIA HAS BEEN pushing so-called ‘neural rendering’ techniques since the launch of DLSS in 2018. While DLSS was a slow burn at first, there are now more than 500 games and applications that use Nvidia RTX features.
The core idea of neural rendering is to leverage AI models to improve the quality and performance of games and other graphics applications. As pixels become increasingly complex to render, figuring out ways to reduce the number of fully rendered pixels and then interpolating to fill in the gaps can provide a better overall experience.
However, Nvidia’s solutions are designed to only work on Nvidia GPUs. Enter teams red and blue with alternatives that can work on a wider set of hardware. Upscaling and frame generation are here to stay, but how do the various AMD, Intel, and Nvidia solutions stack up, and what does the future hold for neural rendering techniques?
Join us as we cover the state of the upscaling industry and related technologies.
–JARRED WALTON
Nvidia uses the umbrella term ‘neural rendering’ for its AI-based DLSS features that aim to boost frame rates by reducing the number of fully rendered pixels your GPU has to generate.
© EPIC GAMES
UPSCALING 101: THE ALGORITHMS
Fundamentally, upscaling isn’t a new idea. From the very first 2D sprite, games have been using upscaling algorithms. More recently, real-time upsampling of video content became an important feature, and we’ve seen various solutions on DVD, Blu-ray, and HDTV devices over the past couple of decades. Even before DLSS arrived, upscaling was available in games. All you need to do is run a game at a lower resolution than your display’s native resolution, and some form of upscaling happens, either via the GPU or the monitor.
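To make the basics concrete, here is a deliberately tiny sketch (not any vendor's algorithm) of the two simplest display-style upscalers the paragraph alludes to: nearest-neighbor, which just repeats source pixels, and bilinear, which blends the four closest pixels for a smoother but blurrier result. The function names and the 2x2 sample image are illustrative inventions.

```python
# Toy illustration of classic upscaling: an "image" is a 2D list of floats.

def nearest_neighbor(img, new_w, new_h):
    """Upscale by copying the closest source pixel (blocky, hard edges)."""
    old_h, old_w = len(img), len(img[0])
    return [[img[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)]
            for y in range(new_h)]

def bilinear(img, new_w, new_h):
    """Upscale by linearly interpolating the four nearest source pixels."""
    old_h, old_w = len(img), len(img[0])
    out = []
    for y in range(new_h):
        # Map the output pixel back into fractional source coordinates.
        sy = y * (old_h - 1) / max(new_h - 1, 1)
        y0, fy = int(sy), sy - int(sy)
        y1 = min(y0 + 1, old_h - 1)
        row = []
        for x in range(new_w):
            sx = x * (old_w - 1) / max(new_w - 1, 1)
            x0, fx = int(sx), sx - int(sx)
            x1 = min(x0 + 1, old_w - 1)
            # Blend horizontally on the two bracketing rows, then vertically.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

src = [[0.0, 1.0],
       [1.0, 0.0]]
big_nn = nearest_neighbor(src, 4, 4)  # preserves hard edges, looks blocky
big_bl = bilinear(src, 4, 4)          # smooths edges, looks softer
```

The trade-off visible even in this toy: nearest-neighbor keeps edges crisp but blocky, while bilinear smears them. Modern game upscalers exist precisely because neither naive approach looks good at large scale factors.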
But we’re more interested in the modern upscaling algorithms in games. At present, the three contenders are Nvidia DLSS, AMD FSR, and Intel XeSS, but there are different versions of each of those, with later iterations generally providing improved quality and additional features like frame generation. Let’s quickly cover the basics of the three solutions, starting with Nvidia DLSS.
NVIDIA DEEP LEARNING SUPER SAMPLING
Nvidia DLSS launched in 2018 as a spatial upscaling algorithm. It required game-specific training on Nvidia’s supercomputers, using tens of thousands of image pairs: a lower-resolution ‘input’ frame matched with a high-resolution ‘ground truth’ output. These pairs fed a deep learning model that was trained to produce higher quality outputs from lower-resolution inputs. A key part of the algorithm is that it handles both upscaling and antialiasing—the removal of ‘jaggies’ on high-contrast edges.
It all sounded nice in theory, but it proved less than ideal in practice. Only a handful of games ever implemented DLSS 1.x, and several of those later received DLSS 2.x upgrades. There were multiple issues. First, the per-game training added complexity—game developers couldn’t simply plug in a working solution; they had to capture pairs of frames and send them to Nvidia’s supercomputer. Second, image quality was lacking, with noticeable blurriness. Finally, the algorithm wasn’t very flexible: Battlefield V, for example, only allowed DLSS at 4K on an RTX 2080 Ti—at 1440p, the option was locked out. That was also because the spatial algorithm didn’t scale well to higher frame rates, so upscaling to a 1080p target could actually reduce fps.