An Introduction to Real-Time Ray Tracing
by Eric Haines and Tomas Akenine-Möller
Computer graphics is the science of generating images from geometrical data, image textures, virtual light sources, and material data. For example, given a geometrical model of a glass of milk standing on a table, a computer graphics program, called a renderer, can generate an image of that scene from a certain camera view. Given different light sources, it is possible to obtain different types and locations of shadows on the table, for example. This process of generating an image is called rendering; we say that we render an image.
There are two main methods for rendering images: rasterization and ray tracing. Up until now, most computer games have used the former method, in which a graphics processing unit (GPU), e.g., a graphics card in a desktop computer, draws triangles (geometry) one at a time and calculates the color of each pixel covered by a triangle. GPUs are incredibly fast at this process and keep getting better each year. Rasterization on GPUs can generate amazing images, and much research has been devoted to making images ever more realistic with this method.
However, what we ideally want to do is follow photons around a scene. For example, for a photon emitted from a light source, we want to follow it until it hits the glass on the table, follow it as it is refracted through the glass into the milk, and see it then be absorbed or emitted again. Photorealism is the goal in computer graphics: generating images with the computer that cannot be distinguished from photographs of a real scene. Rasterization cannot easily do this, since it has been optimized for computing visibility from a single point out through many positions on an image plane. Ray tracing, on the other hand, can create a ray that starts from any point and travels in any direction. That ray can be traced until it hits the first object along that direction. A photon can be simulated with such rays, which often start from different locations and bounce off in different directions. This flexibility is why ray tracing is the best method for generating photorealistic images: it can simulate the physical behavior of light.
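The core operation described above, shooting a ray and finding the first object it hits, can be sketched in a few lines of code. The following is a toy illustration, not code from any particular renderer: it intersects a ray with a list of spheres and reports the closest hit, which is what a ray tracer does, at enormous scale, for every ray it casts.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the smallest positive t where the ray origin + t*direction
    hits the sphere, or None if it misses. direction is assumed normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant (a = 1 for a unit direction)
    if disc < 0.0:
        return None  # the ray misses the sphere entirely
    sq = math.sqrt(disc)
    for t in ((-b - sq) / 2.0, (-b + sq) / 2.0):
        if t > 1e-6:  # ignore hits behind (or exactly at) the ray origin
            return t
    return None

def trace(origin, direction, spheres):
    """Find the closest sphere hit by the ray: test every object and
    keep the hit with the smallest distance t along the ray."""
    closest_t, closest_hit = float("inf"), None
    for center, radius in spheres:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and t < closest_t:
            closest_t, closest_hit = t, (center, radius)
    return closest_t, closest_hit

# A ray from the origin looking down +z toward two spheres.
spheres = [((0.0, 0.0, 5.0), 1.0), ((0.0, 0.0, 10.0), 3.0)]
t, hit = trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), spheres)
# The nearer sphere is hit first, at distance t = 4.0.
```

Real ray tracers replace the linear loop over objects with an acceleration structure (such as a bounding volume hierarchy), so that each ray tests only a tiny fraction of the scene.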
A classic image, generated with ray tracing, is shown below. Rasterization cannot render this image efficiently, due to the many recursive reflections. Eric created this scene more than 30 years ago to benchmark ray tracing algorithms.
Ray tracing is also the primary method used in the feature film industry nowadays, but there an image may take minutes or even many hours to generate. For real-time graphics, where we want images generated at speeds of at least 30 frames per second, and sometimes up to 240 frames per second (i.e., approximately 4 milliseconds per image), ray tracing has not generally been available. One possible way to reach this goal would be to use more processors, since ray tracing is considered massively parallelizable, but Moore's law is slowing, so this approach is hitting its limits and is not generally viable. Instead, it makes sense to add custom, dedicated hardware for basic ray tracing operations to a GPU to increase performance. This is exactly what the recent RTX GPU architecture from NVIDIA has done, and it enables ray tracing in games. For example, the games Battlefield V and Shadow of the Tomb Raider both use ray tracing to achieve increased realism.
Ray tracing in real time is the holy grail of computer graphics, and we are on the cusp of a new paradigm in rendering. We are only at the beginning of this journey, however, and in the coming years it will be exciting to see how new research changes the look of real-time graphics.
As a first step, we have edited a book called Ray Tracing Gems, which contains stand-alone articles about ray tracing, with a focus on real-time rendering. The articles were authored by the best in industry and academia. We wanted to spread practical tips, tricks, algorithms, and techniques for real-time ray tracing in order to act as a catalyst for these approaches. For example, we cannot (yet) afford hundreds or thousands of rays per pixel, but rather only a small number. One popular solution is to use a denoising algorithm, which attempts to remove the unwanted graininess sometimes seen in ray-traced images and so achieve close to cinematic image quality. Ray Tracing Gems contains several articles on this topic.
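To give a flavor of what denoising means, here is a deliberately naive sketch, assuming a grayscale image stored as a list of rows: it simply averages each pixel with its neighbors. Production denoisers, including those in the book, are far more sophisticated (they preserve edges and exploit auxiliary buffers such as normals and albedo), but the goal is the same: trade a little sharpness for much less grain.

```python
def box_denoise(image, radius=1):
    """Average each pixel with its (2*radius+1)^2 neighborhood, clamped
    at the image borders -- a toy stand-in for a real denoiser."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:  # skip out-of-bounds taps
                        acc += image[yy][xx]
                        n += 1
            out[y][x] = acc / n  # mean of the valid neighborhood
    return out

# A single bright "noise spike" in an otherwise dark image.
noisy = [[0.0, 0.0, 0.0],
         [0.0, 9.0, 0.0],
         [0.0, 0.0, 0.0]]
smoothed = box_denoise(noisy)
# The spike is spread out: the center drops from 9.0 to 1.0.
```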
See the cover of Ray Tracing Gems below, which was rendered using ray tracing and denoising.
About the Author
Tomas Akenine-Möller has been a Distinguished Research Scientist at NVIDIA, Sweden, since 2016, and is currently on leave from his position as professor of computer graphics at Lund University. Tomas coauthored Real-Time Rendering and Immersive Linear Algebra, and has written 100+ research papers. Previously, he worked at Ericsson Research and Intel.
Eric Haines currently works at NVIDIA on interactive ray tracing. He co-authored the books Real-Time Rendering, 4th Edition and An Introduction to Ray Tracing, edited The Ray Tracing News, and cofounded the Journal of Graphics Tools and the Journal of Computer Graphics Techniques. He is also the creator and lecturer for the Udacity MOOC Interactive 3D Graphics.
This article was contributed by Eric Haines and Tomas Akenine-Möller, editors of Ray Tracing Gems.