Following on from my previous post on raytracing shizzle on the GPU and the performance vs. quality issue... feel free to skip this one if you can't be arsed, I won't be offended.
These images were taken a few moments ago from my own GPU path tracer. The rendering runs wholly on the GPU at 100% utilisation (the GPU being an EVGA 1080 Ti FTW3). I am rendering a single frame of a very, very simple scene. My software is fairly well optimised and, due to the nature of the tracing, produces accurate reflections, refractions, shadows, caustics, etc. - as you would expect from this sort of technology. No rasterisation is going on here - it's pure path tracing, all courtesy of CUDA, running millions of instructions per pixel per second in parallel, in real time.
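For anyone curious what "pure path tracing on the GPU" actually looks like in code, here's a stripped-down sketch of the general shape of it: one CUDA thread per pixel, each thread averaging a number of randomly-jittered samples. It's a toy with placeholder names (Camera, trace_path), not lifted from my renderer, but the structure is representative.

```
// Minimal sketch of a CUDA path tracer's structure: one thread per
// pixel, each thread averaging `spp` randomly-jittered samples.
// Camera and trace_path() are placeholders, not my actual renderer.
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

struct Camera { float fov; /* position, orientation, film size, ... */ };

// Placeholder path-tracing routine. A real version would intersect the
// scene, bounce the ray according to the surface material and return
// the radiance carried back along the path; here we just return a
// simple gradient so the sketch compiles and runs.
__device__ float3 trace_path(const Camera&, float u, float v, curandState*)
{
    return make_float3(u, v, 0.5f);
}

__global__ void render(float3* framebuffer, Camera cam,
                       int width, int height, int spp, unsigned seed)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int idx = y * width + x;

    // Per-pixel RNG used to jitter sample positions and (in a real
    // tracer) to pick bounce directions.
    curandState rng;
    curand_init(seed, idx, 0, &rng);

    float3 sum = make_float3(0.f, 0.f, 0.f);
    for (int s = 0; s < spp; ++s) {
        // Jitter the sample position inside the pixel (anti-aliasing).
        float u = (x + curand_uniform(&rng)) / width;
        float v = (y + curand_uniform(&rng)) / height;
        float3 c = trace_path(cam, u, v, &rng);
        sum.x += c.x; sum.y += c.y; sum.z += c.z;
    }

    // Average the samples: noise falls as spp rises, but the cost
    // scales linearly with spp - hence the render times below.
    framebuffer[idx] = make_float3(sum.x / spp, sum.y / spp, sum.z / spp);
}

int main()
{
    const int w = 1280, h = 720, spp = 10;
    float3* fb;
    cudaMallocManaged(&fb, w * h * sizeof(float3));

    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    render<<<grid, block>>>(fb, Camera{90.f}, w, h, spp, 1234u);
    cudaDeviceSynchronize();

    printf("centre pixel: %f %f %f\n",
           fb[(h / 2) * w + w / 2].x,
           fb[(h / 2) * w + w / 2].y,
           fb[(h / 2) * w + w / 2].z);
    cudaFree(fb);
    return 0;
}
```

The expensive bit is, of course, everything hiding behind trace_path: intersecting rays with the scene and bouncing them around until they find a light.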
Below is the image sampled at 2 samples per pixel. There's a lot of 'noise' and the quality is far from acceptable. It took around 0.1s to render.
Below is the image sampled at 10 samples per pixel. There's still a lot of 'noise' and the quality is a little better. It took around 0.5s to render.
We're now (below) at 100 samples per pixel and things are showing improvement. It took around 6.5s to render the frame, and the soft shadows are still noisy.
In the image below we're at 1000 samples per pixel and we're now starting to reach acceptable(ish) quality levels. It took a whole 71 seconds to reach this quality level. For a single frame. Assuming a gamer wants 60fps, that equates to a single frame needing to be rendered and delivered to the screen in around 16 milliseconds. This took 71 seconds - some 4400-odd times slower. Welcome to the computationally expensive world of ray tracing!
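Under the hood, "more samples per pixel" is just averaging: every extra sample gets folded into a running mean for its pixel, and the noise only falls away with the square root of the sample count - which is why ten times the samples doesn't buy anything like ten times the quality. Here's a rough sketch of that accumulation step (placeholder names again, not my renderer verbatim):

```
// Sketch of a progressive accumulation pass: each pass renders a few
// more samples per pixel and folds them into a running average, so the
// image refines over successive passes. The host bumps samples_so_far
// after each pass. Placeholder names, not my renderer's actual code.
__global__ void accumulate(float3* accum,        // running sum of all samples
                           const float3* pass,   // average of this pass's samples
                           float3* display,      // averaged output to show
                           int pixel_count,
                           int samples_so_far,   // samples already in accum
                           int samples_this_pass)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= pixel_count) return;

    // Fold the new pass back into the running total.
    accum[i].x += pass[i].x * samples_this_pass;
    accum[i].y += pass[i].y * samples_this_pass;
    accum[i].z += pass[i].z * samples_this_pass;

    float inv_n = 1.0f / (samples_so_far + samples_this_pass);

    // The displayed image is simply the mean of everything gathered so
    // far; Monte Carlo noise shrinks roughly as 1/sqrt(total samples).
    display[i] = make_float3(accum[i].x * inv_n,
                             accum[i].y * inv_n,
                             accum[i].z * inv_n);
}
```

The square-root behaviour is the killer: to halve the noise you need roughly four times the samples, and therefore four times the render time.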
Here's another example (below) showing off something a little more 'raytracey'. The samples and times are shown again in the images.
Of course, the examples I give here are very simple and not particularly scientific, but they show how expensive the process is for even the simplest of 3D scenes. You're basically looking at in excess of 60 seconds per frame for the above, rather than 60 frames per second.
But that's where the technology gets really cool. nVidia are sampling at a very low rate, so the raw results tend to resemble the first, noisiest images in my examples above. So how come the resulting images don't look that bad? That's where the AI, deep-learning and de-noising tech on the Tensor Cores comes into play. It can take that noisy image and 'estimate' what the final image should look like based on learning - effectively inferring the final result from a very poor quality input. In fact, similar technology can also be used to upscale 1080p or 4k images to 8k, 16k+ resolutions whilst keeping quality losses to a minimum. The technology can amazingly 'infer' the detail and produce impressive results.
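To be clear, the clever part is a trained neural network running on the Tensor Cores, which is way beyond anything I'm going to type into a post. But just to make the idea of de-noising concrete, here's a crude, hand-written edge-aware (bilateral-style) blur in CUDA: it averages a pixel with its neighbours but down-weights any neighbour whose colour is very different, so the grain gets smoothed out while hard edges survive. The learned de-noisers go far beyond this - they can infer plausible detail rather than just smear the noise away.

```
// Crude edge-aware (bilateral-style) de-noise pass - a stand-in to
// illustrate the idea, nothing like the learned de-noisers on RTX.
__global__ void bilateral_denoise(const float3* in, float3* out,
                                  int width, int height,
                                  int radius,          // filter half-width
                                  float colour_sigma)  // edge sensitivity
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float3 centre = in[y * width + x];
    float3 sum = make_float3(0.f, 0.f, 0.f);
    float weight_sum = 0.f;

    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            int nx = min(max(x + dx, 0), width - 1);
            int ny = min(max(y + dy, 0), height - 1);
            float3 c = in[ny * width + nx];

            // Weight by colour similarity: similar pixels contribute a
            // lot, pixels across a hard edge contribute almost nothing.
            float dr = c.x - centre.x, dg = c.y - centre.y, db = c.z - centre.z;
            float w = __expf(-(dr * dr + dg * dg + db * db) /
                             (2.f * colour_sigma * colour_sigma));

            sum.x += c.x * w; sum.y += c.y * w; sum.z += c.z * w;
            weight_sum += w;
        }
    }

    out[y * width + x] = make_float3(sum.x / weight_sum,
                                     sum.y / weight_sum,
                                     sum.z / weight_sum);
}
```

Production de-noisers for path tracing are typically also guided by extra buffers (surface normals, albedo, depth) rather than colour alone, which helps them tell noise apart from genuine scene detail.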
Take a look at the following video. It shows a low-sampled, ray traced image in real time alongside a technique that can (using AI and deep-learning) 'infer' the final result. This is actually pretty incredible: being able to 'guesstimate' what the final image should be from such a poor quality input. This sort of tech is becoming big in the movie industry and has real, tangible benefits in all realms of visuals / image generation. Path tracing combined with these de-noising techniques is key to bringing this tech into the realms of real time at acceptable fidelity.
Another video showing noise reduction at work - a part-collaboration with nVidia. It's basically about training an AI to remove noise from images without the AI ever knowing what noise is or what the final image should look like. Sounds stupid, but it works and the results are incredible. This is what the Tensor Cores will be used for in time - running these algorithms using computed knowledge and producing great visuals even from less-than-ideal input imagery. Yeah, I'm getting my geek on, but I truly did not expect to see this level of technology for at least another 10-15 years.
I'm done. I know I'm preaching to the few rather than the many, but this is (in part) why the new RTX range is so impressive - not so much purely for gamers, but in terms of the future and what is coming. It's just going to take some time to get there. I have to be honest and say I welcome this change in direction, and I hope AMD follow suit (and Intel too, seeing as they're entering the GPU arena in 2020). It's the right way to go if visual realism is the holy grail.