I recently picked up one of my homebrew projects and started to rework and refactor it (mainly for fun, research and professional reasons). It's essentially a real-time renderer that uses path tracing and supports both direct and indirect lighting (global illumination). It only runs on NVIDIA hardware right now because I'm using CUDA, which is NVIDIA-specific. Given the work involved, that probably won't change any time soon, and I'm a bit of a Team Green fan and developer anyway so...
For anyone interested, I'm using CUDA, C++, and OpenGL 4.3 - the OpenGL interop is used purely so that I effectively have a canvas and buffers to read from and write to, rather than for rasterisation. My path tracer doesn't use rasterisation or GPU shader programming at all; it simply iterates over every screen pixel and determines its colour by firing billions of rays out into the 3D world and accumulating the results. Simples.
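To give a flavour of what that interop looks like, here's a minimal sketch: an OpenGL pixel buffer object is registered with CUDA once, then mapped each frame so a kernel can write colours straight into it before OpenGL blits it to the screen. The names here (drawPixels, registerPbo, etc.) are invented for the example and the details will differ from my actual code.

#include <cuda_runtime.h>
#include <cuda_gl_interop.h>   // assumes a GL loader header (GLEW/GLAD) is included first

// Hypothetical example kernel: write a flat colour to every pixel.
__global__ void drawPixels(uchar4* pixels, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    pixels[y * width + x] = make_uchar4(64, 128, 255, 255);
}

cudaGraphicsResource* pboResource = nullptr;

// One-off: register an already-created OpenGL pixel buffer object with CUDA.
void registerPbo(GLuint pbo)
{
    cudaGraphicsGLRegisterBuffer(&pboResource, pbo, cudaGraphicsMapFlagsWriteDiscard);
}

// Per frame: map the PBO, let the kernel write into it, then unmap it so
// OpenGL can copy the contents into a texture and display it.
void renderFrame(int width, int height)
{
    uchar4* pixels = nullptr;
    size_t numBytes = 0;

    cudaGraphicsMapResources(1, &pboResource, 0);
    cudaGraphicsResourceGetMappedPointer((void**)&pixels, &numBytes, pboResource);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    drawPixels<<<grid, block>>>(pixels, width, height);

    cudaGraphicsUnmapResources(1, &pboResource, 0);
}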
I was able to improve the performance of the path tracer significantly by following some recent research on GPU path tracing and implementing its findings. Whilst the results were promising, it did somewhat increase the complexity of the codebase, and I've still yet to finish/fix some of the features I'm bringing across into version 2.0 of my path tracer. Some of the problems I'm facing... well... I'm not even sure how, or if, I can overcome them yet. But that's part of the fun and challenge! I was also desperate to get some sort of user interface in there, so I spent a few hours integrating ImGui. It works great! I haven't even begun to consider what I want and need in the UI yet, so it's all a bit 'debug' and 'developer' based, and parts of it are already out of date due to the changes I'm making. But the main thing is that it proved the UI integration works, and works well - hence ImGui is staying.
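For anyone curious what "integrating ImGui" amounts to, the shape of it is roughly the sketch below, assuming the stock GLFW + OpenGL 3 backends that ship with Dear ImGui; my actual setup and widgets differ, and the stats shown are placeholders.

#include "imgui.h"
#include "imgui_impl_glfw.h"
#include "imgui_impl_opengl3.h"

// One-off setup, after the GLFW window and GL context exist.
void initUi(GLFWwindow* window)
{
    IMGUI_CHECKVERSION();
    ImGui::CreateContext();
    ImGui_ImplGlfw_InitForOpenGL(window, true);
    ImGui_ImplOpenGL3_Init("#version 430");
}

// Per frame, after the path-traced image has been drawn.
void drawUi(int sampleCount, float msPerFrame)
{
    ImGui_ImplOpenGL3_NewFrame();
    ImGui_ImplGlfw_NewFrame();
    ImGui::NewFrame();

    ImGui::Begin("Debug");   // placeholder 'developer' window
    ImGui::Text("Samples per pixel: %d", sampleCount);
    ImGui::Text("Frame time: %.2f ms", msPerFrame);
    ImGui::End();

    ImGui::Render();
    ImGui_ImplOpenGL3_RenderDrawData(ImGui::GetDrawData());
}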
Denoising is hugely important in today's rendering tech as we move more and more towards real-time ray-traced / path-traced graphics that align closer to the so-called rendering equation. The problem is that these rendering techniques produce noisy images and can require an enormous number of calculations before acceptable imagery is produced. The more times a pixel is 'sampled', the better the resulting image appears; however, that can mean thousands of samples per pixel, and those samples take a lot of time to compute. Not good when you're targeting real-time at 30fps/60fps or higher. Denoising allows much lower sample counts by intelligently estimating what each pixel's colour should be: instead of thousands of samples it only needs, say, a few, and makes a best guess from those. Results and research in this area are proving very promising, and I'm hoping to integrate some form of denoising directly into my path tracer at some point (either directly in the per-frame updates or as a final on-demand step).
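To make the idea concrete, here's a heavily simplified sketch of one common approach: a single edge-aware smoothing pass over the noisy frame buffer, in the spirit of the à-trous wavelet filters used in real-time denoisers. This isn't what I've implemented, just an illustration of how neighbouring pixels can be blended while trying to preserve edges; a production denoiser would also weight by auxiliary buffers (normals, depth, albedo) and/or accumulate temporally.

#include <cuda_runtime.h>

// Hypothetical CUDA kernel: one edge-aware blur pass over a noisy HDR frame.
// 'step' widens the footprint on successive passes (1, 2, 4, ...), as in an
// a-trous wavelet filter.
__global__ void denoisePass(const float3* in, float3* out,
                            int width, int height, int step, float colourSigma)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float3 centre = in[y * width + x];
    float3 sum = make_float3(0.0f, 0.0f, 0.0f);
    float weightSum = 0.0f;

    for (int dy = -2; dy <= 2; ++dy)
    {
        for (int dx = -2; dx <= 2; ++dx)
        {
            int sx = min(max(x + dx * step, 0), width - 1);
            int sy = min(max(y + dy * step, 0), height - 1);
            float3 sample = in[sy * width + sx];

            // Down-weight neighbours whose colour differs a lot from the centre,
            // so smoothing happens inside surfaces but not across hard edges.
            float3 diff = make_float3(sample.x - centre.x,
                                      sample.y - centre.y,
                                      sample.z - centre.z);
            float dist2 = diff.x * diff.x + diff.y * diff.y + diff.z * diff.z;
            float w = __expf(-dist2 / (colourSigma * colourSigma));

            sum.x += sample.x * w;
            sum.y += sample.y * w;
            sum.z += sample.z * w;
            weightSum += w;
        }
    }

    out[y * width + x] = make_float3(sum.x / weightSum,
                                     sum.y / weightSum,
                                     sum.z / weightSum);
}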
Oh yeah, stepping up from a 2080 Ti to a 3090 also helped improve the performance quite a bit!
Anyways - here are a few images.
(Renders hosted on Flickr, by Andy Eder:)
sipt2_20211220_222148_1920x1080_s6358
sipt2_20211218_000848_2560x1440_s5671
grab_20210124_005532_s6350
sipt2_20220206_022113_1920x1080_s32501
sipt2_20220209_035644_1920x1080_s3993
sipt2_20220219_045124_1920x1080_s11073
sipt2_20220219_173550_1920x1080_s11636
sipt2_20220302_122310_1920x1080_s34516
sipt2_20220303_031939_1920x1080_s10119
sipt2_20220306_010809_1920x1080_s15395