Sorry, I forgot to reply - been a bit busy!
Ah, fair point. From a non-coding point of view, I rely on other software tools to create the 3D scene (models, lighting setup, material properties, etc.). My engine then takes the output from those tools, parses the information into a highly optimised format that it can use, and kicks off the rendering process (to actually generate the imagery). The interactive nature of my engine means that I am free to navigate around the 3D scene and interact with various components within it (albeit in a very limited way). In a nutshell, my engine is an interactive 3D scene viewer rather than some kind of full-blown editing suite.
What tools/workflow do you use, Rob? And which products? I'm always interested in this stuff and what people are using.
As mentioned in the paragraph above, my rendering engine is basically a 3D scene viewer with some interactivity thrown in for good measure. It mainly came into being due to:
- My interest in 3D CGI and the tech behind it
- I wanted to write a ray/path tracer that used the GPU rather than the CPU (I like a challenge!)
- I wanted a sandbox to play with new ideas and to implement new papers/algorithms in the computer graphics domain
- I wanted a sandbox to experiment with ideas that I could use as a hobbyist and also with clients who were interested in my graphics coding abilities
As a result, the rendering engine is a mish-mash of technologies, ideas, hacked algorithms, and research stuff - but geared specifically towards physically based rendering (PBR) to achieve realistic results that closely approximate nature. In other words, it's all about what happens when light rays/photons hit surfaces: how those rays travel through different mediums, how light is scattered/reflected/refracted/transmitted, and so forth. At its heart, my engine uses very similar methods and algorithms to those found in the tools used by Pixar, Industrial Light & Magic, Disney, Weta, etc. Strange, that...
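For anyone curious what "physically based" actually boils down to in practice: it's all about estimating the rendering equation, which is the standard formulation the whole industry works from:

$$
L_o(\mathbf{x},\,\omega_o) \;=\; L_e(\mathbf{x},\,\omega_o) \;+\; \int_{\Omega} f_r(\mathbf{x},\,\omega_i,\,\omega_o)\, L_i(\mathbf{x},\,\omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
$$

i.e. the light leaving a point is whatever the surface emits plus all the light arriving at it, weighted by the material's BRDF and the angle it arrives at. A path tracer estimates that integral with Monte Carlo sampling - fire lots of random rays and average the results.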
Due to it being a viewer rather than an editor, I rely on other tools to author my 3D scenes. My current tool of choice is Blender. It's free and incredibly well-supported, and it's capable of producing fantastic results comparable to those from higher-end products that cost thousands and are used by visual fx studios. I basically use Blender to build the 3D scenes (using existing assets more often than not) and to set textures/materials, lighting, camera, etc. I then export that scene from Blender, and my render engine has custom code that can import/consume that data and render it. Simples!
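For a rough feel of what that import side involves (just a sketch - names like `GpuScene` and `flatten` are made up for illustration, not my actual code), the basic idea is to take whatever the exporter produced and flatten it into contiguous arrays that can be uploaded straight to the GPU:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical flattened scene representation: contiguous arrays
// that can be uploaded to the GPU in one go.
struct GpuScene {
    std::vector<float>         positions;  // xyz per vertex
    std::vector<float>         normals;    // xyz per vertex
    std::vector<std::uint32_t> indices;    // three per triangle
};

struct Vertex { float px, py, pz, nx, ny, nz; };

// Flatten an imported mesh (however it was parsed from the exported
// scene data) into the GPU-friendly layout above.
GpuScene flatten(const std::vector<Vertex>& verts,
                 const std::vector<std::uint32_t>& tris)
{
    GpuScene out;
    out.positions.reserve(verts.size() * 3);
    out.normals.reserve(verts.size() * 3);
    for (const Vertex& v : verts) {
        out.positions.insert(out.positions.end(), { v.px, v.py, v.pz });
        out.normals.insert(out.normals.end(),   { v.nx, v.ny, v.nz });
    }
    out.indices = tris;  // already one flat index list
    return out;
}
```

GPUs are much happier with big flat buffers than with pointer-chasing scene graphs, hence the flattening.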
In fact, hold on... some examples...
View attachment 1623887
There you go, Blender in all its glory! As you can see, I use it to position objects, set texturing and material properties, and so forth. At this moment in time, I am going through a huge rewrite, so the lighting and camera information that I set up in the scene currently isn't imported... hence my render engine currently handles lighting by using image-based lighting (IBL). More on that in a moment.
View attachment 1623888
So yeah, I fiddle with the scene in Blender and - when I'm ready to go - I export the scene and load it into my render engine. It's quite useful being able to compare the path-traced results that my engine produces directly against those produced by the Cycles renderer built into Blender. There are a lot of similarities but, again, this is due to the fact that - at their heart - they are using very similar technologies and algorithms (which is kinda becoming standard in the CG rendering domain these days).
Does my engine have a GUI? Well, sort of.
Again, my engine is not an editor and is very much a research project/hobby, so the GUI is very limited and changes to reflect what I am working on. Or what I am trying to fix when I hit issues. And that happens a lot. As much as I love this s**t, it can be a complete mind f**k at times. I won't lie, a lot of the physics and mathematics goes over my head sometimes and I have to seek help, or just rely on the mathematical proofs/equations being correct even if I can't understand some of the more complex stuff! LOLz! Anyways, I digress... my GUI is mainly focused on showing me useful information (such as camera position and view direction) and other debug stuff so that I can track down and fix issues with the rendered imagery. It also gives me a simple means of interacting (in a limited way) with components in the scene. So, for example, I can access the materials I set up in Blender and tweak them on the fly in my rendering engine. Here I have changed the car's paintwork from red to green...
View attachment 1623889
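Conceptually, that on-the-fly tweaking is about as simple as it sounds. As a sketch (using Dear ImGui purely as an example GUI library - I'm not claiming that's what my GUI is built with), a material editor panel can be as little as:

```cpp
#include "imgui.h"  // Dear ImGui, used here purely as an example

struct Material {
    float baseColor[3];  // linear RGB albedo, e.g. the car paint
    float roughness;
    float metallic;
};

// Draw a tiny editor panel for one material. Returns true if
// anything was changed this frame.
bool drawMaterialEditor(Material& m)
{
    bool changed = false;
    ImGui::Begin("Material");
    changed |= ImGui::ColorEdit3("Base colour", m.baseColor);
    changed |= ImGui::SliderFloat("Roughness", &m.roughness, 0.0f, 1.0f);
    changed |= ImGui::SliderFloat("Metallic",  &m.metallic,  0.0f, 1.0f);
    ImGui::End();
    return changed;
}
```

The one wrinkle in a progressive path tracer is that any change invalidates the samples accumulated so far, so the caller has to reset the image and let it converge again.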
In addition to playing with materials, I can change the camera aperture, focal length, depth of field, etc. It's not a fancy GUI by any means, but it's functional! I can also tweak lighting, which is useful because I currently cannot import lighting information from Blender. Well, I can't import DIRECT lighting information (i.e. explicit light sources such as point lights, cone lights, directional lights, etc.) - the lights typically used to represent any light source in a 3D scene other than, perhaps, the sun! This will be fixed at some point - my render engine does have support for direct lights - but it will be a while until that is back in and working again.
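On the camera side, aperture/focal length/depth of field in a path tracer usually comes down to the classic thin-lens model: jitter each ray's origin across a disc-shaped aperture and re-aim it at the plane of focus. A sketch (simplified, camera-space, made-up names - not my actual code):

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

// Thin-lens depth of field: points exactly at focusDistance stay
// sharp; everything else blurs in proportion to the aperture radius.
// Assumes pinholeDir is unit length and camera space is axis-aligned.
Ray thinLensRay(Vec3 pinholeOrigin, Vec3 pinholeDir,
                float apertureRadius, float focusDistance,
                std::mt19937& rng)
{
    // Sample a point on the lens disc (rejection sampling for brevity).
    std::uniform_real_distribution<float> u(-1.0f, 1.0f);
    float lx, ly;
    do { lx = u(rng); ly = u(rng); } while (lx * lx + ly * ly > 1.0f);
    lx *= apertureRadius;
    ly *= apertureRadius;

    // Where the original pinhole ray meets the plane of focus.
    Vec3 focusPoint = { pinholeOrigin.x + pinholeDir.x * focusDistance,
                        pinholeOrigin.y + pinholeDir.y * focusDistance,
                        pinholeOrigin.z + pinholeDir.z * focusDistance };

    // Offset the origin across the lens and re-aim at the focus point.
    Vec3 origin = { pinholeOrigin.x + lx, pinholeOrigin.y + ly, pinholeOrigin.z };
    Vec3 d = { focusPoint.x - origin.x, focusPoint.y - origin.y,
               focusPoint.z - origin.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { origin, { d.x / len, d.y / len, d.z / len } };
}
```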
So, in the meantime, I am relying on IBL (image-based lighting) to light the scene. This is no bad thing, as it's what film and visual fx studios use to provide the global illumination (INDIRECT) lighting for their 3D scenes. It gives a very realistic look and helps ground objects within the environment. Basically, an image is used to wrap the entire scene, and that image stores radiance data as opposed to just RGB colour values - so it contains both colour information and light intensity. Hence the image may contain the sun with realistic lighting values encoded in it, which I can then use to light the scene as if in the real world. It means my rendering engine produces true HDR images (which I can then tonemap back to standard SDR if need be). In the image above, the cartoon AE86 car is lit purely by the image of the shopping mall you see around it. Whilst it looks like just a shopping mall photo, that image also contains the light intensities (as described), so light sources in the image are representative of real-life light sources and can be treated as such to light the car. It's pretty cool and a common way to light scenes these days, especially in visual fx and archviz. My render engine can load HDRI images for IBL, and I can then manipulate their intensity and/or rotate the environment image to find better lighting angles, etc.
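The lookup itself is pleasingly simple: map the ray direction to latitude/longitude, index into the HDR image, and scale. Something along these lines (a sketch assuming an equirectangular float image - the names are illustrative, not my actual code):

```cpp
#include <algorithm>
#include <cmath>

// One HDR texel: radiance, not display colour, so values can sit
// way above 1.0.
struct Radiance { float r, g, b; };

struct EnvMap {
    const Radiance* pixels;  // width * height texels, equirectangular
    int   width, height;
    float intensity;         // user-tweakable multiplier
    float rotation;          // user-tweakable yaw, in radians
};

// Fetch the radiance arriving from unit direction (dx, dy, dz), y up.
// Nearest-neighbour lookup to keep the sketch short.
Radiance sampleEnvironment(const EnvMap& env, float dx, float dy, float dz)
{
    const float PI = 3.14159265358979f;

    // Direction -> spherical coords, applying the user's yaw rotation.
    float phi   = std::atan2(dz, dx) + env.rotation;                  // longitude
    float theta = std::acos(std::fmax(-1.0f, std::fmin(1.0f, dy)));  // from +y

    // Spherical coords -> equirectangular UV in [0,1).
    float u = phi / (2.0f * PI);
    u -= std::floor(u);          // wrap longitude
    float v = theta / PI;

    int x = std::min(env.width  - 1, (int)(u * env.width));
    int y = std::min(env.height - 1, (int)(v * env.height));

    Radiance c = env.pixels[y * env.width + x];
    c.r *= env.intensity; c.g *= env.intensity; c.b *= env.intensity;
    return c;
}
```

Because the texels store radiance rather than display colours, the values can be far above 1.0 - which is exactly what makes the resulting render true HDR.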
Thank you, mate - it keeps me busy and entertained.
I've broken quite a few things at the moment but - thankfully - it's still just about capable of producing half-decent images. It's just a shame I've broken the normal mapping stuff and transmissive materials aren't rendering correctly! Oops. That said, it is being rewritten, so it will get fixed in time. I've rendered a few more images recently (I'll share some in a following post shortly) - these images use the newly rewritten/re-implemented BRDFs I've been working on for the diffuse and specular response (a Lambertian diffuse model and a GGX-based microfacet specular model, for those interested!)
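For those wanting the gory details: the specular half hinges on the GGX normal distribution, and the diffuse half is plain Lambert. Roughly (a sketch showing just the distribution and diffuse terms - a full microfacet model also needs the Fresnel and geometry/shadowing terms):

```cpp
#include <cmath>

// GGX / Trowbridge-Reitz normal distribution function: how microfacet
// normals are statistically distributed for a given roughness.
// nDotH = cosine between surface normal and half vector,
// alpha  = perceptual roughness squared (a common remapping).
float ggxD(float nDotH, float alpha)
{
    const float PI = 3.14159265358979f;
    float a2 = alpha * alpha;
    float d  = nDotH * nDotH * (a2 - 1.0f) + 1.0f;
    return a2 / (PI * d * d);
}

// Lambertian diffuse BRDF: constant in all directions, so it's
// simply albedo / pi (the pi keeps it energy-conserving).
float lambert(float albedo)
{
    const float PI = 3.14159265358979f;
    return albedo / PI;
}
```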
It's possible, mate. There's no reason why I couldn't write it as a plugin for Blender or some other system that uses plug-in renderers. Obviously, that would require some additional effort to 'speak' to the target system, but it's doable. Alas, my code is a long way from being feature-complete and I don't think it ever will be. This is purely a fun sandbox project. That's not to say I won't ever branch off and produce a commercial-grade variant for release, though. Yeah, like I'm going to get time to do that!