
Realtime Graphics / Game Engines



SharkyUK

ClioSport Club Member
A few more...

51943325922_d9e9d74fa1_k.jpg
sipt2_20220306_021407_1920x1080_s25665 by Andy Eder, on Flickr

51944389828_ee10ae72e5_k.jpg
sipt2_20220306_030904_1920x1080_s19942 by Andy Eder, on Flickr

51944626409_a6b1f4fa81_k.jpg
sipt2_20220307_031731_1920x1080_s30720 by Andy Eder, on Flickr

51943325767_65be1ab623_k.jpg
sipt2_20220307_223341_1920x1080_s45824 by Andy Eder, on Flickr

51944626169_6095bafe5c_k.jpg
sipt2_20220308_032013_1920x1080_s6622 by Andy Eder, on Flickr

51943325497_2eab006a32_k.jpg
sipt2_20220308_183543_1920x1080_s5029 by Andy Eder, on Flickr

51944389253_25a267fb7b_k.jpg
sipt2_20220309_010029_1920x1080_s20892 by Andy Eder, on Flickr

51944389098_1fabf3d8c3_k.jpg
sipt2_20220311_234530_1920x1080_s22281 by Andy Eder, on Flickr

51969289184_2603283788_k.jpg
SIPT Test Render by Andy Eder, on Flickr

51986105822_416da356db_k.jpg
SIPT Test Render by Andy Eder, on Flickr

I also put a recent-ish video together showing v2.0 running for the first time. Sadly, YouTube compression does not play well with path-traced graphics, so what you see isn't a true representation of the final quality!

 

Robbie Corbett

ClioSport Club Member


Absolutely amazing! What are you hoping your application will do that others don't?

It did look incredibly fast in your YouTube video - faster, I think, than whatever engine Blender uses. I found the 3090 to be a big step up in terms of rendering speed too.
 

SharkyUK

ClioSport Club Member
Absolutely amazing! What are you hoping your application will do that others don't?

It did look incredibly fast in your YouTube video - faster, I think, than whatever engine Blender uses. I found the 3090 to be a big step up in terms of rendering speed too.

Thanks, mate. (y)

My path tracer, in many ways, is fundamentally similar to that of the Cycles renderer in Blender. Having had a very quick look through the Cycles code it would appear that they have been following the same research as I have and getting ideas from that, although our realisations of that research obviously differ in terms of interpretation and implementation. I do believe that my path tracer is faster than Cycles but it's not quite so clear cut once you start digging deeper.

Blender's renderer is a LOT more flexible and powerful than mine and is capable, at least for now, of handling far more complex material types (and, consequently, the way light interacts with those material types). I sacrificed that complexity in a bid to see what levels of performance I could reach whilst still maintaining real-time (or interactive) frame rates. Whilst Blender is still very much focused on "offline" rendering (where it doesn't matter if a single frame takes hours to render), I wanted to see how path tracing could be used in a real-time scenario as typically found in gaming (or, indeed, pre-vis for movies and VFX). To take it a step further, I did not want to go down the route of leveraging the likes of RTX as, whilst impressive, it only represents (and accelerates) a small part of what path tracing is about and is closely tied to the existing shader and rasterisation hardware in GPUs as we know them. I'm thinking ahead a few hardware generations, to where we might start to move away from rasterisation and GPUs increasingly become general parallel computing powerhouses.

51944627304_6db9444a5c_k.jpg
sipt2_20220219_035724_1920x1080_s100365 by Andy Eder, on Flickr

As daft as it may seem, it could actually make rendering developers' jobs easier as it means fewer hacks and smoke and mirrors are needed to generate imagery. Right now, modern rasteriser-based game renderers are composed of multiple complex systems that have to work together in harmony to produce the visuals we demand of them. Even RTX cannot fully harmonise this (although it can certainly help with reflections, shadows and global illumination to a point). Hands down, the modern rasteriser-based renderer is a real b!tch to implement and maintain. :LOL: That's not to say a path traced solution is simple (it brings its own problems) but the unified nature of the algorithm means complexity should be reduced. Do you want shadows? No problem, path tracing automatically handles that. Soft shadowing and penumbras? No problem, path tracing handles that. Caustics? No problem, path tracing has your back. Ambient occlusion? Global illumination? No problem, no fancy auxiliary systems are needed as it's inherent to path tracing. It could also mean a reduction in the complexity of uber-shaders, or not having to write hundreds of shaders to handle every different type of surface/material interaction that your game/application may require. Once your path tracing pipeline is set up you can simply implement well-known BRDF/BSDF/BSSRDF/etc. models as per your needs - these are just physically-based algorithms that produce realistic (or at least good) approximations of how light and surfaces interact. If you need special effects then you can simply write stylised variations to suit.
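To make the "implement a well-known BRDF" point a bit more concrete, here's a rough sketch of what one of those building blocks looks like: a Lambertian diffuse BRDF with cosine-weighted hemisphere sampling. This is purely illustrative C++ (the names like Vec3 are made up for the example), not code lifted from my engine:

```cpp
// Illustrative sketch only - not the engine's actual code.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Lambertian (ideal diffuse) BRDF: constant over the hemisphere, f = albedo / pi.
Vec3 lambertianBrdf(const Vec3& albedo)
{
    const float invPi = 1.0f / 3.14159265358979f;
    return { albedo.x * invPi, albedo.y * invPi, albedo.z * invPi };
}

// Cosine-weighted hemisphere sample in the surface's local (tangent) frame,
// given two uniform random numbers u1, u2 in [0,1). pdf = cos(theta) / pi.
Vec3 sampleCosineHemisphere(float u1, float u2, float& pdf)
{
    const float pi = 3.14159265358979f;
    float r   = std::sqrt(u1);
    float phi = 2.0f * pi * u2;
    float z   = std::sqrt(std::max(0.0f, 1.0f - u1));   // cos(theta)
    pdf = z / pi;
    return { r * std::cos(phi), r * std::sin(phi), z };
}
```

Because the sampling pdf matches the cosine term, the cosine and the pi cancel in the Monte Carlo estimator and the per-bounce weight for a diffuse surface collapses to just the albedo - one reason diffuse materials are so cheap in a path tracer. More exotic BSDFs (glass, clearcoat, subsurface) follow exactly the same eval/sample pattern, just with more involved maths.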

51969069383_7228a42ae9_k.jpg
SIPT Test Render by Andy Eder, on Flickr

I'm not really sure where I'm headed with my project as it's very much for fun and research, almost like a sandbox for experimentation. I think I will probably look to improve its material handling and to align it with the current trends and algorithms adopted by many of the big commercial renderers out there. The downside is that it's quite an undertaking as I'll have to refactor a lot of existing code. But that's the challenge and where the fun lies for me. Always learning! I've also been collaborating with a couple of businesses recently (alongside my other contract work) who certainly know a thing or two about computer graphics, and the research and advances being made in this area are very exciting (on both the hardware and software sides).
 

SharkyUK

ClioSport Club Member
Thread bump! Please post some more content Sharky

I've not had as much time as I'd like to work on things recently but here are a few more test renders from some ongoing GPU path tracing research I'm doing.

52469687937_5c15c8104d_k.jpg
SIPT Test Render
by Andy Eder, on Flickr

52470729733_6291d0f6eb_k.jpg
SIPT Test Render
by Andy Eder, on Flickr

52470185611_930c688a97_k.jpg
SIPT Test Render
by Andy Eder, on Flickr

52470729018_484e861f4b_k.jpg
SIPT Test Render
by Andy Eder, on Flickr

52470462759_e6f45be8f4_k.jpg
SIPT Test Render
by Andy Eder, on Flickr

52469686937_c655e550fc_k.jpg
SIPT Test Render
by Andy Eder, on Flickr

52469686882_a457bc1940_k.jpg
SIPT Test Render
by Andy Eder, on Flickr

52469686852_f9628632f9_k.jpg
SIPT Test Render
by Andy Eder, on Flickr

52469686627_ed46b01a4e_k.jpg
SIPT Test Render
by Andy Eder, on Flickr
 

SharkyUK

ClioSport Club Member
A few more test renders. All a bit boring but the project is undergoing a significant rewrite in places and the test renders at least offer some confidence that things are working (or thereabouts!) :ROFLMAO:

52498237874_0107a5d540_k.jpg
Test Render
by Andy Eder, on Flickr

52498509558_58980a6f9b_k.jpg
Test Render
by Andy Eder, on Flickr

52497472082_e7e88ca3f3_k.jpg
Test Render
by Andy Eder, on Flickr

52498509633_552f40b02c_k.jpg
Test Render
by Andy Eder, on Flickr

52498509488_a6f56996fb_k.jpg
Test Render
by Andy Eder, on Flickr

52498509378_86d7f2fe87_k.jpg
Test Render
by Andy Eder, on Flickr

52497471892_4529224127_k.jpg
Test Render
by Andy Eder, on Flickr

52498237649_667ea0479a_k.jpg
Test Render
by Andy Eder, on Flickr

52498237589_f2628482cd_k.jpg
Test Render
by Andy Eder, on Flickr
 

SharkyUK

ClioSport Club Member
I would properly love to see a full walkthrough video of how you got to this point; it's incredibly detailed, but I'm not sure what's actually involved in getting there, if you see what I mean.

How many years have you got? :ROFLMAO:

I've been doing this sort of thing for over 30 years now mate (computer graphics programming/software development). It boils down to a lot of reading/research/experimentation and having a decent grasp of physics and maths. And then couple that with computer programming skills so that you can put things into practice. Simples :ROFLMAO:

It would take a long time to provide a detailed walkthrough of what I'm actually doing here and, as much as I would love to do it, I really don't have the time. I also think most people would probably find it pretty boring! Hence, I'll give you the TL;DR version...

These images are created using a piece of research software that I've been working on over the last few years in some form or other. Some of it has been for pure sh*ts and giggles (as this is also a hobby/passion as well as a career) and some of it has been for customers and clients, some of whom you may have heard of. But I'm not going to name-drop. :p The imagery is generated through path tracing, which is a form of ray tracing. I've basically written a path tracer using C++ and CUDA, which means it runs largely on the GPU and benefits from massively parallel compute performance (compared to CPU rendering).

What does a path tracer do? Basically, for every pixel on the screen, it 'shoots' millions (or billions and trillions) of simulated rays from a virtual camera out into a 3D world. Each individual ray is followed and checks are performed to see if it hits something. If it hits something then magic happens whereby we have to determine what happens next. And what happens next depends on what the ray has hit; the interplay between the light (ray) and the object's surface properties (material) determines the next steps. For example, if the ray hits a mirror then the ray is simply reflected just like a mirror would reflect it. If the ray hits a glass object then we have to create additional rays (which increases the complexity and reduces the performance). If you think about glass, it is both see-through and reflective - hence you have to trace a ray that goes through the glass (taking into account the fact that light bends going through some materials) and you have to trace another ray to account for the reflective aspect of the glass. And so on and so forth. As well as determining the interplay between the light and surface material, you also have to factor in light contributions from direct light sources (street lights, flames, etc.) as well as indirect light sources (global illumination). It all gets a bit complicated and expensive, which is why ray tracing can really hit performance even on modern GPUs. It's also the reason why games still only use ray tracing in a very limited way.
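If you want to see the shape of that algorithm in code, here's a deliberately tiny toy version: plain C++, one hard-coded diffuse sphere, a simple sky, output written as a PPM image. It's nothing like my actual CUDA engine, but the "shoot a ray per pixel, follow the bounces, accumulate samples" structure is the same:

```cpp
// Toy path tracer sketch - illustrative only, nothing like the real engine.
// One diffuse sphere lit by a simple sky gradient; output is a PPM image on stdout.
#include <cmath>
#include <cstdio>
#include <random>

struct Vec { double x, y, z; };
Vec operator+(Vec a, Vec b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec operator-(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec operator*(Vec a, double s) { return {a.x * s, a.y * s, a.z * s}; }
Vec mul(Vec a, Vec b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }          // component-wise
double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec cross(Vec a, Vec b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
Vec norm(Vec a) { return a * (1.0 / std::sqrt(dot(a, a))); }

const Vec    sphereCentre{0, 0, -3};
const double sphereRadius = 1.0;
const Vec    albedo{0.7, 0.3, 0.3};   // diffuse reflectance of the sphere

// Ray/sphere intersection: returns hit distance along the ray, or -1 for a miss.
double hitSphere(Vec o, Vec d) {
    Vec oc = o - sphereCentre;
    double b = dot(oc, d), c = dot(oc, oc) - sphereRadius * sphereRadius;
    double disc = b * b - c;
    if (disc < 0) return -1;
    double t = -b - std::sqrt(disc);
    return t > 1e-4 ? t : -1;
}

int main() {
    const int W = 160, H = 90, SPP = 64, MAX_BOUNCES = 4;
    const double PI = 3.14159265358979;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::printf("P3\n%d %d\n255\n", W, H);
    for (int y = 0; y < H; ++y)
    for (int x = 0; x < W; ++x) {
        Vec pixel{0, 0, 0};
        for (int s = 0; s < SPP; ++s) {                        // many light paths per pixel
            double u = (x + uni(rng)) / W * 2 - 1;
            double v = 1 - (y + uni(rng)) / H * 2;
            Vec o{0, 0, 0};                                     // pinhole camera at the origin
            Vec d = norm({u * 16.0 / 9.0, v, -1.5});
            Vec throughput{1, 1, 1}, radiance{0, 0, 0};
            for (int bounce = 0; bounce < MAX_BOUNCES; ++bounce) {
                double t = hitSphere(o, d);
                if (t < 0) {                                    // missed: pick up light from the sky, stop
                    double sky = 0.5 * (d.y + 1.0);
                    radiance = radiance + mul(throughput, {sky, sky, sky});
                    break;
                }
                Vec p = o + d * t, n = norm(p - sphereCentre);
                throughput = mul(throughput, albedo);           // attenuate by the surface colour
                // Cosine-weighted bounce direction about the surface normal.
                double r1 = 2 * PI * uni(rng), r2 = uni(rng), r2s = std::sqrt(r2);
                Vec w = n;
                Vec a = std::fabs(w.x) > 0.1 ? Vec{0, 1, 0} : Vec{1, 0, 0};
                Vec uAxis = norm(cross(a, w)), vAxis = cross(w, uAxis);
                d = norm(uAxis * (std::cos(r1) * r2s) + vAxis * (std::sin(r1) * r2s) + w * std::sqrt(1 - r2));
                o = p;
            }
            pixel = pixel + radiance * (1.0 / SPP);             // accumulate the sample
        }
        std::printf("%d %d %d\n", int(255 * std::fmin(pixel.x, 1.0)),
                    int(255 * std::fmin(pixel.y, 1.0)), int(255 * std::fmin(pixel.z, 1.0)));
    }
    return 0;
}
```

Swap the single sphere for millions of triangles in an acceleration structure, the sky for proper light sampling, and the inner loops for CUDA kernels, and you're a lot closer to what the real thing is doing.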

sipt2_20221122_013408_1920x1080_s4694.png


Anyways... I could go on but hopefully, that gives an idea of what I'm doing. I'm simulating how light bounces around in the world based on real-world physics. This sort of tech is used by Disney, Pixar, etc. to generate the visuals for their movies. It's pretty cool stuff. Just a bit computationally expensive and sometimes hard to get your head around the concepts and algorithms. (y)
 

Robbie Corbett

ClioSport Club Member
I'm not sure if he meant how you created the engine from a coding point of view or, instead, how you use it to create the rendered images?

I use a few different rendering tools, mostly for showing product concepts to people (most can't see past something that doesn't look semi-realistic), so I have an OK grasp of a very general/basic workflow - materials, lighting, etc. But I would be interested in how you use your program, Sharky: does it have a GUI, how do you assign materials, set lighting, camera settings, etc.?

Some of the results look fantastic! Easily on par with commercial programs that have had thousands (if not more) of man-hours put into them. I think it's incredibly impressive.

Is there scope for a cut down version which could be built into other applications?
 

dann2707

ClioSport Club Member
Mate, that's awesome - thanks for explaining! And also loving the graphics hahaha
 

sn00p

ClioSport Club Member
  A blue one.
How many years have you got? :ROFLMAO:



I've been doing this sort of thing for over 30 years now mate (computer graphics programming/software development). It boils down to a lot of reading/research/experimentation and having a decent grasp of physics and maths. And then couple that with computer programming skills so that you can put things into practice. Simples :ROFLMAO:

I describe it as programmers' brains being wired differently; it's one of those skills that I think you can either do or you can't - it's like playing the bass guitar: it's easy to play badly, much harder to play well. I did comp-sci at university back when they actually taught proper programming. Java hadn't really gripped academia and we had classes in Prolog, Pascal, C, assembler and a lot of embedded stuff, engine management, RTOS, AI and all that. It still baffled me that people got to the final year and couldn't even get "hello world" to compile.

It's about solving puzzles, and I think most programmers don't particularly care for the goal - it's the journey that floats our boats.

That's also why I enjoy reverse engineering, because you're trying to outsmart somebody who thinks they've been smart.
 

c4pob

ClioSport Club Member
  A terrible one
Excuse my ignorance but is the point of your programme to live render or produce static shots? I presume it’s the former? Got any vids of it working?
 

SharkyUK

ClioSport Club Member
Sorry, I forgot to reply - been a bit busy!

I'm not sure if he meant how did you create the engine from a coding point of view or instead - how did you use it to create the rendered images?

Ah, fair point. From a non-coding point-of-view, I rely on other software tools to create the 3D scene (models, lighting setup, material properties, etc.) and my engine then takes the output from those tools, parses the information into a highly optimised format that my engine can use, and then kicks off the rendering process (to actually generate the imagery). The interactive nature of my engine means that I am free to navigate around the 3D scene and interact with various components within it (albeit very limited). In a nutshell, my engine is an interactive 3D scene viewer rather than some kind of full-blown editing suite.

I use a few different rendering tools, mostly for showing product concepts to people (most can't see past something that doesn't look semi realistic) so have an ok grasp on a very general/basic work flow - materials, lighting etc. But I would be interested in how you use your program Sharky, does it have a GUI, how do you assign materials, set lighting, camera settings etc.

What tools/workflow do you use, Rob? And which products? I'm always interested in this stuff and what people are using.

As mentioned in the paragraph above, my rendering engine is basically a 3D scene viewer with some interactivity thrown in for good measure. It mainly came into being due to:
  • My interest in 3D CGI and the tech behind it
  • I wanted to write a ray/path tracer that used the GPU rather than the CPU (I like a challenge!)
  • I wanted a sandbox to play with new ideas and to implement new papers/algorithms in the computer graphics domain
  • I wanted a sandbox to experiment with ideas that I could use as a hobbyist and also with clients who were interested in my graphics coding abilities
As a result, the rendering engine is a mish-mash of technologies, ideas, hacked algorithms, and research stuff - but geared specifically to physically based rendering (PBR) to achieve realistic results that closely approximate nature - i.e. geared towards what happens when light rays/photons hit surfaces, how those rays travel through different media, how light is scattered/reflected/refracted/transmitted, and so forth. At heart, my engine uses very similar methods and algorithms to those used by Pixar, Industrial Light & Magic, Disney, Weta, etc. and in the tools they use. Strange that... ;) :p

Due to it being a viewer rather than an editor, I am relying on other tools to author my 3D scenes. My current tool of choice is Blender. It's free and it's incredibly well-supported. It is also capable of producing some fantastic results comparable to those in higher-end products that cost thousands or that are utilised by visual fx studios. I basically use Blender to build the 3D scenes (using existing assets more often than not) and to also set textures/materials, lighting, camera, etc. I then export that scene from Blender and my render engine has custom code that can import/consume that data, and then render it. Simples! :ROFLMAO: In fact, hold on... some examples...

20221214_blender01.png


There you go, Blender in all its glory! As you can see, I use it to position objects, set texturing and material properties, and so forth. At this moment in time, I am going through a huge rewrite, so the lighting and camera information I set up in the scene currently isn't imported; hence my render engine currently handles lighting by using image-based lighting (IBL). More on that in a moment.

20221214_blender02.png


So yeah, I fiddle with the scene in Blender and - when I'm ready to go - I export the scene and load it into my render engine. It's quite useful being able to compare the path-traced results that my engine produces directly against those produced by the Cycles renderer built into Blender. There are a lot of similarities but, again, this is due to the fact that - at their heart - they are using very similar technologies and algorithms (which is kinda becoming standard in the CG rendering domain these days).

Does my engine have a GUI? Well, sort of. :ROFLMAO:

Again, my engine is not an editor and is very much a research project/hobby so the GUI is very limited and changes to reflect what I am working on. Or what I am trying to fix when I hit issues. And that happens a lot. As much as I love this s**t, it can be a complete mind f**k at times. I won't lie, a lot of physics and mathematics goes over my head sometimes and I have to seek help or just rely on mathematical proofs/equations being correct even if I can't understand some of the more complex stuff! LOLz! Anyways, I digress... my GUI is mainly focused towards allowing me to see useful information (such as camera position and view direction) and other debug stuff so that I can track down and fix issues with the rendered imagery. It also provides me with a simple means with which to have limited interactivity with components in the scene. So, for example, I can access the materials I set up in Blender and tweak them on the fly in my rendering engine. Here I have changed the car's paintwork from red to green...

20221214_sipt01.png


In addition to playing with materials, I can change the camera aperture, focal length, depth of field, etc. It's not a fancy GUI by any means, but it's functional! I can also tweak lighting, which is useful due to the fact that I currently cannot import lighting information from Blender. Well, I can't import DIRECT lighting information (i.e. explicit light sources such as point lights, cone lights, directional lights, etc.) - lights which are typically used to represent any light source in a 3D scene other than, perhaps, the sun! This will be fixed at some point and my render engine does have support for direct lights but... it will be a while until that is in and working again.

So, in the meantime, I am relying on IBL (image-based lighting) to light the scene. This is no bad thing as it is what movies and visual fx studios use to provide the global illumination (INDIRECT) lighting for their 3D scenes. It gives a very realistic look to the scene and helps ground objects within the environment. Basically, an image is used to wrap the entire scene and this image stores radiance data as opposed to just RGB colour values. That way the image contains both colour information and also light intensity - hence the image may contain the sun with realistic lighting values encoded in the image, which I can then use to light the scene as if in the real world. It means my rendering engine produces true HDR images (which I can then tonemap back to standard SDR if needs be). In the image above, the cartoon AE86 car is lit purely by the image of the shopping mall you see around it. Whilst it looks like a shopping mall image, that image also contains the light intensities (as described) so light sources in the image are representative of real-life light sources and can be treated as such to light the car. It's pretty cool and a common way to light scenes these days, especially in visual fx and archviz. My render engine can load HDRi images for IBL and I can then manipulate their intensity and/or rotate the environment image to find better lighting angles, etc.
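For the curious, the IBL lookup itself is conceptually pretty simple: when a ray escapes the scene you convert its direction into lat-long (equirectangular) coordinates and fetch the radiance stored in the HDR image at that texel. A rough illustrative sketch in C++ (not my actual code - the struct and field names are made up for the example):

```cpp
// Illustrative IBL lookup sketch - not the engine's actual code.
#include <algorithm>
#include <cmath>
#include <vector>

struct Radiance { float r, g, b; };

struct HdrEnvironment {
    int width = 0, height = 0;
    float rotation  = 0.0f;            // spin the environment to find nicer lighting angles
    float intensity = 1.0f;            // scale the stored radiance up or down
    std::vector<Radiance> texels;      // width * height linear HDR values (e.g. from a .hdr/.exr)

    // 'dx, dy, dz' is a unit-length world-space ray direction that missed all geometry.
    Radiance sample(float dx, float dy, float dz) const {
        const float pi = 3.14159265358979f;
        // Direction -> spherical angles -> [0,1]^2 lat-long texture coordinates.
        float u = (std::atan2(dx, -dz) + rotation) / (2.0f * pi) + 0.5f;
        float v = std::acos(std::fmax(-1.0f, std::fmin(1.0f, dy))) / pi;
        u -= std::floor(u);                                   // wrap around horizontally
        int x = std::min(width  - 1, int(u * float(width)));
        int y = std::min(height - 1, int(v * float(height)));
        Radiance t = texels[y * width + x];
        return { t.r * intensity, t.g * intensity, t.b * intensity };
    }
};
```

The important difference from a normal texture fetch is that the stored values are unclamped linear radiance, so a bright sun texel genuinely acts as a strong light source when those values feed back into the path tracing estimator.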

Some of the results look fantastic! easily on par with commercial programs that have had thousands (if not more) of man hours put into them. I think its incredibly impressive.

Thank you, mate - it keeps me busy and entertained. :ROFLMAO: I've really broken quite a few things at the moment but - thankfully - it's still just about capable of producing half-decent images. It's just a shame I've broken the normal mapping stuff and transmissive materials aren't rendering correctly! Oops. That said, it is being rewritten so it will get fixed in time. I've rendered a few more images recently (I'll share some in a following post shortly) - these images will be using the newly rewritten/re-implemented BRDFs I've been working on for diffuse and specular response (using the GGX, Lambertian and Microfacet model for those interested!)

Is there scope for a cut down version which could be built into other applications?

It's possible, mate. There's no reason why I couldn't write it as a plugin for Blender or some other system that uses plug-in renderers. Obviously, that would require some additional effort to 'speak' to the target system but it's doable. Alas, my code is a long way from being feature-complete and I don't think it ever will be. This is purely a fun sandbox project. That's not to say I might not branch off and produce a commercial-grade variant for release one day, though. Yeah, like I'm going to get time to do that! :ROFLMAO:
 

Panda.

ClioSport Club Member
  850 T5
Incredible depth of knowledge on it all @SharkyUK especially to be able to code your own rendering software etc! And the renders are stunning and a credit to your efforts!! I wouldn’t even know where to start!!


I’ll stick with AutoCAD, SketchUp and Lumion for now!!
 

SharkyUK

ClioSport Club Member
Makes my 3D renders for the house builder I work for seem so basic! 🤣

I'm sure they are just fine for what you are doing mate!

It's about solving puzzles, and I think most programmers don't particularly care for the goal - it's the journey that floats our boats.

That's also why I enjoy reverse engineering, because you're trying to outsmart somebody who thinks they've been smart.

Exactly that, Ade - it's the constant challenge, the puzzle-solving and the journey. It's addictive and it's a massive buzz. That's why I loved working in the games industry and would love to do so again. If they paid decent money!

Excuse my ignorance but is the point of your programme to live render or produce static shots? I presume it’s the former? Got any vids of it working?

It can be used for both, really. It uses Monte Carlo methods to render the scene, which means that, over time, the rendered image gradually resolves towards a cleaner and more accurate result. Depending on the lighting and material complexity within the scene, the rendered image can initially appear quite noisy as the algorithms attempt to calculate the correct physical representation of what we see as light bounces around and enters our eyes. The longer the time spent rendering, the more 'perfect' the image. Let me find an example...

sipt2_20221214_213827_1920x1080_s832.png


The first image (above) was captured within a few milliseconds of the rendering process starting. You can see the noise/grain in the image. This is because the algorithm hasn't traced enough light paths to generate a satisfactory image. In this example, each individual pixel had been sampled 800 times, which means each pixel has had 800 light paths traced for it (although each light path can bounce multiple times before terminating). However, the process is accumulative and refines the image over time...

sipt2_20221214_214012_1920x1080_s51596.png


...so the second image (above) is a lot clearer and a closer approximation to what we'd expect to see in the real world. The lights shining through the glass-like spheres are clearer (the white blobs are simulated lights I added programmatically) and the floor looks a lot crisper and more defined. In this case, each pixel has had over 50,000 light paths traced for it. I won't do the maths but it's a LOT of calculations. We are talking billions due to the way this process works (and the reason why ray and path tracing is generally so slow).
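Under the hood, that progressive refinement is just a running average: keep summing new samples per pixel and divide by the number of samples taken so far. Something like this rough sketch (illustrative C++ with made-up names; the real thing lives in CUDA on the GPU):

```cpp
// Sketch of progressive (accumulative) rendering: each pass adds one more sample per pixel
// and the displayed image is the running average, so it starts noisy and resolves over time.
#include <cstddef>
#include <vector>

struct Colour { float r = 0, g = 0, b = 0; };

struct ProgressiveFilm {
    int width, height;
    long sampleCount = 0;                        // samples accumulated per pixel so far
    std::vector<Colour> accum;                   // running sum of all samples
    ProgressiveFilm(int w, int h) : width(w), height(h), accum(static_cast<std::size_t>(w) * h) {}

    // Called once per pixel per pass with the radiance estimate of the new light path.
    void addSample(int x, int y, Colour c) {
        Colour& a = accum[static_cast<std::size_t>(y) * width + x];
        a.r += c.r; a.g += c.g; a.b += c.b;
    }

    void endPass() { ++sampleCount; }

    // The image shown on screen: sum / sample count. Restart (zero everything) whenever
    // the camera or scene changes, which is why the view goes grainy again as you move.
    Colour resolve(int x, int y) const {
        const Colour& a = accum[static_cast<std::size_t>(y) * width + x];
        float inv = sampleCount ? 1.0f / float(sampleCount) : 0.0f;
        return { a.r * inv, a.g * inv, a.b * inv };
    }
};
```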

And here's another example, using the same image to provide the lighting for the Lego model. Noisy image...

sipt2_20221214_214702_1920x1080_s140.png


...to a cleaner image (after a period of time).

sipt2_20221214_215159_1920x1080_s33699.png


That's why AI de-noising is such a big thing these days. It means we can get cleaner images in a fraction of the time (or at least that's the intention, and massive strides are being made in this area). The AI 'filters' the noise away.
 

SharkyUK

ClioSport Club Member
Here are a few more test images I made recently with the new changes I'm working on. Not great, a few things are broken as already said...! :ROFLMAO:

Some material tests...

52563857673_ecb4095b8b_k.jpg
SIPT Materials Test
by Andy Eder, on Flickr

52563322236_c7599f51da_k.jpg
SIPT Materials Test
by Andy Eder, on Flickr

52563857588_4b5c002362_k.jpg
SIPT Materials Test
by Andy Eder, on Flickr

52562882807_ed10405c7f_3k.jpg
SIPT Materials Test Render
by Andy Eder, on Flickr

52563874613_6b6c9c8a1e_k.jpg
SIPT Materials Test Render
by Andy Eder, on Flickr

And some more tests...

52563867393_96dffebb36_k.jpg
SIPT Test Render
by Andy Eder, on Flickr

52562880757_ade67d9fbf_3k.jpg
SIPT Test Render
by Andy Eder, on Flickr

52562880237_296ba8f769_3k.jpg
SIPT Test Render
by Andy Eder, on Flickr

52563626644_c68cb6f3c8_k.jpg
SIPT Test Render
by Andy Eder, on Flickr
 

Robbie Corbett

ClioSport Club Member
Flippin awesome answer - need to read through it again so I can give it the justice your effort deserves.
 

Robbie Corbett

ClioSport Club Member

Makes a lot of sense mate - whilst reading the first section I was thinking it would make for an awesome plug-in for Blender!

What tools/workflow do you use, Rob? And which products? I'm always interested in this stuff and what people are using.

I primarily use Solidworks, which is a 3D CAD package. Solidworks has a built-in renderer called PhotoView 360 which is very easy to use but slow. Solidworks has a ton of materials in a built-in library, which also helps, as there is no real way to create a custom texture the way you can with Blender. The renders are CPU-only and high-res ones take an eternity even on my workstation beast, which has the world's core supply. Whilst it's very easy to press go and have something work, it's very difficult to actually get something approaching photo-realistic; I find the lighting is enormously important and takes a long time to get right.

For example - when we were trying to work out bathroom layouts I modelled a 1:1 bathroom in Solidworks - the render is good enough for what I was doing but not exactly realistic:
1671106077967.png



More usually I would use the Solidworks tools for quickly rendering product images to show to other people:

1671106161046.png


1671106174709.png

1671106188340.png


These are concept renders for an amplifier I have been designing on/off for a few years.

Several years ago Solidworks released something called Solidworks Visualize, which for me gives basically identical results to the above, except that it does 90% of the work on the GPU, so renders are substantially faster - especially on a 3090.

Then more recently I've started to use Blender, which is a whole new level. I'm not good with it yet but the results are already much better than the Solidworks rendering tools. I've got examples on another computer which I will pull, but nowhere near as good as the renders you have posted for the 911 etc.!! So I would use Solidworks to create the 3D model, then bring it into Blender to apply textures, etc. It takes me a very, very long time at the moment!

In all programs I find the lighting and camera settings make or break the realism. I generally have been unable to create properly photorealistic renders and I think a lot of it comes from the fact that my camera settings are not what would really be used by a photographer, and your brain just knows that's not physically possible. But for showing customers something sort of real it works out well enough.

Does my engine have a GUI? Well, sort of. :ROFLMAO:

I figured that might be the case 🤣 I sometimes write Python apps for doing specific things and I ALWAYS kick myself a few months later for not taking the time to make a simple GUI so that someone else could also use it. Normally, several months later, I've forgotten the super-specific way that I had to feed in data etc., so I spend days relearning what stupid things I did. Even just for debugging quickly or hashing in a last-minute function I find a GUI helpful... but rarely do I spend the time to actually make one 🤦‍♂️

This is fantastic Sharky and makes for very interesting reading!!! Still not done processing your great reply.
 

SharkyUK

ClioSport Club Member
Cheers, Rob - thanks for the quality reply!

I primarily use Solidworks, which is a 3D CAD package. Solidworks has a built-in renderer called PhotoView 360 which is very easy to use but slow. Solidworks has a ton of materials in a built-in library, which also helps, as there is no real way to create a custom texture the way you can with Blender. The renders are CPU-only and high-res ones take an eternity even on my workstation beast, which has the world's core supply.

Ah yes, Solidworks. I do recall you mentioning it before (either in this thread or elsewhere). I don't have much experience with it at all, certainly not recently. I think it's been around for a while now in one form or another and seems to be widely used across manufacturing and product design. I wouldn't be surprised if there were additional paid-for add-ons that might offer better visualisations of the designs created within Solidworks, but it does seem a little strange that the built-in viewer/renderer is a little on the slow side given the tech in use today. That said, I find it interesting that you are looking to use Blender and familiarise yourself with it. Whilst it's been around for a while, it really has taken off in recent years and offers some superb 3D modelling and rendering tools. I imagine it would make a good companion for something like Solidworks - i.e. you have the technical angle covered in terms of the accurate product design and modelling courtesy of Solidworks, and then - potentially at least - the ability to go to town with the rendering and post-processing effects offered by Blender (including custom lighting and camera setups).

Blender really has had some significant financial backing over the last couple of years from some of the industry's biggest players - such as AMD, nVidia, Intel and many others. So much so that they now have a dedicated core team of full-time employees (although they still make use of contributions from skilled developers thanks to the open-source nature of the software). It looks like it will have a long future and continue to be well supported (and adopted by more people from various industries). It makes a refreshing change from the likes of 3DS Max or Maya where it can cost upwards of thousands per year to cover the licensing costs alone.

Whilst it's very easy to press go and have something work, it's very difficult to actually get something approaching photo-realistic; I find the lighting is enormously important and takes a long time to get right.

Yeah, good lighting and materials can really make or break a 3D render, regardless of how good the actual 3D modelling may be. It's definitely something that comes with experience (and, of course, familiarity with the tools of choice) but it usually entails a healthy investment of time as well and that's something that individuals and employers don't always have! More so if your goal and focus is on the product and manufacturing rather than making it look Hollywood!

I have an easy job in many ways... I simply use the assets provided by skilled and/or well-seasoned creatives and I just have to push them through my renderer, and out pops a (sometimes) decent-looking image. Of course, I do also enjoy authoring my own 3D content but it is incredibly time-consuming and I simply don't have the time to do it to the level I'd want to do it! :ROFLMAO: Mind you, I probably wouldn't be able to anyway as some of the 3D artists and modellers out there are incredibly talented and I'm nowhere near that level. Hence, I stick to being a code monkey and providing them with the tools and software to visualise their creations. It works better that way! :ROFLMAO:

52566097674_506585d670_3k.jpg
SIPT Test Render
by Andy Eder, on Flickr

As in the render produced above, the look and feel of the light and material interplay really can bring the image to life, thus giving that sense of photorealism. However, the work required to get to that point is quite labour-intensive. For example, the worn/distressed and stained metal surface material of the mixing desk is not simply a texture applied to the underlying model. It is several textures that have material information encoded within them - such as how metallic the surface is, how rough the surface is, a normal map that is used to perturb the surface geometry to give the impression of additional high-frequency detail, opacity maps to determine how see-through parts of the texture are, and other texture layers, too. Then there are additional properties specified that all go towards determining how light rays bounce - or are transmitted through (or both) - from the surface. For example, index of refraction (to determine how much a transmitted light ray is deflected when entering or leaving a medium), clearcoat (to simulate extra specular shine, such as the additional shine you get from the top coat of clean car paintwork), and many more properties. Here's a small selection showing some of the texture components that go towards realising the metal material used in the render above (I've scaled the images down as the originals used to render the above are 8k textures and weigh in at several hundred megabytes!)... :ROFLMAO:

Interestingly, this process is generally used by both big-screen visual fx studios for their movies as well as game developers making modern games. It is the effort and time investment required to create these assets that significantly contribute to the massive costs associated with creating such movies or games.

(Sorry to those who find this boring...!)

So - here's the base colour texture (sometimes called an albedo map). It contains the base colour for the material, albeit devoid of any light or shadow. Don't forget, our rendering engine is responsible for creating light and shadows.

Rodec_Casing_Aniso_baseColor - thumb.png


Then we have an ambient occlusion map - which can be used by the renderer to indicate areas that might not receive as much indirect lighting as other parts of the model. Depending on the geometric complexity of the model, some may use a texture like this for performance reasons, whilst others may calculate the ambient occlusion from the geometry directly for ultimate quality and realism, but at the cost of performance.

Rodec_Casing_Aniso_AO - thumb.png


Next up, something called a normal map. This is used to realise additional high-frequency detail across a surface (such as scratches and/or other surface perturbations). The strange blue colour is due to how the associated data is encoded within the texture map. The normal map can be a great way to improve rendering performance by encoding detail in a texture rather than having to have the detail actually modelled within the 3D geometry.

Rodec_Casing_Aniso_normal - thumb.png


Then, we have the metallic map (which is generally only found on metallic surfaces, surprise, surprise). This indicates which parts of the material are metallic, or how metallic the material might be. It is possible for a material to be both metallic and non-metallic... imagine a rusted metal sheet. Parts might still be shiny and other bits dull where the rust has set in. The metallic map allows this information to be encoded within it. The brighter areas are "more" metallic.

Rodec_Casing_Aniso_metallic - thumb.png


Let's finish with the roughness map - which determines how rough the surface appears, and basically controls how a reflected light ray might be scattered afterwards. A rough surface will appear dull (think of tyre rubber) as light rays hit the surface and then bounce off in pretty much any random direction. It results in a matte look. A "less" rough surface will appear shiny and show reflections of things around it (think of plastic Lego blocks) because light rays are reflected in a tighter focus rather than randomly scattering in any old direction. The brighter areas are rougher and result in a more matte/dull look.

Rodec_Casing_Aniso_roughness - thumb.png


So there you go, a basic rundown of how textures can be used to simulate a given material and control the interplay of light with said surface material. There's a fair bit more to it of course but it might give some idea as to the shizzle that goes on behind the scenes in just creating and preparing the assets for rendering! And, again, that is just for the surface of the metal mixing desk. The whole thing has to be repeated for all other 3D objects in your scene/world...
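If it helps tie those maps together, here's a rough sketch of how a metallic/roughness material pulls them into the numbers the BRDF actually uses at a single shading point. Again, this is illustrative C++ with made-up names, not my engine's code:

```cpp
// Sketch of how the texture maps above feed a metallic/roughness material at one shading
// point (illustrative; real PBR shading code is considerably more involved).
struct Rgb { float r, g, b; };

struct SurfaceSample {
    Rgb   baseColour;   // from the albedo/base colour map
    float metallic;     // 0 = dielectric, 1 = metal (from the metallic map)
    float roughness;    // 0 = mirror-like, 1 = fully matte (from the roughness map)
    float occlusion;    // ambient occlusion factor (from the AO map)
};

struct ShadingInputs {
    Rgb   diffuseAlbedo;  // colour used for the diffuse (Lambertian) lobe
    Rgb   specularF0;     // reflectance at normal incidence for the specular (GGX) lobe
    float alpha;          // GGX roughness parameter, typically roughness squared
    float occlusion;      // applied to indirect/ambient light only
};

ShadingInputs prepare(const SurfaceSample& s) {
    // Dielectrics reflect roughly 4% at normal incidence and keep their base colour as
    // diffuse; metals have no diffuse term and tint their specular reflection instead.
    const float dielectricF0 = 0.04f;
    ShadingInputs o;
    o.diffuseAlbedo = { s.baseColour.r * (1 - s.metallic),
                        s.baseColour.g * (1 - s.metallic),
                        s.baseColour.b * (1 - s.metallic) };
    o.specularF0    = { dielectricF0 + (s.baseColour.r - dielectricF0) * s.metallic,
                        dielectricF0 + (s.baseColour.g - dielectricF0) * s.metallic,
                        dielectricF0 + (s.baseColour.b - dielectricF0) * s.metallic };
    o.alpha         = s.roughness * s.roughness;
    o.occlusion     = s.occlusion;
    return o;
}
```

The normal map isn't shown here - that one perturbs the shading normal before any of the above is evaluated - but the pattern is the same: every texel ends up as a handful of numbers that steer how the BRDF scatters the incoming ray.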

Right - will leave it there for now as I need supper.
 

SharkyUK

ClioSport Club Member
Those camera images are unreal

I just tried creating a quick video of it running in real-time... but, as feared, the YouTube compression makes the video pretty much unwatchable. :( The low bitrate of YouTube and the 'noisy' nature of the path tracing result in a terrible video, unfortunately. I'll post it up anyway...

 

Mr Squashie

CSF Harvester
ClioSport Club Member
  Clio 182
I just tried creating a quick video of it running in real-time... but, as feared, the YouTube compression makes the video pretty much unwatchable. :( The low bitrate of YouTube and the 'noisy' nature of the path tracing result in a terrible video, unfortunately. I'll post it up anyway...


I don't really know the ins and outs of YouTube, but is there any way to upload it as a 4k video to improve the bitrate?
 

