
Euclideon SolidScan?



Darren S

ClioSport Club Member
Firmly in the camp of @SharkyUK with this. Really quite impressive what they can produce graphically from nothing but points.





However, am I right in saying that this technology only works for static backgrounds? Anything that requires movement - trees, leaves, water, flags etc - just couldn't be used with this?

That second clip listed above though looks superb in the woods - even if it's not particularly life-like with the lack of any movement.
 

SharkyUK

ClioSport Club Member
@Darren S
Ah, Euclideon - I've not heard much about them since they released their 'unlimited detail engine' back in 2013/14. It was touted as being able to generate / render the most realistic graphics and visuals yet. Well, kind of. There are indeed a few issues with the fundamental technology that render it (no pun intended) pretty useless for a lot of modern applications. In fact, you highlight one of the biggest problems yourself - the fact that the technology is only really useful for taking a 'static' snapshot of a given moment in time. It cannot be used to capture dynamic moving objects within a scene, for example. Of course, several snapshots from the same location could be taken over a period of time to generate a 3D movie (for want of a better word), but that would not work so well due to the excessive amount of time required to capture the scene in the first place. It is an expensive process in terms of gathering the data and then processing it, preparing it for display.

At its heart, the technology creates a point cloud data set. As the term suggests, it is just a huge collection (often in the order of billions) of 3D geo-located points. A laser scanning device is set up and a laser is fired from it out into the environment in all directions. When the laser hits something in the environment, the distance to that point is measured (to obtain the location of that point) and the colour of that hit point is also measured and stored in the attribute data for that specific point. Other data may be gathered as well as position and colour; it very much depends on the application and the capabilities of the capturing device. By firing out billions of laser 'rays' (and recording the aforementioned hit point positions and colours) you gradually build up a point-based representation of the scene. By moving the laser scanning device to another location you can repeat the process to capture parts of the environment that were not visible from previous scans. As you can imagine, this takes time, as the scanner will generally have to be moved around the environment until all the necessary angles and viewpoints have been captured. Once the point cloud has been captured... that's when the fun really begins.
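
To make that concrete, here's a minimal sketch (Python, with hypothetical field names - every real scanner has its own format) of what a single laser return boils down to:

```python
import numpy as np

def scan_return_to_point(azimuth, elevation, distance, rgb):
    """Convert one laser return (angles in radians, range in metres)
    into a 3D position plus a colour attribute. Hypothetical layout -
    real devices each record their own set of attributes."""
    x = distance * np.cos(elevation) * np.cos(azimuth)
    y = distance * np.cos(elevation) * np.sin(azimuth)
    z = distance * np.sin(elevation)
    return np.array([x, y, z], dtype=np.float32), rgb

# A full capture is just billions of these stacked together:
# positions (N, 3) float32 and colours (N, 3) uint8 - nothing more.
# Moving the scanner means transforming each new scan by the scanner's
# new pose (a rotation and translation) before merging the sets.
```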

One of the main problems with point cloud data is in how the vast data set is interpreted. Sure, it's easy enough to render the points. After all, you have [at least] a position and a colour, so a modern PC and graphics card will happily draw millions of them at interactive frame rates. With a bit of clever streaming technology, parts of the data set can be loaded on demand to keep memory and display requirements to a minimum, rather than having the entire data set resident in memory (not possible given the size of these data sets!) But this is the crux of the problem. It's nothing more than a huge unordered and unstructured set of points. There's no information linking one point to any other. Take two adjacent points in the environment... are they points on the same object or two completely unrelated objects? There is no coherence or meaning behind that collection of data points, and trying to give meaning to them is far from trivial. Imagine trying to take that data set, simplify it, and then turn it into a polygonal structure for rendering on modern hardware. Where would you start? How would you determine which points represent a blade of grass, which represent a brick, which represent a cat? It's impossible. Even taking those points and treating them in some voxel format is not feasible. You are still left with the problem of identifying what a given point represents and no way of indicating how it relates to any other point(s) in the data set.
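
The on-demand streaming part is conceptually simple - here's a sketch (assuming flat numpy arrays of positions and colours) of bucketing points into coarse grid cells that a viewer can load as the camera moves:

```python
import numpy as np
from collections import defaultdict

def chunk_point_cloud(positions, colours, cell_size=10.0):
    """Bucket an unordered point set into coarse grid cells so a
    viewer can stream in only the cells near the camera."""
    cells = defaultdict(list)
    keys = np.floor(positions / cell_size).astype(np.int64)
    for key, pos, col in zip(map(tuple, keys), positions, colours):
        cells[key].append((pos, col))
    return cells  # in practice each cell would be written out to disk

# Note what this *doesn't* give you: two points in the same cell are
# still completely unrelated - no surfaces, no objects, no meaning.
```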

For the problems above, and others which I won't go into, it's unlikely you'll see many games built on Euclideon SolidScan technology. There was talk about it many years ago but I'm yet to see any results. They went strangely quiet! The only feasible way a game could benefit from the technology is by capturing a very low-res point cloud data set and then passing this to a team of environment modellers. They then have to painstakingly go through the points and manually provide meaning and links to other points, eventually building up a polygonal representation of the environment. The effort required for this is beyond the realms of practicality. It would just be quicker to build the environment using traditional 3D modelling tools and techniques.

In some circumstances this technology can be useful. It is used extensively for digitising relics in museums and recording archaeological excavations for historical record. Even the police use a mobile version of the technology for capturing road traffic accidents and crime scenes. Due to the problems I've already highlighted, this technology, at least in its current state, is not particularly useful for anything beyond capturing a nice 3D representation of 'something' - be it an environment or an object. Until context can be captured along with the points themselves (thus allowing automated processes to extract meaning from the unordered / unstructured data), I don't see this tech evolving much further than simply being a cool way of providing impressive-looking 3D visuals.

(Apologies for the lengthy reply but this is technology I've worked on myself and it's cool to revisit it again!) :p
 

Cookie

ClioSport Club Member
I understood some of those words. These sorts of datasets end up stored on the sort of arrays I work on - that's about as close as I get to your work, Sharky :p
 

McGherkin

Macca fan boiiiii
ClioSport Club Member
The only feasible way a game could benefit from the technology is by capturing a very low-res point cloud data set and then passing this to a team of environment modellers. They then have to painstakingly go through the points and manually provide meaning and links to other points, eventually building up a polygonal representation of the environment. The effort required for this is beyond the realms of practicality. It would just be quicker to build the environment using traditional 3D modelling tools and techniques.

Correct me if I'm wrong but haven't racing game designers been doing this for years?

e.g.
https://www.lfs.net/rockingham
 

SharkyUK

ClioSport Club Member
Correct me if I'm wrong but haven't racing game designers been doing this for years?

e.g.
https://www.lfs.net/rockingham
Yes, absolutely right - laser scanning has been used for a few games now (as well as by F1 teams for their simulations). However, the key difference is that the laser-scanned data is not rendered directly as point primitives, as in Euclideon's technology. The likes of LFS and Assetto Corsa use the point cloud as a basis on which they then create a polygonal model. Generating the polygonal representation from the cloud data is still a very labour-intensive process, even with the highly optimised workflows used to identify which data in the point cloud are most important as the basis for the 3D model that ends up in the game.

Typically, in a racing game, the raw point cloud data is taken into some editing software (usually custom written) and a coarser representation of the circuit is also provided in some geometrical form (be it a shape file or other polygon-based method). These coarser representations mark out the rough extents of the circuit and are effectively used to cull point data that isn't particularly useful for the generation of the circuit in question. This is an important step as it can reduce the size of the data set massively, meaning the developers then deal with a smaller subset of the data that represents the important elements of a circuit; for example the track surface itself, peripheral circuit furniture, grandstands, etc.

The designers can then start to 'tag' and identify points in the data set and provide attributes so that their tools can start to analyse and make sense of the data - e.g. a designer will start tagging the areas that represent the track and label them as 'track' (to give a simplified example of what I'm trying to explain). Once done [sticking with our 'track' points] the software can then analyse the point data and attempt to auto-generate a polygonal subdivision surface from those points. Due to the previously mentioned unordered and unstructured nature of the points, the success of this step very much depends on the intelligence of the analysing tools. Once a subdivision surface has been produced, a very high-resolution polygonal model of the track exists. This model can then be shaped, cut, tweaked and modified to ensure it's as close a fit to the original point data as possible.

Once the team is happy with the high-resolution model it serves multiple purposes, an important one being that it can be used to generate normal maps of the track surface (to give the impression of increased detail). The normal maps, whilst giving the impression of subtle bumps, tyre marks, etc., are purely visual and have no impact on the physics. The high-resolution model also serves as the high-fidelity template for the 3D model that ends up in the game. The circuit model can be dynamically altered to increase/decrease the polygon complexity as needed, to ensure that the target hardware (PC, console, etc.) can handle the geometry at interactive rates - in areas where the circuit is bumpy more polygons may be needed to accurately recreate the surface in-game, while on smooth straights fewer are needed. It is during this process that attributes are also assigned to the polygons that comprise the circuit; so, for example, the track surface may be given attributes for 'grippiness when dry', 'grippiness when wet', surface type (which determines the road noise of the tyre on that surface) and so forth. This is what I was referring to earlier in terms of providing context and meaning to the data for it to be of any use.
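
As a toy version of that culling step, here's a sketch (Python; the corridor polygon stands in for the 'shape file' mentioned above) of throwing away everything outside the circuit's rough extents:

```python
import numpy as np
from matplotlib.path import Path

def cull_to_corridor(positions, corridor_xy):
    """Keep only points whose (x, y) falls inside a coarse 2D outline
    of the circuit. positions: (N, 3); corridor_xy: (M, 2) polygon."""
    inside = Path(corridor_xy).contains_points(positions[:, :2])
    return positions[inside]

# Made-up scale: a billion-point scan of the whole venue can shrink to
# tens of millions of points once everything outside the corridor goes,
# before any tagging or surface fitting is even attempted.
```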

Going back to the polygonal track model - this is where things can get interesting. There may be two or more versions of the model in memory at once whilst driving the circuit. You get a higher resolution version displayed visually (providing the eye-candy) and a lower fidelity version which is used by the physics engine to simulate the interactions between car and circuit. The version used for the eye-candy is often too complex for the physics engine to deal with hence the need for a simplified representation.
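
A toy illustration of that split, with entirely made-up numbers - the same stretch of track sampled at two resolutions, one for the renderer and one the physics engine can afford to query every substep:

```python
import numpy as np

track_length = 500.0                                      # metres (made up)
visual_profile = 0.02 * np.sin(np.linspace(0, 60, 2000))  # bumpy eye-candy
physics_profile = visual_profile[::100]                   # just 20 samples

def surface_height(profile, s):
    """Nearest-sample height query - a stand-in for a mesh raycast."""
    i = int(round(s / track_length * (len(profile) - 1)))
    return profile[i]

# The renderer samples visual_profile once per frame; the physics engine
# steps against physics_profile at a much higher tick rate, with the
# missing bumpiness faked visually by the normal maps mentioned above.
```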

:up:
 

SharkyUK

ClioSport Club Member
I understood some of those words. These sorts of datasets end up stored on the sort of arrays I work on - that's about as close as I get to your work, Sharky :tongueout:
Some of those data sets can get pretty big (as you'll know!)

Here are some example data sets generated by the visualisation software I wrote - all rendered in real time with atmospheric simulation and light propagation and all based on point cloud data sets.

[three screenshots of the point cloud renderer attached]
 

botfch

ClioSport Club Member
  Clio 182
I work with scanners daily and there's no way the technology's there, or even close. We still struggle to deal with clouds bigger than 30-40 million points, and that's with the scanners gathering data at 500,000 points per second.
As Sharky outlined above, the scanners pick up colour but it's a snapshot of the moment the scan was done, so it would be impossible to change the lighting conditions later on without retexturing everything.

We can however mesh the cloud, dumb it down, retexture it and stick it through something like Unreal Engine to create VR environments - and potentially, with a lot of work, a game.
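
For anyone curious, a rough open-source equivalent of that "mesh it, dumb it down" step might look like this (assuming the Open3D library and a hypothetical scan.ply input - the actual commercial toolchain will differ):

```python
import open3d as o3d

# Load the cloud and estimate normals (needed for surface reconstruction).
pcd = o3d.io.read_point_cloud("scan.ply")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Poisson reconstruction turns the unordered points into a surface mesh...
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)

# ...then quadric decimation "dumbs it down" to an engine-friendly budget.
low = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
o3d.io.write_triangle_mesh("mesh_lowpoly.obj", low)

# Retexturing / UV work is still a separate (and painful) manual step
# before the result is usable in something like Unreal Engine.
```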
 

SharkyUK

ClioSport Club Member
I work with scanners daily and there's no way the technology's there, or even close. We still struggle to deal with clouds bigger than 30-40 million points, and that's with the scanners gathering data at 500,000 points per second.
:cool: Cool! What sort of work are you doing mate? (If you can say! I'm just curious as I have an interest in this sort of stuff).
 

botfch

ClioSport Club Member
  Clio 182
:cool: Cool! What sort of work are you doing mate? (If you can say! I'm just curious as I have an interest in this sort of stuff).

We do all sorts, but the bulk of our work is measuring heritage buildings, statues, etc. Then we produce anything from 2D drawings to VR for various uses.

My favourite is the projection mapping stuff - we produced the model for the vid below, for example.

 

Darren S

ClioSport Club Member
Good stuff. Given the rapid rate at which drones are progressing these days, it wouldn't be beyond the realms of plausibility to have a couple of these mapping drones in the boot of a police car.

They arrive on the scene of an RTA and have some form of marker stick/pole to identify the centre of the investigation. The drones are then instructed out to a distance - maybe just 20ft away - depending on the likes of trees, bushes, etc. being in the way. The drones then circle around - perhaps even altering their altitude in the process - in order to wirelessly stream the scanned points back to a PC in the rear of the police car.

The PC can then generate an image that can not only be zoomed into, but also manipulated from any angle. You could even have the drone(s) fitted with a FLIR camera, the captured images of which could be overlaid on top of the points captured. Would be useful evidence to assist in proving joyriding/escape attempts if the four wheels and brakes of the crashed vehicle were white hot.

All a bit sci-fi at the minute - but auto-levelling drones are commonplace, if still expensive for now. The end result, however - when used in parallel with the more traditional static images of an accident - would be extremely useful in court.
 

SharkyUK

ClioSport Club Member
Good stuff. Given the rapid rate at which drones are progressing these days, it wouldn't be beyond the realms of plausibility to have a couple of these mapping drones in the boot of a police car.

They arrive on the scene of an RTA and have some form of marker stick/pole to identify the centre of the investigation. The drones are then instructed out to a distance - maybe just 20ft away - depending on the likes of trees, bushes, etc. being in the way. The drones then circle around - perhaps even altering their altitude in the process - in order to wirelessly stream the scanned points back to a PC in the rear of the police car.
Exactly mate, and this sort of thing is already happening :) I'm not aware of any police forces using drone-based scanners as yet but there are moves being made in the security and defence domains. I briefly worked on a system for a major defence prime integrator that generated a point cloud data set (along with a few other bits). It was vehicle mounted and the vehicle would simply be driven around an environment, scanning as it went. It used LIDAR (Light Detection and Ranging), visible imaging, thermal imaging, 3D "through-the-wall" radar (that builds up a picture of a building's structure and detects objects inside), and x-ray backscatter (to detect hidden objects, such as explosives, inside vehicles).
 

botfch

ClioSport Club Member
  Clio 182
Trouble is, at the moment the data coming back from moving scanners is still rubbish.
They also already have cameras fitted for the colour overlay - even that's not particularly fantastic data, though.
 

