@Darren S
Ah, Euclideon - I've not heard much about them since they released their 'unlimited detail engine' back in 2013/14. It was touted as being able to generate / render the most realistic graphics and visuals yet. Well, kind of. There are indeed a few issues with the fundamental technology that render it (no pun intended) pretty useless for a lot of modern applications. In fact, you highlight one of the biggest problems yourself - the fact that the technology is only really useful for taking a 'static' snapshot of a given moment in time. It cannot be used to capture dynamic moving objects within a scene, for example. Of course, several snapshots from the same location could be taken over a period of time to generate a 3D movie (for want of a better word), but that would not work so well due to the excessive amount of time required to capture the scene in the first place. It is an expensive process in terms of gathering the data, processing it, and preparing it for display.
At its heart, the technology creates a point cloud data set. As the term suggests, it is just a huge collection (often in the order of billions) of 3D geo-located points. A laser scanning device is set up and a laser is fired from it out into the environment in all directions. When the laser hits something in the environment, the distance to that point is measured (to obtain the location of that point) and then the colour of that hit point is also measured and stored in the attribute data for that specific point. Other data may be gathered in addition to position and colour; it very much depends on the application and the capabilities of the capturing device. By firing out billions of laser 'rays' (and recording the aforementioned hit point positions and colours) you gradually build up a point-based representation of the scene. By moving the laser scanning device to another location you can repeat the process to capture parts of the environment that may not have been covered by previous scans. As you can imagine, this process takes time, as the laser scanning device will generally have to be moved around the environment until all the necessary angles and viewpoints have been captured by the scanner. Once the point cloud has been captured... that's when the fun really begins.
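To make that concrete, here's a minimal sketch (hypothetical names, not Euclideon's actual pipeline) of how a single laser return - two firing angles plus a measured distance and colour - becomes one geo-located point. Real scanners also record intensity, timestamps, return numbers, etc., and apply calibration offsets, which I've left out:

```python
import math

def scan_hit_to_point(azimuth_deg, elevation_deg, distance_m, colour):
    """Convert one laser return into a 3D point with a colour attribute.

    The scanner knows the direction it fired in (azimuth/elevation) and
    measures the distance to the hit, so the point's position falls out
    of a simple spherical-to-Cartesian conversion (scanner at origin).
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.cos(az)
    y = distance_m * math.cos(el) * math.sin(az)
    z = distance_m * math.sin(el)
    return (x, y, z, colour)

# A hit straight ahead (azimuth 0, elevation 0) at 10 m:
point = scan_hit_to_point(0.0, 0.0, 10.0, (128, 64, 32))
# point is (10.0, 0.0, 0.0, (128, 64, 32))
```

Repeat that billions of times across many scanner positions and you have your point cloud - positions and attributes, and nothing else.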
One of the main problems with point cloud data is in how the vast data set is interpreted. Sure, it's easy enough to render the points. After all, you have [at least] a position and a colour, so a modern PC and graphics card will happily draw millions of them at interactive frame rates. With a bit of clever streaming technology, parts of the data set can be loaded on demand to keep memory and display requirements to a minimum, rather than having the entire data set reside in memory (not possible given the size of these data sets!) But this is the crux of the problem. It's nothing more than a huge unordered and unstructured set of points. There's no information that links one point to any other. Take two adjacent points in the environment... are they points on the same object or two completely unrelated objects? There is no coherence or meaning behind that collection of data points, and trying to give meaning to them is far from trivial. Imagine trying to take that data set, simplify it, and then turn it into a polygonal structure for rendering on modern hardware. Where would you start? How would you determine which points represent a blade of grass, which represent a brick, which represent a cat? It's impossible. Even converting those points into some voxel format is not feasible. You are still left with the problem of identifying what a given point represents and no way of indicating how it relates to any other point(s) in the data set.
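The "no links between points" problem is easy to demonstrate. In this toy sketch (illustrative data, not a real scan), the only relationship we can ever recover from the raw data is geometric proximity - and proximity tells you nothing about whether two points belong to the same object:

```python
import math

# A point cloud is just a flat, unordered list of (position, colour)
# records - no connectivity, no object IDs, no structure of any kind.
points = [
    ((0.00, 0.00, 0.00), (200, 40, 40)),   # red-ish point
    ((0.01, 0.00, 0.00), (60, 160, 60)),   # green-ish point, 1 cm away
    ((5.00, 2.00, 1.00), (200, 40, 40)),   # red-ish point, metres away
]

def nearest_neighbour(points, index):
    """Brute-force nearest neighbour by Euclidean distance.

    Real tools use KD-trees or octrees to make this fast, but the
    point stands either way: distance is the ONLY relationship the
    raw data lets us compute between two points.
    """
    origin = points[index][0]
    best, best_d = None, float("inf")
    for i, (pos, _colour) in enumerate(points):
        if i == index:
            continue
        d = math.dist(origin, pos)
        if d < best_d:
            best, best_d = i, d
    return best

# Point 0's nearest neighbour is point 1, a centimetre away - but
# nothing in the data says whether they lie on the same surface,
# the same object, or two unrelated objects that happen to touch.
assert nearest_neighbour(points, 0) == 1
```

That gap between "these points are close" and "these points are the same brick" is exactly what makes automated meshing and segmentation so hard.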
For the problems above, and others which I won't go into, it's unlikely you'll see many games built on Euclideon SolidScan technology. There was talk about it many years ago but I've yet to see any results. They went strangely quiet! The only feasible way a game could benefit from the technology is by capturing a very low-res point cloud data set and then passing this to a team of environment modellers. They would then have to painstakingly go through the points and manually provide meaning and links to other points, eventually building up a polygonal representation of the environment. The effort required for this is beyond the realms of practicality. It would just be quicker to build the environment using traditional 3D modelling tools and techniques.
In some circumstances this technology can be useful. It is used extensively for capturing historical records for digitisation of relics in museums and for archaeological excavations. Even the police use a mobile version of the technology for capturing road traffic accidents and crime scenes. Due to the problems I've already highlighted, this technology, at least in its current state, is not particularly useful for anything beyond capturing a nice 3D representation of 'something' - be it an environment or an object. Until context can be captured along with the points themselves (thus allowing automated processes to extract meaning from the unordered / unstructured data), I don't see this tech evolving much further than simply being a cool way of providing impressive looking 3D visuals.
(Apologies for the lengthy reply but this is technology I've worked on myself and it's cool to revisit it again!)