Very true regarding the light - which is why I believe it is advancement in this area that will take games to that next level. The interplay of even a single ray of light bouncing around the most basic of scenes soon becomes prohibitively expensive to calculate; and all - ultimately - to determine what the final colour of a pixel will be. To get around this massive computational overhead, engines such as the new Crytek and Unreal engines use a plethora of offline (non-realtime) processes to effectively bake global lighting information into encoded data formats that can then be quickly accessed in-game to simulate light interplay in the world. I won't go into too much detail, but here's a very basic example of how indirect lighting (as part of a global illumination system) is typically handled in a rasterised game as opposed to doing it 'properly' via raytracing... (sorry if it's boring and feel free to skip it!)
Modern game engine:
Something called spherical harmonics can be used to encode indirect lighting in a scene - indirect light being the light that has bounced off objects rather than coming directly from a light source. In effect, this provides a base environment lighting colour on top of which other lighting models can be applied - diffuse lighting, specular highlights, reflections, etc. It is the indirect lighting that adds so much to a scene and that is responsible for achieving consistent lighting in the game. Spherical harmonics allow an area of the world to be defined, and 'light probes' can be placed within it by the artist / level designer. Once the probes are placed, the tool effectively takes every light source with influence in that area and determines what the base colour contribution would be for the scene at the point where each probe is positioned. This can be done by running some heavier lighting calculations that bounce light off the environment, so that the indirect lighting picks up colour from floors, walls, sky, etc. As this is an offline process it can take seconds, minutes or hours depending on the detail and complexity. Once a colour value is determined it is encoded using spherical harmonics, and these values are made available in-game, to the GPU shaders, in some efficient data format. At this point, thousands if not millions of lighting calculations have been consolidated into a single colour for a known point in the world.
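(If code helps at all, here's a rough C++ sketch of what that offline probe bake boils down to. To be clear, this isn't anyone's actual engine code - the function names and the trivial sky/ground stand-in for the heavy lighting calculation are just mine for illustration - but the general shape is right: sample lots of directions around the probe and project the results onto a handful of spherical harmonic coefficients.)

```cpp
// A hypothetical offline probe bake: sample lots of directions around the
// probe, ask the (expensive, offline) lighting calculation what arrives from
// each one, and project the results onto 4 spherical harmonic coefficients
// per colour channel. Names here are purely illustrative.
#include <algorithm>
#include <array>
#include <cmath>
#include <random>

struct Vec3   { float x, y, z; };
struct Colour { float r, g, b; };

// Real SH basis, bands 0 and 1 (4 coefficients).
std::array<float, 4> shBasis(const Vec3& d) {
    return {{ 0.282095f,            // Y(0, 0) - constant term
              0.488603f * d.y,      // Y(1,-1)
              0.488603f * d.z,      // Y(1, 0)
              0.488603f * d.x }};   // Y(1, 1)
}

// Stand-in for the heavy offline work ("bounce rays around the level and see
// what light arrives from this direction"). Stubbed as a flat sky + ground
// colour so the sketch compiles and runs.
Colour sampleRadiance(const Vec3& /*probePos*/, const Vec3& dir) {
    return (dir.z > 0.0f) ? Colour{0.4f, 0.5f, 0.8f}     // blue-ish sky above
                          : Colour{0.25f, 0.2f, 0.15f};  // brown-ish floor bounce
}

// Monte Carlo projection of the light arriving at the probe onto the SH basis.
std::array<Colour, 4> bakeProbe(const Vec3& probePos, int numSamples) {
    std::array<Colour, 4> coeffs{};                      // starts at zero
    std::mt19937 rng(1234);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);

    for (int i = 0; i < numSamples; ++i) {
        // Pick a uniformly random direction on the sphere.
        float z   = 1.0f - 2.0f * uni(rng);
        float phi = 6.2831853f * uni(rng);
        float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));
        Vec3 dir  = { r * std::cos(phi), r * std::sin(phi), z };

        Colour L = sampleRadiance(probePos, dir);        // the expensive bit
        auto   Y = shBasis(dir);
        for (int k = 0; k < 4; ++k) {
            coeffs[k].r += L.r * Y[k];
            coeffs[k].g += L.g * Y[k];
            coeffs[k].b += L.b * Y[k];
        }
    }
    // Monte Carlo weight: sphere area (4*pi) divided by the sample count.
    const float w = 4.0f * 3.1415927f / float(numSamples);
    for (auto& c : coeffs) { c.r *= w; c.g *= w; c.b *= w; }
    return coeffs;                                       // stored with the probe
}
```

The important bit is that all the expensive work is hidden inside sampleRadiance(), and what pops out the other end is just four RGB values per probe.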
Now, when it comes to visualising that world in realtime, the 3D geometry in the scene can submit its world position to the shader, and the indirect light information stored in the spherical-harmonic-encoded data can quickly be read. So for position x,y,z we can quickly determine what the indirect lighting colour should be (as a result of the offline processing that was done beforehand). This means a lookup or two can be performed to get a base colour, and we don't have to spend aaaaages calculating a ray's journey as it bounces time after time over thousands of polygons.
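The runtime counterpart might look something like this - in reality it'd be a few lines of shader code, but I've written it as plain C++ (with made-up names again) so it's easier to read. The point is that it's just a handful of multiply-adds against the baked coefficients:

```cpp
// Hypothetical runtime lookup: given the 4 SH coefficients the engine has
// already interpolated for this world position, reconstruct the indirect
// light along the surface normal - a few multiply-adds instead of
// thousands of bounced rays.
#include <array>

struct Vec3   { float x, y, z; };
struct Colour { float r, g, b; };

Colour evalIndirect(const std::array<Colour, 4>& sh, const Vec3& n) {
    // Same 4 basis functions used at bake time, evaluated in the direction
    // of the surface normal.
    const float Y[4] = { 0.282095f,
                         0.488603f * n.y,
                         0.488603f * n.z,
                         0.488603f * n.x };
    // Clamped-cosine (diffuse) convolution weights per band, so the result
    // can be used directly as an indirect diffuse term.
    const float A[4] = { 1.0f, 2.0f / 3.0f, 2.0f / 3.0f, 2.0f / 3.0f };

    Colour out{0.0f, 0.0f, 0.0f};
    for (int k = 0; k < 4; ++k) {
        const float w = Y[k] * A[k];
        out.r += sh[k].r * w;
        out.g += sh[k].g * w;
        out.b += sh[k].b * w;
    }
    return out;   // base indirect colour; direct diffuse, specular, etc. go on top
}
```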
Raytracing:
Ok, we are in our 3D world and we need to determine the indirect lighting for that same x,y,z position. We shot a primary ray from our viewpoint for the pixel that corresponds to that x,y,z position and we calculated where the ray hit; that's the first hit point. At this point we have various things to do depending on the complexity of the object's material (such as calculating reflections, refractions, etc.), but we're just interested in the indirect lighting. To get this we need to fire off a whole bunch of secondary rays - from the hit point - away from the object and see what they hit next in the scene. To make this look half decent we are probably looking at 500-1000 rays... and, of course, each and every one of those rays we've just fired out could also bounce off another object and spawn another 1000 rays each... When we hit another object we then query how much of an influence that object has on the scene at our very first hit point. That is, how much does the presence of the newly hit object affect our first hit object? Does it contribute some of its colour, for example? Ultimately we sample millions of rays and points to determine the indirect lighting contribution for our original hit point and then use this to set that pixel to the correct colour. It takes a LONG time and is not feasible in realtime. But it does mean we can get some great effects such as colour bleeding, radiosity-like effects and so forth.
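Here's a deliberately stripped-down sketch of that gather step, again in C++ with illustrative names only (traceScene() is stubbed out rather than being a real scene intersection). The thing to notice is the recursion - rays spawning rays spawning rays:

```cpp
// A heavily stripped-down version of the raytraced gather: from the first hit
// point, fire a batch of secondary rays over the hemisphere, recurse, and
// average what comes back. traceScene() is a stand-in for a real renderer's
// ray/scene intersection and is stubbed to always miss so the sketch compiles.
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};
struct Colour {
    float r, g, b;
    Colour operator+(const Colour& o) const { return {r + o.r, g + o.g, b + o.b}; }
    Colour operator*(const Colour& o) const { return {r * o.r, g * o.g, b * o.b}; }
    Colour operator*(float s)         const { return {r * s, g * s, b * s}; }
};
struct Hit { bool valid; Vec3 pos; Vec3 normal; Colour albedo; Colour emitted; };

// Stand-in for the expensive part: intersect a ray with the whole scene.
Hit traceScene(const Vec3& /*origin*/, const Vec3& /*dir*/) {
    return { false, {}, {}, {}, {} };   // a real version walks a BVH etc.
}

// Cosine-weighted direction in the hemisphere around n ("normal + random unit
// vector" trick). With this sampling the Lambertian cos/pi terms cancel, so
// the estimate below reduces to albedo * average(arriving light).
Vec3 randomHemisphereDir(const Vec3& n, std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float z   = 1.0f - 2.0f * uni(rng);
    float phi = 6.2831853f * uni(rng);
    float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));
    Vec3 d    = n + Vec3{ r * std::cos(phi), r * std::sin(phi), z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return d * (1.0f / std::max(len, 1e-6f));
}

// Indirect light arriving at 'hit', estimated with 'numRays' secondary rays.
// Each secondary hit recurses and spawns its own batch of rays, which is
// exactly why the cost explodes (roughly numRays ^ depth ray casts).
Colour indirectLight(const Hit& hit, int numRays, int depth, std::mt19937& rng) {
    if (depth == 0) return {0.0f, 0.0f, 0.0f};            // stop bouncing eventually
    Colour sum{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < numRays; ++i) {
        Vec3 dir = randomHemisphereDir(hit.normal, rng);
        Hit next = traceScene(hit.pos + dir * 0.001f, dir);  // nudge off the surface
        if (!next.valid) continue;                            // ray left the scene
        // Light coming back along this ray: whatever the newly hit object emits,
        // plus (recursively) the indirect light it reflects towards us. A real
        // renderer would also add the direct lighting at 'next' here.
        Colour arriving = next.emitted + indirectLight(next, numRays, depth - 1, rng);
        sum = sum + arriving;
    }
    // Average the samples and tint by this surface's own colour - that tint is
    // where colour bleeding (red wall -> reddish ceiling) comes from.
    return sum * (1.0f / float(numRays)) * hit.albedo;
}
```

Even with a modest depth of 2 or 3 and a few hundred rays per bounce, the ray count runs away from you very quickly - which is the whole problem.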
^ Apologies if that doesn't make sense - it's kind of difficult to explain and is a very basic outline of one particular aspect of the lighting pipeline! Here's a couple of pics.
(Realtime) Direct lighting only - i.e. the only light contributions are from objects directly hit by rays from the light sources...
(Realtime) With indirect lighting enabled - i.e. with light contributions from light 'bounces' as well as direct lighting from the light sources...
(Non-realtime) With indirect lighting contributions from shooting 1000 sample rays per secondary hit. Notice how light bleeds so you can see the wall colours hinted at on the ceiling, etc.
Crap explanation - I'm sorry, but I lost enthusiasm halfway through as it's actually quite difficult to explain.