Interestingly, the shift to ever-more advanced and high-fidelity visuals has been something of a double-edged sword. (It's something I've been fortunate enough to experience first-hand during my time in the games industry; I'm old enough to have seen the move from 2D to 3D to the latest all-singing, all-dancing graphics technology.)
The upside is that visuals can look stunning on modern hardware. Whilst I acknowledge that consoles can't compete with a decent gaming rig, both PC and console are certainly capable of producing visuals good enough to satisfy the gamer. That's all well and good, but there is a downside: to produce this level of visual splendour, the amount of work needed to create the assets for a given game has shot through the roof. I daresay the average gamer has no idea what goes into creating the assets that finally end up in a game (and why should they? They're the customer and simply want a decent game for their hard-earned!), but I think they'd be surprised if they saw a typical workflow for a modern-day title!
To give some idea of how things have changed, I remember working on a game in the late 1990s. The budget for the game was considerably less than £300k. The core development team consisted of approximately 14 people: around 8 programmers and 6 artists. Programmers were either engine/technology developers (working on the 3D engine, physics, etc.) or gameplay programmers (working on the UIs, gameplay mechanics, localisation, etc.). The art team consisted of a couple of texture artists, a couple of 3D modellers, an animator, a level designer and a concept artist. And that was all that was needed in terms of a core development team. The required assets were really basic back then: no normal maps, no shader technology, low-poly models, low-resolution textures... it was all quite straightforward (ignoring the difficulties of getting things working on what we'd now consider seriously underpowered and ancient hardware!). A large portion of the budget and effort went on the programming side of the game. More often than not the artists would be waiting for the code monkeys to get something fixed or implemented so that they could see their latest creations up and running in the game world. Code monkeys were gods; they were revered. They wore the daddy-pants! (I'm not biased.)
But gradually things changed as we rolled into the 2000s... Game and graphics technology was still quite modest by today's standards, but emerging technologies (improved streaming, etc.) meant that games were getting larger; game worlds, environments and levels were growing, and consequently more 'stuff' was needed to fill that space. It was no big deal - many games were still quite modest in terms of what they could produce visually. Despite there being vast areas to 'game' in, these new technologies meant there wasn't a massive increase in the effort required to produce the content for those levels. 3D graphics were still quite basic (by today's standards), and although we got excited by multi-textured objects and blended textures, the effort required to produce them wasn't too bad. A large proportion of the development effort still sat with the coders, developing new techniques that allowed vast worlds to be realised - BSP trees, lightmapping, portals, potential visibility sets and so forth. In my experience, around the early 2000s it wasn't unusual to see a 50/50 split between coders and artists in the core development team for a title.
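For anyone curious what that coder effort actually looked like, here's a minimal sketch (plain C++, with made-up cell/object types; not any particular engine's API) of the idea behind a precomputed potential visibility set: the world is carved into cells, each cell stores the cells it could possibly see (baked offline), and at runtime you only draw the objects in those cells. That precomputation is what made vast worlds affordable on the hardware of the day.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical, heavily simplified PVS structure, purely for illustration.
struct Cell {
    std::vector<int> potentiallyVisible; // indices of cells this one can see (baked offline)
    std::vector<int> objects;            // indices of objects sitting inside this cell
};

// Draw only what the current cell's precomputed set says might be visible.
void renderFromCell(const std::vector<Cell>& world, int cameraCell)
{
    for (int cellIndex : world[cameraCell].potentiallyVisible)
        for (int obj : world[cellIndex].objects)
            std::printf("draw object %d\n", obj); // stand-in for a real draw call
}

int main()
{
    // Two-room world: room 0 sees itself and room 1; room 1 only sees itself.
    std::vector<Cell> world = {
        { {0, 1}, {10, 11} },
        { {1},    {20}     },
    };
    renderFromCell(world, 1); // camera in room 1 -> only object 20 gets drawn
}
```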
Then boom... suddenly things really started to take off. "Next-gen" had become a buzzword and GPU technology was advancing in a big way. DirectX (and other APIs) were maturing nicely and offered a rich feature set for developers to explore. With it came a wealth of new ideas and methods for producing visuals: normal maps, specular maps, reflection maps, depth buffers, g-buffers, multiple render targets, and the list goes on. Gamers' expectations suddenly ramped up as they saw what was possible with the new technologies. It was an exciting time for the developers too! But then a 'shift' started to take place...
The programming effort for a given title was shrinking when balanced against the art asset requirements for that game. Admittedly the required coding knowledge was perhaps a little more specialist (getting to grips with shader technology and optimising engines to work with GPUs), but the amount of effort needed to produce the artwork had skyrocketed. It was no longer simply a case of building a 3D object, creating a diffuse texture set for it, keyframe-animating it and then dropping it into a game engine. It had become a WHOLE lot more. Suddenly you weren't just producing textures for a game object; you were also creating the additional maps that let the shader technology do something clever with them, enabling techniques we now take for granted in today's games.

To give an example, a game I worked on in the mid-2000s had a requirement for a 'creature'. The creature was first modelled at high resolution (many polygons, high-fidelity textures); this version was used for media purposes (videos, screenshots) and ALSO to generate normal maps for the game. Then a low-poly version of the creature was built for in-game use, and this took the aforementioned normal map to provide the additional 'surface' detail. Then a specular map was generated for the creature, which determined how shiny it appeared when light hit it. And then supplementary textures were generated so that special effects could be performed on the creature in-game - heat maps and the like. Then came the job of animating, and so on and so forth... To cut a long story short (unlike my post), the amount of effort required (human and machine) - as you can see - ramped up considerably. On that game, the core development team consisted of approximately 8 programmers and 20 artists... Suddenly it was the artists who found themselves under pressure, required to produce more and more content to realise their in-game creations wrapped in "next-gen" goodness.
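For readers who haven't seen these maps in action, here's a rough sketch (plain C++ with an invented Vec3 type and made-up sample values; real engines do this per-pixel in a shader) of how the baked normal map and specular map feed into the lighting of that low-poly creature: the normal map perturbs the surface normal to fake the high-poly detail, and the specular map scales how strong the highlight is at each texel.

```cpp
#include <cmath>
#include <cstdio>

// Toy vector type purely for illustration; an engine or shader has its own.
struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One texel's worth of lighting, roughly the way a pixel shader might do it:
//  - normalSample: the normal-map texel, already unpacked to the [-1,1] range
//  - specSample:   the specular-map texel (0 = matte, 1 = shiny)
float shade(Vec3 normalSample, float specSample, Vec3 lightDir, Vec3 viewDir)
{
    Vec3 n = normalize(normalSample);
    Vec3 l = normalize(lightDir);
    Vec3 v = normalize(viewDir);

    // Diffuse term: the baked normal supplies the high-poly surface detail.
    float diffuse = std::fmax(0.0f, dot(n, l));

    // Blinn-Phong style highlight, scaled by the specular map.
    Vec3 h = normalize({ l.x + v.x, l.y + v.y, l.z + v.z });
    float specular = specSample * std::pow(std::fmax(0.0f, dot(n, h)), 32.0f);

    return diffuse + specular; // final light intensity for this texel
}

int main()
{
    // Same bumped texel, matte vs shiny, lit from above-left, viewed head-on.
    Vec3 bumped = { 0.2f, 0.1f, 0.95f }; // pretend this came from the normal map
    std::printf("matte: %.3f  shiny: %.3f\n",
                shade(bumped, 0.0f, { 0.5f, 0.5f, 1.0f }, { 0.0f, 0.0f, 1.0f }),
                shade(bumped, 1.0f, { 0.5f, 0.5f, 1.0f }, { 0.0f, 0.0f, 1.0f }));
}
```

The point being: every one of those inputs is a texture an artist has to author and maintain, which is exactly where the extra effort crept in.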
As a rough estimate, the dev house I worked for probably saw a 3x increase in effort and manpower required purely for the art side of things!
Due to the length of this post, and the fact I stopped to grab my dinner, I've lost my train of thought now! LOL! But I hope the gist of what I'm trying to say comes through. I think some of the expansive sandbox games we were perhaps hoping for are restricted as much by the available budget and manpower (because of the art requirements) as they are by arguably having a console as the lead platform (as opposed to the PC). It's easy to be wowed by the latest engine tech from Crytek, Epic et al, and their respective technologies are no doubt impressive. Heck, they make me salivate at times! But a little nod to the art guys is needed sometimes. The lengths they have to go to in producing the assets can be draining, and they ultimately determine how the game looks. For all the greatest underlying technology in the world, modern games would just look like sh*te with programmer art throughout...