So where do you think things will head over the next 5-10 years? The possibility of cheaper, multiple GPUs per card? Or would we quickly start to see bottlenecks at the interface/bus level then?
Hi, D.
I think that the next 5-10 years will see very similar advances to what we've seen over the last 5 years; i.e.
- GPUs getting faster with each generation
- More shader units (so multiple shader programs can effectively run in parallel across the hardware)
- Extensions to future shader languages so that they become more like a 'traditional' programming language (such as C/C++) and offer greater flexibility - such as improved branching capabilities and similar fundamental coding constructs (see the sketch after this list)
- Increased memory capacities so that more data (textures, 3D geometry, shader programs) can persist in GPU memory without having to fetch it from other parts of the system (such as system memory... sloooooow)
- Improvements to bus architectures to allow faster data transfer to the GPU (and, perhaps more importantly, faster writebacks from the GPU!)
- Multiple GPU cores on single/dual card solutions rather than 3- and 4-SLI type setups
- Smaller die processes to keep energy consumption down, heat levels lower and performance higher
- Evolving graphics APIs (like DirectX) leaning more towards parallelism and multithreaded support (as already seen in DirectX 11 and onwards)
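On the branching point above, here's a minimal CUDA sketch (CUDA standing in here for shader languages generally; the kernel and values are purely illustrative, not from any real renderer) of the kind of data-dependent branch modern GPU languages handle well - with the caveat that threads in a warp taking different branches get serialised, so divergence still costs:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative only: a data-dependent branch of the kind modern
    // GPU languages now support. Threads in a warp that disagree on
    // the branch are serialised, so heavy divergence still hurts.
    __global__ void shade(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        if (in[i] > 0.5f)
            out[i] = in[i] * in[i];  // "expensive" path
        else
            out[i] = 0.0f;           // "cheap" path
    }

    int main()
    {
        const int n = 1 << 20;
        float *in, *out;
        cudaMallocManaged(&in,  n * sizeof(float));
        cudaMallocManaged(&out, n * sizeof(float));
        for (int i = 0; i < n; ++i)
            in[i] = (i % 2) ? 0.9f : 0.1f;  // alternating values = worst-case divergence
        shade<<<(n + 255) / 256, 256>>>(in, out, n);
        cudaDeviceSynchronize();
        printf("out[0] = %f, out[1] = %f\n", out[0], out[1]);
        cudaFree(in);
        cudaFree(out);
        return 0;
    }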
You made an interesting point about the interface/bus level and bottlenecks; the sad truth is that we are already massively restricted by current bus and memory architectures. Sure, a faster architecture could be developed but it would mean throwing away a lot of the current technology and starting again PLUS it would be significantly more expensive to produce a system that could keep pace with the speed of modern-day CPUs and GPUs. Faster memory (main, L1 and L2) is all well and good but when the path between it and the other components is orders of magnitude slower, there's always going to be performance degradation.
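To put rough numbers on that gap, here's a hedged CUDA sketch (figures vary wildly by system; the buffer size is arbitrary) timing a copy across the bus against a copy within the card's own memory - the on-card copy typically wins by an order of magnitude or more:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Rough sketch: compare bus-limited (host->device) bandwidth with
    // on-card (device->device) bandwidth. Figures are system-dependent;
    // the point is the order-of-magnitude gap.
    static float timed_copy(void *dst, const void *src, size_t bytes, cudaMemcpyKind kind)
    {
        cudaEvent_t t0, t1;
        cudaEventCreate(&t0);
        cudaEventCreate(&t1);
        cudaEventRecord(t0);
        cudaMemcpy(dst, src, bytes, kind);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, t0, t1);
        cudaEventDestroy(t0);
        cudaEventDestroy(t1);
        return ms;
    }

    int main()
    {
        const size_t bytes = 256u << 20;  // 256 MB test buffer
        void *host = malloc(bytes);
        void *devA, *devB;
        cudaMalloc(&devA, bytes);
        cudaMalloc(&devB, bytes);

        float h2d = timed_copy(devA, host, bytes, cudaMemcpyHostToDevice);
        float d2d = timed_copy(devB, devA, bytes, cudaMemcpyDeviceToDevice);

        // bytes / ms / 1e6 == GB/s
        printf("host->device  : %.1f GB/s (across the bus)\n", bytes / h2d / 1e6);
        printf("device->device: %.1f GB/s (on-card memory)\n", bytes / d2d / 1e6);

        cudaFree(devA);
        cudaFree(devB);
        free(host);
        return 0;
    }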
Things like triple-SLI intrigue me, but having seen only a modest increase in throughput (at best) with my own SLI setup in the past (HL2 Lost Coast benchtest - single 7900GTX = 90fps, SLI'd 7900GTX = 112fps, single ATi 3850 = 160fps+) - again, are these just more or less gimmicks?
SLI / Crossfire setups are all well and good but sometimes the premium paid isn't worth it in terms of bang for your buck (as you've experienced yourself). For a multi-GPU setup in such a configuration to work optimally, the balance of workload and throughput to the cards has to be scheduled carefully - and this is far from trivial. It's all too easy to stall the pipeline and introduce bottlenecks, thus reducing framerates by some considerable amount. And with each card and manufacturer being different, as are the configurations in terms of memory, shader units, etc., there is no easy way to ensure maximum performance in all cases. Personally I tend to go for the quickest single-card solution I can find at the time of purchase because I know that in 18 months the next revision of that single-card solution will be on a par (in terms of performance) with the multi-GPU solutions currently available.
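To make the scheduling problem concrete, here's a hedged CUDA sketch of the naive approach - a static 50/50 split of a frame's work across two GPUs (the kernel is a made-up stand-in for real per-frame work). It only balances if both cards are identical; otherwise the faster device idles while the frame waits on the slower one:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Made-up stand-in for a frame's worth of per-element work.
    __global__ void work(float *data, size_t n)
    {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n)
            data[i] = data[i] * 2.0f + 1.0f;
    }

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        if (count < 2) {
            printf("this sketch needs two GPUs\n");
            return 0;
        }

        // Naive static split: half the workload per device. This only
        // balances if both cards have identical throughput - which is
        // the scheduling problem in a nutshell.
        const size_t n = 1 << 24, half = n / 2;
        float *buf[2];
        for (int d = 0; d < 2; ++d) {
            cudaSetDevice(d);
            cudaMalloc(&buf[d], half * sizeof(float));
            cudaMemset(buf[d], 0, half * sizeof(float));
            work<<<(unsigned)((half + 255) / 256), 256>>>(buf[d], half);  // async per device
        }
        // The frame can't present until the slowest device finishes.
        for (int d = 0; d < 2; ++d) {
            cudaSetDevice(d);
            cudaDeviceSynchronize();
        }
        for (int d = 0; d < 2; ++d) {
            cudaSetDevice(d);
            cudaFree(buf[d]);
        }
        return 0;
    }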
Of course, with the growing use of GPUs for tasks beyond purely graphics operations, that extra GPU/card may be useful... as already mentioned, it could be utilised as a physics co-processor, video encoder/decoder, HD audio processor, ...
I guess as you say, whatever the hardware - the true benefits are only seen if 'programmed correctly'. Sloppy or lazy code would hamper even the best cutting-edge hardware, I would have thought?
Indeed! Even the most experienced and capable programmers can make a cutting-edge system crawl. Scarily, it doesn't take much! Thankfully there are a lot of tools and development practices in place now that go some way towards ensuring that games/software make use of the now-widespread multicore, multiprocessor, multithreaded systems available. It's been quite a steep learning curve for a lot of game developers, I think, as 'parallel' and threaded coding was something more typical of academia... but not anymore (and hasn't been for quite some years now).
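For a flavour of what that threaded game code looks like on the CPU side, here's a minimal host-side sketch (the names and the fake timestep are illustrative, not from any particular engine): a simulation update split across hardware threads, with disjoint ranges so no locking is needed. Get those ranges wrong and you have a silent data race - which is exactly how sloppy threaded code hobbles good hardware:

    #include <cstdio>
    #include <functional>
    #include <thread>
    #include <vector>

    // Update one disjoint slice of the simulation state.
    static void update(std::vector<float> &state, size_t lo, size_t hi)
    {
        for (size_t i = lo; i < hi; ++i)
            state[i] += 0.016f;  // e.g. integrate one ~60fps timestep
    }

    int main()
    {
        std::vector<float> state(1 << 20, 0.0f);
        unsigned workers = std::thread::hardware_concurrency();
        if (workers == 0) workers = 1;
        const size_t chunk = state.size() / workers;

        std::vector<std::thread> pool;
        for (unsigned w = 0; w < workers; ++w) {
            size_t lo = w * chunk;
            size_t hi = (w + 1 == workers) ? state.size() : lo + chunk;
            pool.emplace_back(update, std::ref(state), lo, hi);  // disjoint ranges: no locks needed
        }
        for (auto &t : pool)
            t.join();

        printf("updated %zu elements on %u threads\n", state.size(), workers);
        return 0;
    }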
Have you had any experience of the ATi Fire cards, m8? They always seem to command a premium when on paper, the specs merely look ok-ish. I've never needed to use any professional gfx setup, so I'm in the dark with that.
Yes mate, I've had firsthand experience of the ATI FirePro/FireGL, etc. as well as the nVidia Quadro series. They do command a premium as they often contain some nifty (if sometimes subtle) differences to more game-focused cards. The thing to remember is that they are more geared towards the CAD and 3D modelling sectors and hence tend to excel in areas that aren't particularly relevant to gamers.
An example of this is colour processing. The cards we use for games typically work in 32-bit colour (24 bits for colour, 8 bits for alpha), which gives us the 16.7 million colours available for display. Whilst sounding impressive, this doesn't really cut it for high-end movie production, CAD, and suchlike. So these cards (the Fire and Quadro series) often work in 30-bit colour - giving a vastly improved range of colours (well over 1 billion). Some even work internally at 128-bit colour and, well, that's huge! This makes a difference in those non-gaming sectors.
Another difference can be the way in which primitives are rendered: gamer GPUs are optimised for super-fast transformation and rendering of polygons. This is also beneficial to the Fire/Quadro cards, but they may trade some performance here and have additional grunt for rendering point and line primitives (an example being wireframe 3D models) instead. You'll often find that some of these cards also tend to offer more output options - such as supporting more monitors out of the box, higher resolutions, and suchlike. I could go on but it gets boring...
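The arithmetic behind those colour counts is just powers of two - distinct colours = 2^(bits per channel * 3) - as this trivial snippet spells out:

    #include <cstdio>

    // Distinct displayable colours = 2^(bits per channel * 3).
    int main()
    {
        unsigned long long c24 = 1ull << 24;  // 8 bits per channel (gaming cards)
        unsigned long long c30 = 1ull << 30;  // 10 bits per channel (Fire/Quadro class)
        printf("24-bit colour: %llu (~16.7 million)\n", c24);
        printf("30-bit colour: %llu (~1.07 billion)\n", c30);
        return 0;
    }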
In a nutshell, some of the gaming GPUs are derived from Fire/Quadro designs but then tailored for performance. To summarise in a (long) sentence - game-focused GPUs are designed for super-fast performance with comparatively light resource requirements whilst Fire/Quadro GPUs are designed to handle large resource requirements with precision. I hope that makes sense...
I wouldn't recommend buying a Fire/Quadro if you're a gamer as you'll likely experience performance hits. If you're a 3D modeller, video editor or similar then it's a different story.
Cheers.