
GPU Upgrade



  Not a 320d
Ok, so my 8800GT OC feels a little sluggish these days, especially with only 512MB of memory - it starts to struggle at higher resolutions. What's a good upgrade? I want to be able to play the likes of the new AVP, BF BC2, Metro 2033 or whatever it is, and some others when they release in Feb.

Price limit.... Well, it depends. If it's a worthwhile upgrade then I might stretch, but there's no way I'm spending £4-500 on my computer. Something like the card in the link below.

http://www.overclockers.co.uk/showproduct.php?prodid=GX-160-XF&groupid=701&catid=56&subcat=1341

Current Spec -

Q6600 @ 3.19GHz
4GB of OCZ 1066 DDR2
550Watt Corsair PSU (Might be letting me down a little :eek: )
8800GT OC (EVGA)

Nothing special :dead: but I'm no computer geek. :star:
 
  Monaro VXR
5870 or the cheaper 5850. The 5870 is a cracking card though. Will run basically anything you want to throw at it. Have one in my new rig and it is a great card. Faster than the nVidia cards except the 295, but that is a multi-GPU card and more expensive.

Also, the ATI cards are DirectX 11 and not DirectX 10, so more future-proof as such.

Right now the ATI cards are the only ones to consider buying until the new nVidia tech comes along.
 

Darren S

ClioSport Club Member
5870 or the cheaper 5850. The 5870 is a cracking card though. Will run basically anything you want to throw at it. Have one in my new rig and it is a great card. Faster than the nVidia cards except the 295, but that is a multi-GPU card and more expensive.

Also, the ATI cards are DirectX 11 and not DirectX 10, so more future-proof as such.

Right now the ATI cards are the only ones to consider buying until the new nVidia tech comes along.

Definitely.

Still running on my trusty ol' 4870 - but if I were to buy a new one tomorrow - it would be ATi and not nVidia that I'd be spending the readies on....

D.
 

SharkyUK

ClioSport Club Member
Personally, I'd wait a month or two and see what the new nVidia card is like (GF100). It's supposed to be released sometime in Q1 so... Even if it's crap or too expensive, it might result in other cards being reduced in price.
 

Darren S

ClioSport Club Member
I'll take your advice and invest £300 or so on one then. Cheers.

Put another way m8 - I can run Mass Effect 2, Bioshock and Unreal Tournament 3 in 1920x1200 with most of the bells and whistles on - and I've only got a dual core AMD X2 6400 as well!

I think if you combine a current ATi card with your existing quad-core setup, you should be more than fine.

D.
 
  Not a 320d
Right, so a 5870, and keep my 8800GT to handle physics as I have 2 PCIe slots - good idea?

Also a new power supply. I'll wait a while before I get it; I think there are going to be some new releases soon, which should bring them down in price.
 
  Monaro VXR
Using an 8800GT as a PhysX card is a bit pointless - they're also power-hungry and rather toasty cards.

Very few games make massive use of PhysX anyway. Also, it's not officially supported and has to be done with workarounds.
 
  Fiesta ST
It can be a pain in the arse to get the combination working also.

I got the 5770 and it plays anything I chuck at it, and it was only £120.
 
  Not a 320d
Is the 8800GT a bad idea then? Power isn't going to be an issue; I just thought that as it was possible to do (all but it being a pain in the ass) it would be worth keeping it for the physics support. I'd like to have DX11 support - there are a few games coming out I'm really looking forward to.
 

Darren S

ClioSport Club Member
Is the 8800GT a bad idea then? Power isn't going to be an issue; I just thought that as it was possible to do (all but it being a pain in the ass) it would be worth keeping it for the physics support. I'd like to have DX11 support - there are a few games coming out I'm really looking forward to.

If you want DX11 support, then the 5700, 5800 or even 5900 series of ATi cards are really the way to go. The 5700s are great cards for the money. Personally, I'd get two if my mobo supported Crossfire. :)

As for the PhysX support - that's a toughie, m8. I nabbed a 2nd-hand PhysX PCI card off eBay for around £30 and tbh, it's well worth it on the games that support it. Principally those running the Unreal Engine - both Unreal Tournament 3 and Rainbow Six: Vegas 2 showed a significant improvement after I fitted the PhysX card.

Don't forget that PhysX isn't the only PPU option. Anything using the Havok engine is geared to use the functionality of the ATi cards. If you can get something like the following, however, then it's well worth it, imo...

http://cgi.ebay.co.uk/BFG-Ageia-Phy...raphics_Video_TV_Cards_TW?hash=item19ba30ae10

Having had a quick search on eBay, there's an auction with about 25mins left! :D

D.
 
  Coupe/Defender V8
The 8800GT is still pretty pokey, but it shows its age at anything over 1600x1200. The 5700s are good bang for buck.

nVidia's Fermi is due at the end of March.

Edit: Misread, the 9800GT for PhysX is the sweet spot tbh.

Best bang for buck at the moment would be the Sapphire 2GB 4870 Vapor-X.
 
  Monaro VXR
The Ageia PhysX cards are no longer going to be supported either, so not worth spending money on those.

nVidia are basically going to make them unusable so you have to invest in one of their cards. Bit annoying really. I mean, they would surely make more money if they took something like a 9800GT, turned it into a dedicated PhysX card and sold it for £150 - people would buy it.
 

Darren S

ClioSport Club Member
The 8800GT is still pretty pokey, but it shows its age at anything over 1600x1200. The 5700s are good bang for buck.

nVidia's Fermi is due at the end of March.

Edit: Misread, the 9800GT for PhysX is the sweet spot tbh.

Best bang for buck at the moment would be the Sapphire 2GB 4870 Vapor-X.

The 4870 cards are really impressive. Mine still performs well, even though it's the 512MB version. Just nabbed a 4890 with 1GB off Fleabay though, which I'm hoping will help at the higher resolutions a little. Would have liked to have got the XFX XXX version, mind. :)

D.
 
  Coupe/Defender V8
tbh PhysX is a huge marketing gimmick. It's poorly implemented and heavily CPU-dependent. Although it looks the nuts in Batman: AA. Hardly worth having an extra card in your machine though, like.
 

Darren S

ClioSport Club Member
The Ageia PhysX cards are no longer going to be supported either, so not worth spending money on those.

nVidia are basically going to make them unusable so you have to invest in one of their cards. Bit annoying really. I mean, they would surely make more money if they took something like a 9800GT, turned it into a dedicated PhysX card and sold it for £150 - people would buy it.

Very true m8. But for the price of a current game, it's an easy install and one that can add some much needed clout to supported games.

I wish there was an industry standard on physics effects - that all the manufacturers complied with.

D.
 
  Not a 320d
Is it really worth spending the extra?

I bought an 8800GT for about £90-100 because it was cheap and could handle most games out there at the time - it even runs Crysis fairly well. It just seems a bit overkill spending £350. I'd rather skimp and put the money towards a few detailing products for my car or some brakes. I do play games now and again, but I don't want to spend money on my PC where it's not REALLY needed or necessary, as it'll end up out of date in a few months. In April I should have about £800 spare, and that's after I've budgeted for Le Mans, the spending money for Le Mans and a remap for the car.

Truth be told, there are only 2 games I really want to make sure I can play well, at a reasonable resolution and full graphics at a decent FPS: one's Metro 2033, which I think will support DX11, and the other's Splinter Cell.

If I'm going to be able to do this I'll need both a new power supply, as my 520W is stretched as it is, and also the new graphics card. I already have 4GB of 1066 DDR2 and a Q6600 running at 3.2GHz.

So what would meet my requirements? I'm not fussed about future-proofing as such - I just want to be able to play the games mentioned.

:eek:
 

Darren S

ClioSport Club Member
Is it really worth spending the extra?

I bought an 8800GT for about £90-100 because it was cheap and could handle most games out there at the time - it even runs Crysis fairly well. It just seems a bit overkill spending £350. I'd rather skimp and put the money towards a few detailing products for my car or some brakes. I do play games now and again, but I don't want to spend money on my PC where it's not REALLY needed or necessary, as it'll end up out of date in a few months. In April I should have about £800 spare, and that's after I've budgeted for Le Mans, the spending money for Le Mans and a remap for the car.

Truth be told, there are only 2 games I really want to make sure I can play well, at a reasonable resolution and full graphics at a decent FPS: one's Metro 2033, which I think will support DX11, and the other's Splinter Cell.

If I'm going to be able to do this I'll need both a new power supply, as my 520W is stretched as it is, and also the new graphics card. I already have 4GB of 1066 DDR2 and a Q6600 running at 3.2GHz.

So what would meet my requirements? I'm not fussed about future-proofing as such - I just want to be able to play the games mentioned.

:eek:

Post 9 it is then..... Longy has a 5770 and it's fine from what he says. It also supports DX11 - and if your mobo supports Crossfire, you should be able to whack another 5770 in there at some point in the future, for not much money. :)

D.
 

SharkyUK

ClioSport Club Member
tbh PhysX is a huge marketing gimmick. It's poorly implemented and heavily CPU-dependent. Although it looks the nuts in Batman: AA. Hardly worth having an extra card in your machine though, like.

Hmmm, PhysX isn't/wasn't a marketing gimmick. I think the intention was good but it was doomed to fail right from the off. Firstly, despite offering a fairly impressive processing capability, the read/write to and from the card was slow and involved far more CPU overhead than it should. There wasn't much point in executing multiple streams of instructions if the results took orders of magnitude longer to fetch from the card! Secondly, as the GPU market continued its technological advance (i.e. the step up in power and capability with each successive generation of GPU), the PhysX card was never going to be able to compete. For example, the GPUs around today are more general-purpose than ever and not purely for graphics-intensive tasks - hence the reason why physics is now popular running on the GPU (as well as other tasks such as HD audio processing). This trend will continue for the foreseeable future too, I expect.
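To put it in code terms, something like this rough CUDA sketch (just knocked up for illustration - the kernel, buffer size and numbers are all made up) shows where the time really goes: a trivial per-object update versus copying the results back over the bus. On a small per-frame batch, the copy back is usually what dominates.

// Rough sketch only: times a trivial "physics" kernel against reading
// its results back to the host. Values and names are assumed.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dampVelocities(float* vel, int n, float damping)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        vel[i] *= damping;           // trivial per-object "physics" work
}

int main()
{
    const int n = 4096;              // small per-frame batch (assumed)
    float* d_vel = nullptr;
    cudaMalloc(&d_vel, n * sizeof(float));
    cudaMemset(d_vel, 0, n * sizeof(float));
    float* h_vel = (float*)malloc(n * sizeof(float));

    cudaEvent_t t0, t1, t2;
    cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

    cudaEventRecord(t0);
    dampVelocities<<<(n + 255) / 256, 256>>>(d_vel, n, 0.99f);
    cudaEventRecord(t1);
    cudaMemcpy(h_vel, d_vel, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaEventRecord(t2);
    cudaEventSynchronize(t2);

    float kernelMs = 0.0f, readbackMs = 0.0f;
    cudaEventElapsedTime(&kernelMs, t0, t1);      // compute time
    cudaEventElapsedTime(&readbackMs, t1, t2);    // off-card transfer time
    printf("compute: %.3f ms, readback: %.3f ms\n", kernelMs, readbackMs);

    free(h_vel);
    cudaFree(d_vel);
    return 0;
}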

Just my two cents worth by the way... :eek: :D
 
  Not a 320d
Post 9 it is then..... Longy has a 5770 and it's fine from what he says. It also supports DX11 - and if your mobo supports Crossfire, you should be able to whack another 5770 in there at some point in the future, for not much money. :)

D.


How well does it play it, though? Oh, and I made a mistake - Metro is only DX10. I think for that reason I'll go splash out £250 on a 5850. I'm sure a card like that will be able to handle DX10.
 

Darren S

ClioSport Club Member
How well does it play it, though? Oh, and I made a mistake - Metro is only DX10. I think for that reason I'll go splash out £250 on a 5850. I'm sure a card like that will be able to handle DX10.

The 5850 will have some decent clout, m8. I've got a 4870 (soon to be 4890) and for the most part, it can handle a lot of bells and whistles being switched on in games.

Of course, you'll get the hardware killers - Crysis being one of them - but at higher resolutions, having 1GB of memory onboard at least does really help out. This is one of the reasons why I'm switching from a 512MB 4870 to a 1GB 4890 - simply as I run most games at 1920x1200 resolution.

Personally, if you can afford a 5800 series card, then I'd go for that - but the cheaper 5700 series are very good performers in their own right.

D.
 
  Coupe/Defender V8
Hmmm, PhysX isn't/wasn't a marketing gimmick. I think the intention was good but it was doomed to fail right from the off. Firstly, despite offering a fairly impressive processing capability, the read/write to and from the card was slow and involved far more CPU overhead than it should. There wasn't much point in executing multiple streams of instructions if the results took orders of magnitude longer to fetch from the card! Secondly, as the GPU market continued its technological advance (i.e. the step up in power and capability with each successive generation of GPU), the PhysX card was never going to be able to compete. For example, the GPUs around today are more general-purpose than ever and not purely for graphics-intensive tasks - hence the reason why physics is now popular running on the GPU (as well as other tasks such as HD audio processing). This trend will continue for the foreseeable future too, I expect.

Just my two cents worth by the way... :eek: :D


Did you ever try a PhysX card? I'd say no at a guess. They worked. nVidia PhysX doesn't, at least not very well, for the said reasons.
 
  Not a 320d
Thanks for all the help everyone. Been a huge help!! I'll pay the extra for the 5800 card just so I've got something that should handle things for a while as games progress.
 

SharkyUK

ClioSport Club Member
Did you ever try a PhysX card? I'd say no at a guess. They worked. nVidia PhysX doesn't, at least not very well, for the said reasons.
Indeed I did/have. I stand by my comments, having had firsthand experience of them... I've worked 13 years as a 3D graphics programmer on various games and tech projects, including working with nVidia and Ageia, so... I have a pretty good idea, thanks.

Cheers for the reply, though. :rolleyes:

I'm not saying the PhysX cards didn't work. They were just painfully slow and badly implemented, and a dedicated physics co-pro was never going to win against the increasingly general-purpose GPUs/processors found in today's systems.
 

Darren S

ClioSport Club Member
Indeed I did/have. I stand by my comments, having had firsthand experience of them... I've worked 13 years as a 3D graphics programmer on various games and tech projects, including working with nVidia and Ageia, so... I have a pretty good idea, thanks.

Cheers for the reply, though. :rolleyes:

I'm not saying the PhysX cards didn't work. They were just painfully slow and badly implemented, and a dedicated physics co-pro was never going to win against the increasingly general-purpose GPUs/processors found in today's systems.

It's a shame that the dedicated physics co-pro never really found the right backing and/or development. I mentioned it on a similar thread on here ages ago about the 'potential' of a pure physics co-pro - certainly within games and simulations.

Imagine having a co-pro working 100% to calculate suspension travel, braking forces and cornering Gs within a racing game, for example? Or even extreme calculations such as tyre wear and aerodynamics? You could even start adding ballistic effects of rounds in an FPS game from air density, gravity and wind?
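Even something as basic as a spring-damper sum per wheel - F = -kx - cv - is the kind of small, per-frame calculation I mean. A rough sketch below (constants plucked out of the air, just to show the shape of it):

// Hypothetical illustration only - stiffness/damping values are made up.
struct WheelState {
    float compression;      // suspension travel from rest (m)
    float compressionRate;  // how fast it's compressing (m/s)
};

__host__ __device__ float suspensionForce(WheelState w)
{
    const float k = 60000.0f;  // spring stiffness (N/m), assumed
    const float c = 4500.0f;   // damper coefficient (N s/m), assumed
    return -k * w.compression - c * w.compressionRate;  // F = -kx - cv
}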

It would (on the face of it) all seem like complete overkill for a fleeting second of on-screen action. Maybe that's one of the reasons why the uptake wasn't as good as it should have been?

D.
 
  Punto/Clio GTT
I bought a 5750 a few days ago as my nVidia 8800GTS decided to blow up.

I gained 16fps in a game I play regularly.

Also, Catalyst is pretty damn good software - the ability to overclock with ATI software :)
 

SharkyUK

ClioSport Club Member
It's a shame that the dedicated physics co-pro never really found the right backing and/or development. I mentioned it on a similar thread on here ages ago about the 'potential' of a pure physics co-pro - certainly within games and simulations.

Imagine having a co-pro working 100% to calculate suspension travel, braking forces and cornering Gs within a racing game, for example? Or even extreme calculations such as tyre wear and aerodynamics? You could even start adding ballistic effects of rounds in an FPS game from air density, gravity and wind?

It would (on the face of it) all seem like complete overkill for a fleeting second of on-screen action. Maybe that's one of the reasons why the uptake wasn't as good as it should have been?

D.

In terms of the PhysX co-pro... it was a case of bad timing as much as anything. When it was released, GPUs were the new, big 'in-thing' and very much in the ascendancy. With the evolving power of the GPUs and the increasing shader instruction set, developers soon realised that a lot of non-graphical work could actually be carried out on the GPU... and often a lot quicker, too, and with less overhead (unless trying to read back results from the GPU/card, which is always very slow with older hardware).

Simply put, the GPUs - coupled with the latest shader instruction sets - allow the developer to treat the GPU more like a general-purpose processor (and a very fast one too, with massive parallel processing capability if programmed correctly). The GPU has got to the point where it's so 'general purpose' that we now see PhysX-type technologies running on the GPU (physics being only one example of usage). The distinction between the GPU and one of the processors in your multi-core PC/laptop is getting less and less; but obviously the GPU has specialist hardware for all of the graphics magic it deals with AND it has phenomenal number-crunching abilities to boot. CUDA, and other languages, are now making the GPU much more accessible and all the easier to take advantage of.
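As a rough idea of what that looks like in practice (a minimal sketch only - particle counts, timestep and function names are all made up), here's a simple Euler integration step written as a CUDA kernel. Each GPU thread advances one particle, and the data stays resident in GPU memory between frames, which avoids the slow readback mentioned above:

// Minimal physics-on-the-GPU sketch; not production code.
#include <cuda_runtime.h>

struct Particle {
    float3 pos;
    float3 vel;
};

__global__ void integrate(Particle* p, int n, float dt, float3 gravity)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // One thread per particle - massively parallel on the GPU.
    p[i].vel.x += gravity.x * dt;
    p[i].vel.y += gravity.y * dt;
    p[i].vel.z += gravity.z * dt;
    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}

void stepSimulation(Particle* d_particles, int count, float dt)
{
    // d_particles lives in GPU memory and never has to come back to the
    // host just to be simulated - it can be rendered straight from there.
    const int block = 256;
    const float3 g = make_float3(0.0f, -9.81f, 0.0f);
    integrate<<<(count + block - 1) / block, block>>>(d_particles, count, dt, g);
}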

Of course, there is another option... multicore/processor systems are pretty much the norm these days. It's quite possible that a core/processor (as in one of those comprising your CPU) could be used pretty much exclusively to run a particular system - such as physics simulation, etc. The problems come when you try to synchronise everything together but that's a whole different kettle of fish.

Apologies for the ramble, typed in a bit of a rush.
 

Darren S

ClioSport Club Member
In terms of the PhysX co-pro... it was a case of bad timing as much as anything. When it was released, GPUs were the new, big 'in-thing' and very much in the ascendancy. With the evolving power of the GPUs and the increasing shader instruction set, developers soon realised that a lot of non-graphical work could actually be carried out on the GPU... and often a lot quicker, too, and with less overhead (unless trying to read back results from the GPU/card, which is always very slow with older hardware).

Simply put, the GPUs - coupled with the latest shader instruction sets - allow the developer to treat the GPU more like a general-purpose processor (and a very fast one too, with massive parallel processing capability if programmed correctly). The GPU has got to the point where it's so 'general purpose' that we now see PhysX-type technologies running on the GPU (physics being only one example of usage). The distinction between the GPU and one of the processors in your multi-core PC/laptop is getting less and less; but obviously the GPU has specialist hardware for all of the graphics magic it deals with AND it has phenomenal number-crunching abilities to boot. CUDA, and other languages, are now making the GPU much more accessible and all the easier to take advantage of.

Of course, there is another option... multicore/processor systems are pretty much the norm these days. It's quite possible that a core/processor (as in one of those comprising your CPU) could be used pretty much exclusively to run a particular system - such as physics simulation, etc. The problems come when you try to synchronise everything together but that's a whole different kettle of fish.

Apologies for the ramble, typed in a bit of a rush.

No apologies needed - good to read that above!

So where do you think things will head over the next 5-10 years? The possibility of cheaper, multiple GPUs per card? Or would we quickly start to see bottlenecks at the interface/bus level then?

Things like triple-SLi intrigue me, but having seen an average increase in throughput (at best) with my own SLi setup in the past (HL2 Lost Coast benchtest - single 7900GTX = 90fps, SLi'd 7900GTX = 112fps, single ATi 3850 = 160fps+) - again, are these just more or less gimmicks?

I guess as you say, whatever the hardware - the true benefits are only seen if 'programmed correctly'. Sloppy or lazy code would hamper even the best cutting edge hardware, I would have thought?

Have you had any experience of the ATi Fire cards, m8? They always seem to command a premium when on paper, the specs merely look ok-ish. I've never needed to use any professional gfx setup, so I'm in the dark with that. :)

D.
 

SharkyUK

ClioSport Club Member
So where do you think things will head over the next 5-10 years? The possibility of cheaper, multiple GPUs per card? Or would we quickly start to see bottlenecks at the interface/bus level then?

Hi, D.

I think that the next 5-10 years will see very similar advances to what we've seen over the last 5 years; i.e.
  • GPUs getting faster with each generation
  • More shader units (for effectively running multiple shader programs across multiple units at the same time)
  • Extensions to future shader languages so that they become more like a 'traditional' programming language (such as C/C++) and offer greater flexibility - such as improved branching capabilities and similar fundamental coding constructs
  • Increased memory capacities so that more data (textures, 3D geometry, shader programs) can persist and be held in GPU memory without having to request it in from other parts of the system (such as system memory... sloooooow)
  • Improvements to bus architectures to allow faster data transfer to the GPU (and, perhaps more importantly, faster writebacks from the GPU!)
  • Multiple GPU cores on single/dual card solutions rather than 3- and 4-SLI type setups
  • Smaller die-processes to keep energy consumption down, heat levels lower and performance higher
  • Evolving graphics APIs (like DirectX) leaning more towards parallelism and multithreaded support (as already seen in DirectX 11 and onwards)
You made an interesting point about the interface/bus level and bottlenecks; the sad truth is that we are already massively restricted by the bus and current architectures. Sure, a faster architecture could be developed, but it would mean throwing away a lot of the current technology and starting again, PLUS it would be significantly more expensive to produce a system that could keep pace with the speed of modern-day CPUs and GPUs. Faster memory (main, L1 and L2) is all well and good, but when the path between them and other components is orders of magnitude slower, there's always going to be performance degradation.

Things like triple-SLi intrigue me, but having seen an average increase in throughput (at best) with my own SLi setup in the past (HL2 Lost Coast benchtest - single 7900GTX = 90fps, SLi'd 7900GTX = 112fps, single ATi 3850 = 160fps+) - again, are these just more or less gimmicks?

SLI/Crossfire setups are all well and good but sometimes the premium paid isn't worth it in terms of bang for your buck (as you've experienced yourself). For a multi-GPU setup in such a configuration to work optimally, the balance of workload and throughput to the cards has to be scheduled carefully - and this is far from trivial. It's all too easy to stall the pipeline and introduce bottlenecks, thus reducing framerates by some considerable amount. And with each card and manufacturer being different, as are the configurations in terms of memory, shader units, etc., there is no easy way to ensure maximum performance in all cases. Personally I tend to go for the quickest single-card solution I can find at the time I'm purchasing, because I know that in 18 months the next revision of my single-card solution will be on a par (in terms of performance) with the multiple-GPU solutions currently available.

Of course, with the growing use of GPUs for other tasks as well as purely graphics operations - that extra GPU/card may be useful... as already mentioned, it could be utilised as a physics co-pro, video encoder/decoder, HD audio processor, ...

I guess as you say, whatever the hardware - the true benefits are only seen if 'programmed correctly'. Sloppy or lazy code would hamper even the best cutting edge hardware, I would have thought?

Indeed! Even the most experienced and capable programmers can make a cutting edge system crawl. Scarily it doesn't take much! Thankfully there are a lot of tools and development practices in place now that go some way to ensure that games/software make use of the now widespread multicore/processor/multithreaded systems available. It's been quite a steep learning curve for a lot of game developers I think as 'parallel' and threaded coding was something more typical of academia... but not anymore (and hasn't been for quite some years now).

Have you had any experience of the ATi Fire cards, m8? They always seem to command a premium when on paper, the specs merely look ok-ish. I've never needed to use any professional gfx setup, so I'm in the dark with that. :)

Yes mate, I've had firsthand experience of the ATI FirePro/FireGL, etc. as well as the nVidia Quadro series. They do command a premium as they often contain some nifty (if sometimes subtle) differences to more game-focused cards. The thing to remember is that they are more geared towards the CAD and 3D modelling sectors, hence tend to excel in areas that aren't particularly relevant to gamers. An example of this is the colour processing. The cards we use for games typically work in 32-bit colour (24 bits for colour, 8 bits alpha), which gives us the 16.7 million colours available for display. Whilst sounding impressive, this doesn't really cut it for high-end movie production, CAD, and suchlike. So, these cards (the Fire and Quadro series) often work in 30-bit colour - giving a vastly improved range of colours (well over 1 billion). Some even work internally at 128-bit colour and, well, that's huge! This makes a difference in those non-gaming sectors. Another difference can be the way in which primitives are rendered... if we take the gamer GPU, then these are optimised for super-fast transformation and rendering of polygons. This is also beneficial to the Fire/Quadros, but they may trade some performance here and have additional grunt for rendering point and line primitives (an example being wireframe 3D models) instead. You'll often find that some of these cards also tend to offer more output options - such as supporting more monitors out of the box, higher resolutions, and suchlike. I could go on but it gets boring...
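The colour counts fall straight out of the bit depths, by the way - a quick sanity check (just plain host-side code, nothing card-specific, added purely for illustration):

// 2^24 vs 2^30 - the figures quoted above.
#include <cstdio>

int main()
{
    long long colours24 = 1LL << 24;  // 24-bit colour (plus 8-bit alpha)
    long long colours30 = 1LL << 30;  // 30-bit colour (10 bits per channel)
    printf("24-bit: %lld colours (~16.7 million)\n", colours24);
    printf("30-bit: %lld colours (~1.07 billion)\n", colours30);
    return 0;
}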

In a nutshell, some of the gaming GPUs are derived from Fire/Quadro setups but then tailored for performance. To summarise in a (long) sentence - game-focused GPUs are designed for super-fast performance with comparatively light resource requirements, whilst Fire/Quadro GPUs are designed to handle large resource requirements with precision. I hope that makes sense...

I wouldn't recommend buying a Fire/Quadro if you're a gamer as you'll likely experience performance hits. If you're a 3D modeller, video editor or similar then it's a different story.

Cheers. :D
 

Darren S

ClioSport Club Member
Brilliant m8 - probably one of the most informative posts I've read on here! :cool:

The whole Fire/Quadro thing makes sense compared to the mainstream 'gamer' cards. I guess there might also be a factor of economies of scale in there affecting the price too? I'd hazard a guess that proportionally, both nVidia and ATi sell a lot more gamer cards than high-end pro ones - hence bumping up the price of the latter by quite some margin?

The first add-on card I got (well, one that had 'accelerator' in the title anyway - lol) was a Matrox Mystique with 2MB onboard. After a year, I bought the daughterboard upgrade to take it to 4MB in total and gave my Windows 3.1 install the ability to view 32-bit colour. Great. But 110% utterly useless, as nothing I ran actually used that colour depth, so in a lot of ways it was money wasted.

Are Matrox still a contender in the high-end market? I know that back in the day, the Mystique and Millennium cards were pretty good, followed by the Parhelia (sp?) range? I've only recently been made aware of Matrox still churning out cards as a company, with some of the tri- and quad-monitor output cards like you mentioned above.

What spec of kit do you use on a daily basis? :)

D.
 

SharkyUK

ClioSport Club Member
Brilliant m8 - probably one of the most informative posts I've read on here! :cool:

Cheers - glad you found it interesting mate.

The whole Fire/Quadro thing makes sense compared to the mainstream 'gamer' cards. I guess there might also be a factor of economies of scale in there affecting the price too? I'd hazard a guess that proportionally, both nVidia and ATi sell a lot more gamer cards than high-end pro ones - hence bumping up the price of the latter by quite some margin?

I'm not sure how the pricing is affected by the difference in numbers of units sold to mainstream gamers, and those sold to the high-end video market... but my own thinking would be very much in line with your own. But, having said that, [some] high-end pro cards do tend to ship with more expensive components to cope with the demands of higher resource requirements and higher graphics fidelity (which I'll touch on in a moment).

The first add-on card I got (well, one that had 'accelerator' in the title anyway - lol) was a Matrox Mystique with 2MB onboard. After a year, I bought the daughterboard upgrade to take it to 4MB in total and gave my Windows 3.1 install the ability to view 32-bit colour. Great. But 110% utterly useless, as nothing I ran actually used that colour depth, so in a lot of ways it was money wasted.

Ah, the Matrox Mystique... I do remember, as I had one through my hands at some distant point in the past! My first 3D graphics card was the Diamond Edge 3D - which was also the first real consumer 3D graphics card. It cost a fortune, had limited game support (it pre-dated DirectX/Direct3D) and was based on the nVidia NV1 chipset. I remember being in awe of the silky-smooth framerates of Virtua Fighter and a few other titles. I also dabbled with various 3DFX cards (including running two Orchid Righteous 3Ds in SLI mode - although SLI was something different back in those days!) - and even had the PowerVR daughterboard (which required a separate 2D card). The PowerVR technology would go on to power the Sega Dreamcast and a few other graphics card solutions. I've always tended to prefer nVidia, but that's mainly due to their (IMHO) better drivers and the fact I've had the opportunity to work alongside some of their tech guys (who were very helpful).

Are Matrox still a contender in the high-end market? I know that back in the day, the Mystique and Millennium cards were pretty good, followed by the Parhelia (sp?) range? I've only recently been made aware of Matrox still churning out cards as a company, with some of the tri- and quad-monitor output cards like you mentioned above.

Matrox still factor considerably in the high-end market, although perhaps not so much for their Quadro/Fire equivalents. They offer a range of imaging hardware and solutions to the multimedia, video, visualisation, etc. sectors, but their top-end graphics hardware doesn't quite match the likes of nVidia's (at least in my experience). But where they do excel is in their ability to drive several displays. Their new Octal cards, for example, can drive 8 displays from a single card... not too shabby! I can't really comment much as I don't have experience of their latest-generation hardware.

What spec of kit do you use on a daily basis? :)

I'm quite lucky to be honest - I have some very nice spec hardware to play with, including the very latest nVidia hardware such as GTX 295s and similar. However, my personal workstation currently consists of a new Quadro FX 5800 (talking of high-end cards) as I'm very much working on heavy-duty simulations and visualisation at the moment. Remember how I was talking about high precision and fidelity? Well, this card comes with 64-bit precision for more accurate calculations and ultimately improved output... and a 4GB frame buffer. It's not as fast (bandwidth-wise) as the GTX 295 (about half the bandwidth) but it's still very capable and well geared towards the high-resolution texture work I currently do and the shader-heavy programs I'm writing. With 240 (if I remember right) CUDA-programmable cores it can be quite a potent beast. Have you seen/heard about nVidia Tesla clusters? They are quite interesting too... supercomputer performance using 960-core CUDA-friendly GPUs in parallel arrays. Great stuff if you're a geek like me... :eek: The Quadro is driving two DELL 30" flat-panel monitors, each at a resolution of 2560x1600.

CPU-wise, I'm using an Intel Xeon Quad Core machine primarily (Nehalem based) with 24GB RAM - running Windows 7 Ultimate (64-bit).

Cheers! :D
 
  Not a 320d
http://www.overclockers.co.uk/showproduct.php?prodid=MO-049-AC

Urgh, still umming and ahhing. Metro 2033 only uses DX10, and I doubt it will be demanding by the looks of it, although I doubt my 8800GT will cope. I'm tempted to wait now until April and buy the monitor above too. Is it any good?

Mine has dead pixels and I've never really liked the contrast ratio on it - it seems a bit light.

I'm almost settled on the 5870 XFX XXX too, but I'm still unsure if it will be overkill. Would a 5850 be all I need to play the likes of this year's games, or is it worth spending the extra £100?
 
  Clio 172 Cup
Sounds like your system is almost exactly the same as the one I have just sold.

I was running 2 eVGA 8800GT OCs and never found a problem with any game.

Maybe just buy another card and pair them up?

Cheap fix.
 

