
New toy! GPU and Geek content!



Yes... ... ... and no. :D

DX10 and DX11 obviously both build on the functionality provided by DX9, as well as introducing new capabilities and support for ever-evolving hardware. However, performance doesn't necessarily improve as the version number increases. Unfortunately, there is no one particular reason why this is the case either. It simply "depends" (talk about sitting on the fence!)

DX9 games were often found to run a little quicker than their DX10 counterparts in *some* instances. The reasons for this were many - for example, developers wanted to take advantage of the new features in DX10, wanted to pump extra geometry and textures to the card, etc. and inevitably traded some performance simply to have the "DX10" support box ticked. The same rings true with DX11 vs. DX9 (and DX10) - although DX11 seems to fare a little better when compared to DX9 than perhaps DX10 did. Again, there is no hard and fast reason as to why - it depends so much on the underlying hardware and driver revisions, to name just a couple of factors.

Why didn't DX10 and DX11 blow folks' socks off? I don't know... perhaps we (as 'hardcore' gamers) are expecting too much! From a developer point-of-view, I don't think that DX10 got the support it necessarily warranted... why? Well, it kind of came out at a time when developers were generally going through a change in the way they typically worked. That is, with the mass production and uptake of multicore/multiprocessor systems, developers had to refactor code bases, update engines, parallelise their code, etc. Coders out there know that this is no easy task at the best of times. The single/serial processing 'thread' era was abruptly coming to an end and multicore/processor was the way to go; hence a period of change for many.

Interestingly (more so with DX11 and future versions that are on the way) Microsoft have embraced the multiprocessor/core technology that is commonplace these days by introducing intelligent methods whereby DX can now render on separate thread(s) - something that was *very* tricky to do on earlier versions and something even trickier to do if you wanted half-decent performance!
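
If anyone's curious what that looks like in practice, here's a rough sketch of the DX11 'deferred context' approach - purely illustrative, and RecordSceneChunk is a made-up stand-in for whatever draw calls each worker thread would actually record:

#include <d3d11.h>
#include <thread>
#include <vector>

// Hypothetical helper: records one chunk of the scene into the given context.
void RecordSceneChunk(ID3D11DeviceContext* ctx, int chunkIndex);

void RenderFrameMultithreaded(ID3D11Device* device,
                              ID3D11DeviceContext* immediateContext,
                              int numWorkers)
{
    std::vector<ID3D11CommandList*> commandLists(numWorkers, nullptr);
    std::vector<std::thread> workers;

    for (int i = 0; i < numWorkers; ++i)
    {
        workers.emplace_back([&, i]()
        {
            // Each worker records draw calls on its own deferred context...
            ID3D11DeviceContext* deferred = nullptr;
            device->CreateDeferredContext(0, &deferred);
            RecordSceneChunk(deferred, i);

            // ...and bakes them into a command list for later playback.
            deferred->FinishCommandList(FALSE, &commandLists[i]);
            deferred->Release();
        });
    }
    for (auto& t : workers) t.join();

    // Playback is still serial on the immediate context, but the expensive
    // recording work was spread across the worker threads.
    for (auto* cl : commandLists)
    {
        immediateContext->ExecuteCommandList(cl, FALSE);
        cl->Release();
    }
}

Earlier DX versions essentially forced all of that work onto a single thread, which is why multi-threaded renderers were such a pain to get performing well.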

I'm getting boring now so will stop there. To summarise, there is no real definitive answer. LOL! :D

LOL :D

That's like the post equivalent of a man shrugging his shoulders :p
 

SharkyUK

ClioSport Club Member
Lol - I know so little! We have so much to learn, Master Yoda! :cool:

D.
LOL! No, no - not at all mate. The only difference is that yourself and most guys here have probably been out and about, socialising, enjoying yourselves and getting laid whilst I've been sat coding in a dark room for the last 25 years. I think you had the better deal. :lolup:

LOL :D

That's like the post equivalent of a man shrugging his shoulders :p
*shrugs shoulders* :D
 
  Evo 5 RS
DX10 is basically DX11. It's the same API with some new features added - so much so that DX10-based cards are capable of running more or less all of the features available in DX11. It's a very similar story to the DX9/DX10 overhaul. The now-dated UT3 Unreal engine was primarily coded in DX9, and that still stands as the platform for most even now.

It just boils down to the fact it is always industry driven... and now the climate has changed, it's slowed right down.

More important, however, is the bottleneck of the console generation (without going into the CPU bottleneck as well). Both the RSX in the PlayStation 3 and the Xenos of the 360 are running on 'old hat' and require trickery, and yet if you take away the high resolution and all the FSAA that having a meaty GPU has to offer, in essence it looks the same more often than not.

PC gaming, although always king in my books, is dying.


Mike
 

Darren S

ClioSport Club Member
DX10 is basically DX11. It's the same API with some new features added - so much so that DX10-based cards are capable of running more or less all of the features available in DX11. It's a very similar story to the DX9/DX10 overhaul. The now-dated UT3 Unreal engine was primarily coded in DX9, and that still stands as the platform for most even now.

It just boils down to the fact it is always industry driven... and now the climate has changed, it's slowed right down.

More important, however, is the bottleneck of the console generation (without going into the CPU bottleneck as well). Both the RSX in the PlayStation 3 and the Xenos of the 360 are running on 'old hat' and require trickery, and yet if you take away the high resolution and all the FSAA that having a meaty GPU has to offer, in essence it looks the same more often than not.

PC gaming, although always king in my books, is dying.


Mike

If this was the Middle Ages, I'd have you burnt at the stake for such blasphemy!!!!


(even if you may be right....)
:)

How is SLi these days? I remember years and years back I was one of the first to jump on board the SLi wagon and it had nothing but problems.

I had a twin 7900GTX setup back in 2006/2007 and it worked ok in XP, but Vista x64 hated it with a passion when I tried it on there.

As mentioned on here, a lot depends on the developer's ability (and more than likely, time) to maximise the benefit of having multiple GFX cards in there. I tried the Lost Coast (sp?) Half-Life 2 demo on my 7900GTX setup to record FPS. On a single card, it managed 90fps and when set to SLi-mode, it only managed 112fps. Compare that to my m8's new ATi 3850 at the time that was scoring 170+fps on its own.

I've got a 4890 now, with another arriving in the next day or two that I picked up cheap. I've never Crossfire'd before, but I'd like to think that Windows 7 x64 is a lot more tolerant of multiple GFX cards than Vista ever was.

D.
 
  Evo 5 RS
Being that it's always been hardware-based, it was always way too picky for my liking - it's a lot better now though mate - not limited to firmware version etc.

Still a waste of time though ;) (at the high end at least)
 
  BMW e46 320 Ci Sport
DX10 is basically DX11. It's the same API with some new features added - so much so that DX10-based cards are capable of running more or less all of the features available in DX11. It's a very similar story to the DX9/DX10 overhaul. The now-dated UT3 Unreal engine was primarily coded in DX9, and that still stands as the platform for most even now.

It just boils down to the fact it is always industry driven... and now the climate has changed, it's slowed right down.

More important, however, is the bottleneck of the console generation (without going into the CPU bottleneck as well). Both the RSX in the PlayStation 3 and the Xenos of the 360 are running on 'old hat' and require trickery, and yet if you take away the high resolution and all the FSAA that having a meaty GPU has to offer, in essence it looks the same more often than not.

PC gaming, although always king in my books, is dying.


Mike

what makes you believe it's dying? because more games come out for consoles? i can't believe that given the capabilities of a high spec pc the gaming community are eventually going to give up in favour of inferior consoles just because it appeals to the mass market more and gets more revenue. there will always be a place for pc gaming imo. look at rts games, forget playing them on a console.

maybe the future is a hybrid, where you can install an xbox/ps OS and play xbox games on a regular pc which you can update and upgrade as much as you like, dual booting windows/osx/linux etc? i'd probably buy into that, you could buy standardised systems or beefier ones which you could customise to meet your needs, sounds like a good idea to me.
 

SharkyUK

ClioSport Club Member
All this talk of SLI... I (like a few others no doubt) remember 'proper' SLI rendering back in the day... using a couple of Orchid Righteous 3D's in SLI mode (although that SLI mode was a little different to how it's done today).

Take a look at the spec of the aforementioned baby right here and try not to laugh too much! :D
 
  172 Cup
All this talk of SLI... I (like a few others no doubt) remember 'proper' SLI rendering back in the day... using a couple of Orchid Righteous 3D's in SLI mode (although that SLI mode was a little different to how it's done today).

Take a look at the spec of the aforementioned baby right here and try not to laugh too much! :D

Those were the days. I can remember buying a Matrox Mystique and being blown away by the "performance" lol
 
  Evo 5 RS
aye, the old 3DFX Sli combined memory-output as opposed to processing power

Scan Line Interleave, whereas now it means Scalable Link Interface. it was a lot more effective back then tbh
 

Darren S

ClioSport Club Member
Lol! The Orchid Righteous 3D! I paid £245 for mine from PC World Manchester (yes, naive in those days!). I'm sure I still have the original driver CDs for that at home.

I got a Matrox Mystique as well and paid for the 2MB daughterboard for it. Iirc, that was around £65 for the memory upgrade alone.

D.
 

Darren S

ClioSport Club Member
Lol! The Orchid Righteous 3D! I paid £245 for mine from PC World Manchester (yes, naive in those days!). I'm sure I still have the original driver CDs for that at home.

D.


Lol - look what I found. Bring back any memories, Sharky?! ;)


[attached photo: DSCN4490.jpg]



D.
 

SharkyUK

ClioSport Club Member
Lol - look what I found. Bring back any memories, Sharky?! ;)
Yep - sure does mate! Nice find... :cool:

Somewhere (at my folks place) I *think* I still have my Diamond Edge 3D card. This was based on the nVidia NV1 chipset and predates DirectX / Direct3D as we know it. I remember playing Virtua Fighter and Panzer Dragoon on it and being blown away by the silky smooth 3D graphics. It was the first real commercial 3D offering to market... which also incorporated 2D graphics, [cr@p] soundcard and game port support for Sega Saturn controllers. I also seem to remember it costing somewhere in the region of 400 quid.

I then bought the PowerVR card when that came out (which needed a 2D card already in place). The PowerVR tile-rendering technique never really took off in a big way (which was a shame as it worked wonders on the Dreamcast!) Happy days. :eek:

My earliest 'real' PC graphics card was a 512kb (yes, kilobyte!) Trident which I then upgraded to a 1MB Cirrus Logic job. (Prior to that I was a 16-bit Atari ST/Amiga type). Remember ISA and VLB (VESA local bus) before AGP and PCI / PCI-e took hold?! The powaah!
 
  alien green rs133
i remember running crossfire when the only place to get patch leads was from the usa, ended up getting paypal hacked for that transaction, all for the sake of 30 quid. and then crossfire was s**t, so got a nvidia card, happy days
 

Darren S

ClioSport Club Member
Yep - sure does mate! Nice find... :cool:

Somewhere (at my folks place) I *think* I still have my Diamond Edge 3D card. This was based on the nVidia NV1 chipset and predates DirectX / Direct3D as we know it. I remember playing Virtua Fighter and Panzer Dragoon on it and being blown away by the silky smooth 3D graphics. It was the first real commercial 3D offering to market... which also incorporated 2D graphics, [cr@p] soundcard and game port support for Sega Saturn controllers. I also seem to remember it costing somewhere in the region of 400 quid.

I then bought the PowerVR card when that came out (which needed a 2D card already in place). The PowerVR tile-rendering technique never really took off in a big way (which was a shame as it worked wonders on the Dreamcast!) Happy days. :eek:

My earliest 'real' PC graphics card was a 512kb (yes, kilobyte!) Trident which I then upgraded to a 1MB Cirrus Logic job. (Prior to that I was a 16-bit Atari ST/Amiga type). Remember ISA and VLB (VESA local bus) before AGP and PCI / PCI-e took hold?! The powaah!

Lol - I had one of those too. I'm sure it was a full-length card as well. Had acres of green circuit board for apparently not much circuitry! It was a massive card - I'm pretty sure Clio V6's could do a 3-point turn on there with room to spare! ;)

D.
 
  Fiesta ST
I had an ATi Xpert@Play 4MB card (I think) for 2D and the good old Orchid Righteous 3DFX for 3D action. Mech Warrior was awesome :) I also remember upgrading my PC's Green Monitor to a CGA one! 4 colours I think.

Currently running 2 x 5770's in Crossfire and it runs great!
 

SharkyUK

ClioSport Club Member
Lol - I had one of those too. I'm sure it was a full-length card as well. Had acres of green circuit board for apparently not much circuitry! It was a massive card - I'm pretty sure Clio V6's could do a 3-point turn on there with room to spare! ;)

D.

PMSL! You're probably right mate. Slightly off-topic but talking of big cards... did anyone else have a SoundBlaster AWE32? I had to upgrade my case to fit that big mother inside!
 
  172 Cup
The Heaven benchmark is designed with tessellation in mind. That's the main deciding factor in the end FPS/score.

Mine without Tessellation enabled -

[screenshot: 170ee561.jpg]


What Tessellation does -

[image: 12573821526izM8p4LAl_1_3_l.png]


[image: 12573821526izM8p4LAl_1_9_l.png]
 

SharkyUK

ClioSport Club Member
Ah - you were clearly posh. I had to make do with a SoundBlaster 16ASP! ;)

Posh? Far from it mate - believe me! LOL!

Tessellation, in a nutshell, is the 'automatic' addition of vertices within a polygonal mesh to give a more accurate representation of a surface. For example, curves may well appear smoother as a result and higher-frequency details become more noticeable (as seen in the pics). Tessellation = more 3D geometry. (This is a very basic interpretation of it).
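
If it helps to picture it, here's a toy CPU-side version of the same idea - nothing to do with the actual DX11 hardware tessellator, just to show that inserting vertices gives a better approximation of a curved surface (a unit sphere in this made-up example):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 p0, p1, p2; };

// Midpoint of an edge, pushed back out onto the unit sphere (the 'true' surface).
static Vec3 MidpointOnSphere(const Vec3& a, const Vec3& b)
{
    Vec3 m{ (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    float len = std::sqrt(m.x * m.x + m.y * m.y + m.z * m.z);
    return { m.x / len, m.y / len, m.z / len };
}

// Split every triangle into four; each pass quadruples the geometry and the
// mesh hugs the sphere more closely - i.e. tessellation = more 3D geometry.
std::vector<Triangle> TessellateOnce(const std::vector<Triangle>& mesh)
{
    std::vector<Triangle> out;
    out.reserve(mesh.size() * 4);
    for (const Triangle& t : mesh)
    {
        Vec3 ab = MidpointOnSphere(t.p0, t.p1);
        Vec3 bc = MidpointOnSphere(t.p1, t.p2);
        Vec3 ca = MidpointOnSphere(t.p2, t.p0);
        out.push_back({ t.p0, ab, ca });
        out.push_back({ ab, t.p1, bc });
        out.push_back({ ca, bc, t.p2 });
        out.push_back({ ab, bc, ca });
    }
    return out;
}

The real thing is done on the GPU, of course, which is what makes it cheap enough to use in games.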
 

Darren S

ClioSport Club Member
Posh? Far from it mate - believe me! LOL!

Tessellation, in a nutshell, is the 'automatic' addition of vertices within a polygonal mesh to give a more accurate representation of a surface. For example, curves may well appear smoother as a result and higher-frequency details become more noticeable (as seen in the pics). Tessellation = more 3D geometry. (This is a very basic interpretation of it).

Sounds like a bit of a pig for the programmer, I would imagine? Adding more polygon count to an object is one thing, but I guess you have to keep true to the original 'shape'?

Can graphics cards take that big increase in polygons easily in their stride these days? Or is that purely down to the amount of tessellation applied to the scene?

Cheers,
D.
 

SharkyUK

ClioSport Club Member
Sounds like a bit of a pig for the programmer, I would imagine? Adding more polygon count to an object is one thing, but I guess you have to keep true to the original 'shape'?

Can graphics cards take that big increase in polygons easily in their stride these days? Or is that purely down to the amount of tessellation applied to the scene?

Cheers,
D.
It's not too bad if your 3D geometry is authored with tessellation in mind... and if it is it's not so much of a problem for the coder as tessellation is performed by the hardware. Which is nice. That's why it is absolutely necessary to have a DX11 capable GPU to get hardware tessellation with DX at the moment. It was slated for release in DX10 but sadly never quite made it.

You basically supply a set of control points which define a patch; the patch being passed to a 'hull' shader (a hull is basically the shell of a 3D model). The hull shader determines the level of tessellation (i.e. how much additional geometry to generate, i.e. vertices) and this is passed onto the tessellator that generates the actual geometry. This is then further processed by another shader (the 'domain' shader) which calculates vertex positions, texture coordinates, etc. and then passes it onto the geometry shader for further processing and ultimately (down the line) rasterising; blatting it to screen. Gotta love modern hardware advances!
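
For the coders, the DX11-side plumbing is roughly as below - the shader objects are assumed to have been compiled and created elsewhere, so this is just a sketch of how the extra stages get bound, not a full renderer:

#include <d3d11.h>

void BindTessellationPipeline(ID3D11DeviceContext* ctx,
                              ID3D11VertexShader* vs,
                              ID3D11HullShader*   hs,  // decides the tessellation factors
                              ID3D11DomainShader* ds,  // positions the generated vertices
                              ID3D11PixelShader*  ps)
{
    // The input assembler is fed patches of control points rather than plain triangles.
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);

    ctx->VSSetShader(vs, nullptr, 0);
    ctx->HSSetShader(hs, nullptr, 0);  // hull stage
    ctx->DSSetShader(ds, nullptr, 0);  // domain stage (runs after the fixed-function tessellator)
    ctx->PSSetShader(ps, nullptr, 0);

    // ...then issue the draw calls for the patches as normal, e.g. ctx->DrawIndexed(...).
}

The tessellator itself sits between the hull and domain stages and has no programmable shader of its own - you just feed it the factors produced by the hull shader.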

Modern cards can handle enormous polygon counts these days D. High tessellation certainly does have an impact but the numbers even a modest card can handle are pretty impressive. It's not so much the number of polys (up to a point)... it's the number of 'batches' of data that get sent to the card that determines performance, along with the number of state and texture switches. The more data you can send to the card with the fewer interactions, the better. If that makes sense. If not, take the following example...

In both examples I am rendering a 10 million polygon model with two basic textures and a simple light (using an i7 core and 480 GTX). This is a very simplified example though...

Case A:
Set texture 1
Begin drawing all polys that use texture 1
Start drawing polygon 1 -> Draw polygon 1 -> End drawing polygon 1
Start drawing polygon 2 -> Draw polygon 2 -> End drawing polygon 2
" "
Start drawing polygon N -> Draw polygon N -> End drawing polygon N

Set texture 2
Begin drawing all polys that use texture 2
Start drawing polygon N+1 -> Draw polygon N+1 -> End drawing polygon N+1
Start drawing polygon N+2 -> Draw polygon N+2 -> End drawing polygon N+2
" "
Start drawing polygon N+K -> Draw polygon N+K -> End drawing polygon N+K

Present on screen

Case B:
Batch all polys using texture 1 into a chunk of data
Batch all polys using texture 2 into a chunk of data

Set texture 1
Begin drawing all polys that use texture 1
Start drawing all polys using texture 1 in one go -> Draw texture 1 poly chunk -> End drawing texture 1 polys

Set texture 2
Begin drawing all polys that use texture 2
Start drawing all polys using texture 2 in one go -> Draw texture 2 poly chunk -> End drawing texture 2 polys

Present on screen

Result:
Massively different!
Case A will bring the most powerful system to its knees... forget your 60-100-200 fps. It's quite easy to get into single figure FPS's.
Case B, using a clever bit of batching, will zip along all day in your hundreds of FPS's.

It's all in how the data is prepared and sent to the card, which is often more important than the sheer number of polys being thrown around.
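
If it helps to see that with something closer to real API calls, here's roughly what the two cases boil down to in D3D11 terms - the triangle counts, offsets and texture views are made up, and it assumes the index buffer has been sorted so all the texture 1 triangles come before the texture 2 ones:

#include <d3d11.h>

// Case A (slow): one draw call per triangle. The CPU/driver overhead of issuing
// millions of tiny batches kills performance long before the GPU runs out of
// raw polygon-pushing power.
void DrawNaive(ID3D11DeviceContext* ctx,
               ID3D11ShaderResourceView* tex1, UINT tex1Tris,
               ID3D11ShaderResourceView* tex2, UINT tex2Tris)
{
    ctx->PSSetShaderResources(0, 1, &tex1);
    for (UINT i = 0; i < tex1Tris; ++i)
        ctx->DrawIndexed(3, i * 3, 0);                    // one triangle at a time

    ctx->PSSetShaderResources(0, 1, &tex2);
    for (UINT i = 0; i < tex2Tris; ++i)
        ctx->DrawIndexed(3, (tex1Tris + i) * 3, 0);
}

// Case B (fast): same geometry, same textures, but only two big batches -
// one draw call per texture.
void DrawBatched(ID3D11DeviceContext* ctx,
                 ID3D11ShaderResourceView* tex1, UINT tex1Tris,
                 ID3D11ShaderResourceView* tex2, UINT tex2Tris)
{
    ctx->PSSetShaderResources(0, 1, &tex1);
    ctx->DrawIndexed(tex1Tris * 3, 0, 0);

    ctx->PSSetShaderResources(0, 1, &tex2);
    ctx->DrawIndexed(tex2Tris * 3, tex1Tris * 3, 0);
}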
 

Darren S

ClioSport Club Member
It's not too bad if your 3D geometry is authored with tessellation in mind... and if it is it's not so much of a problem for the coder as tessellation is performed by the hardware. Which is nice. That's why it is absolutely necessary to have a DX11 capable GPU to get hardware tessellation with DX at the moment. It was slated for release in DX10 but sadly never quite made it.

You basically supply a set of control points which define a patch; the patch being passed to a 'hull' shader (a hull is basically the shell of a 3D model). The hull shader determines the level of tessellation (i.e. how much additional geometry to generate, i.e. vertices) and this is passed onto the tessellator that generates the actual geometry. This is then further processed by another shader (the 'domain' shader) which calculates vertex positions, texture coordinates, etc. and then passes it onto the geometry shader for further processing and ultimately (down the line) rasterising; blatting it to screen. Gotta love modern hardware advances!

Modern cards can handle enormous polygon counts these days D. High tessellation certainly does have an impact but the numbers even a modest card can handle are pretty impressive. It's not so much the number of polys (up to a point)... it's the number of 'batches' of data that get sent to the card that determines performance, along with the number of state and texture switches. The more data you can send to the card with the fewer interactions, the better. If that makes sense. If not, take the following example...

In both examples I am rendering a 10 million polygon model with two basic textures and a simple light (using an i7 core and 480 GTX). This is a very simplified example though...

Case A:
Set texture 1
Begin drawing all polys that use texture 1
Start drawing polygon 1 -> Draw polygon 1 -> End drawing polygon 1
Start drawing polygon 2 -> Draw polygon 2 -> End drawing polygon 2
" "
Start drawing polygon N -> Draw polygon N -> End drawing polygon N

Set texture 2
Begin drawing all polys that use texture 2
Start drawing polygon N+1 -> Draw polygon N+1 -> End drawing polygon N+1
Start drawing polygon N+2 -> Draw polygon N+2 -> End drawing polygon N+2
" "
Start drawing polygon N+K -> Draw polygon N+K -> End drawing polygon N+K

Present on screen

Case B:
Batch all polys using texture 1 into a chunk of data
Batch all polys using texture 2 into a chunk of data

Set texture 1
Begin drawing all polys that use texture 1
Start drawing all polys using texture 1 in one go -> Draw texture 1 poly chunk -> End drawing texture 1 polys

Set texture 2
Begin drawing all polys that use texture 2
Start drawing all polys using texture 2 in one go -> Draw texture 2 poly chunk -> End drawing texture 2 polys

Present on screen

Result:
Massively different!
Case A will bring the most powerful system to its knees... forget your 60-100-200 fps. It's quite easy to get into single figure FPS's.
Case B, using a clever bit of batching, will zip along all day in your hundreds of FPS's.

It's all in how the data is prepared and sent to the card, which is often more important than the sheer number of polys being thrown around.

Yikes. I actually understood all that! It did take three reads, mind! ;)

Good stuff! Makes sense now how game patches can sometimes promote frame-rate increases and optimisations with relatively little code...

D.
 
  Clio 172 PH2 Flamer
Just popped a Fermi card in the system - the new GTX 460 - got to say I'm very impressed; will be slotting 2 in there soon. Not half bad for 130 quid I think :)
 

[attachment: heaven.jpg]
  Evo 5 RS
I think there are a couple of Nvidia demonstrations for download which show it off. It's been around for YONKS but the hardware has not really been there. It's still not now tbh - frame rate still takes a pretty big hit. The hardware is only as good as the lazy goons who work with it.

Metro 2033 in its full glory is a classic example of this. It's choppy as sin

http://forums.nvidia.com/index.php?showtopic=168280
 
  Clio 172 PH2 Flamer
The 1090T @ 4GHz is running at 29°C idle and 42°C under full load from Prime on a Xigmatek Dark Knight HDT cooler - I was rather impressed - will have a full custom WC loop installed soon :) - glad I've found some geekdom here :)
 
  ford cougar 2.0 16v
yeh but the new PowerColor ATi can be nicely overclocked and kept cool. but the nvidia card nearly made me switch back.
 

