Fair enough, I've obviously misread it! Cheers for clarifying!
No worries mate.
Don't get me wrong, I'm a MASSIVE nVidia fan and really believe the 1070 and 1080 could do very well. There's a lot of performance potential in those impressive clock speeds, and hopefully the pricing remains sensible on release here in the UK. For example, a 1070 at 300-something quid with 980Ti / Titan X levels of performance is not to be sniffed at, especially when it's also drawing a lot less power. That said, if you extrapolate the data and theoretically run a Titan X / 980Ti at the quoted clock speeds of the 1080, the performance gap isn't the 'quantum leap' that was announced.
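To put a rough number on that extrapolation, here's the back-of-envelope sum I'm doing in my head: shader count x boost clock x 2 ops per FMA, using the announced 1733MHz boost for the 1080 and the published core counts. Real games obviously don't scale purely with clocks, so treat this as ballpark only:

```python
# Back-of-envelope FP32 throughput: CUDA cores x boost clock (MHz) x 2 (FMA = 2 ops).
def tflops(cores, boost_mhz):
    return cores * boost_mhz * 2 / 1e6

gtx_1080 = tflops(2560, 1733)               # ~8.9 TFLOPS
titan_x  = tflops(3072, 1075)               # ~6.6 TFLOPS (Maxwell)

# The same Maxwell chip hypothetically run at Pascal's quoted boost clock:
titan_x_at_1080_clock = tflops(3072, 1733)  # ~10.6 TFLOPS

print(f"{gtx_1080:.1f} vs {titan_x:.1f} ({titan_x_at_1080_clock:.1f} clock-normalised)")
```

Clock-for-clock the raw numbers don't scream 'quantum leap'; most of the headline gain looks like it comes from the clock speeds the new process allows.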
I'm just not as hyped up after reading between the lines and looking closer at the details of the products. I'm a little disappointed that the 256-bit bus and the GDDR5X are only delivering 320GB/s of bandwidth (not shabby, but I would have liked a bit more), and I don't like how the press release presents skewed graphs showing some really ambiguous performance and efficiency results. Some things just don't add up once you look at the specs, and a lot of it is just carefully worded b****cks to build up the hype (which is obviously the whole point). I just think people are expecting some massive, massive leap in performance and I just don't see that happening with this product. It's impressive, no doubt, but the claims just seem OTT. I look forward to seeing some real-world benchmarks. I'm also hoping to get my hands on an early card for 'review' as I still have a couple of friends who work at nVidia and I've cheekily asked them if I could have a review sample! LOL! I daresay 'NO' will be the answer but what the heck... :tonguewink:
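For anyone wondering where that 320GB/s comes from, it's just bus width times the GDDR5X per-pin data rate; the irritation is that a 980Ti's wider bus already manages more:

```python
# Theoretical memory bandwidth: bus width (bits) x per-pin data rate (Gbit/s), bits -> bytes.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

gtx_1080  = bandwidth_gbs(256, 10)  # 256-bit GDDR5X @ 10Gbps -> 320 GB/s
gtx_980ti = bandwidth_gbs(384, 7)   # 384-bit GDDR5  @ 7Gbps  -> 336 GB/s

print(gtx_1080, gtx_980ti)
```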
The larger performance gains they're pushing seem very much linked to VR (even when you look at the specs of the test rig used to produce said metrics), so it will be very interesting to see how non-VR stuff works out, i.e. typical modern games. If early benchmark leaks are to be believed, the 1080 has roughly a 20-25% performance advantage over a 980Ti in current titles (Rise of the Tomb Raider, The Witcher 3, etc.). To be honest, that is the level of performance I would expect, maybe going higher as drivers improve and newer games take advantage of new Pascal features. I just cannot for the life of me see the quoted 2x Titan X performance happening in the real world. I'm prepared to eat my words, but having worked with GPUs day in, day out for over 20 years, and with nVidia for 2-3 years, I just can't see it happening.
I'm also a bit miffed with some of the other technology announced at their recent event, such as Ansel and VRWorks Audio. The world's first this, the world's first that... I'd contest some of those claims. The Ansel camera thing is basically an extension that has to be supported by the developer. It's trying to provide a unified camera system that sits on top of the camera systems built into each title's game engine. I get that. But the developer has to do extra work to support it. nVidia say it has driver-level support for the feature... ok, so what? The driver doesn't know what part of the world the user is looking at; that's the job of the game's rendering engine. The rendering engine has to hook into Ansel at some level so that, as the user controls the view through Ansel, the engine knows to update the position of the viewport in the world and what exactly it should be drawing.

And whilst I'm on my soapbox: the crowd all whooping at the ultra-high resolutions of Ansel's output. Jeez... that's such basic, old tech. Virtual viewport snapshots. I was taking 20,000 x 20,000 pixel screenshots for magazine articles, etc. on the Nintendo 64 back in 1998!
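For anyone curious how you grab a 20,000 x 20,000 shot on hardware that can't render anywhere near that: split the virtual viewport into tiles, render each tile with an off-centre (asymmetric) frustum, and stitch the results together. A minimal sketch of the frustum maths (function name mine, assuming a standard perspective projection):

```python
import math

def tile_frustum(fov_y_deg, aspect, near, tiles, tx, ty):
    """Near-plane bounds (l, r, b, t) for tile (tx, ty) of a tiles x tiles grid."""
    top = near * math.tan(math.radians(fov_y_deg) / 2)
    right = top * aspect
    w, h = 2 * right / tiles, 2 * top / tiles   # per-tile extents
    l = -right + tx * w
    b = -top + ty * h
    return l, l + w, b, b + h

# e.g. a 20,000 x 20,000 mosaic from 10 x 10 tiles of 2,000 x 2,000 pixels:
for ty in range(10):
    for tx in range(10):
        l, r, b, t = tile_frustum(60.0, 1.0, 0.1, 10, tx, ty)
        # render with a glFrustum(l, r, b, t, near, far)-style projection
        # into a 2,000 x 2,000 off-screen buffer, then copy into the mosaic
```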
And, and, and... LOL... VRWorks Audio. They aren't the world's first to use ray tracing to derive convolution filters for modulating audio in a 3D environment. That's been around for years and featured on plenty of popular sound cards. I just don't think it was heavily used because it was (at least originally) so cumbersome and expensive, hence developers tended to roll their own simpler audio / environment interaction routines. To my eyes they're just taking existing tech, giving it a new name, running it on a GPU and calling it a world first!
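The core idea is plain geometric acoustics: find the propagation paths from source to listener, turn each arrival into a delayed, attenuated impulse, and convolve the dry signal with the resulting impulse response. A toy sketch of the principle (made-up path lengths and absorption, purely illustrative):

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 48_000     # Hz

def impulse_response(path_lengths_m, absorption=0.3, length_s=0.5):
    """Each path arrives later and quieter: 1/distance spreading plus
    per-bounce absorption, accumulated into one impulse response."""
    ir = np.zeros(int(length_s * SAMPLE_RATE))
    for bounces, dist in enumerate(path_lengths_m):
        delay = int(dist / SPEED_OF_SOUND * SAMPLE_RATE)
        if delay < len(ir):
            ir[delay] += (1.0 / max(dist, 1.0)) * (1 - absorption) ** bounces
    return ir

# Direct path plus a few reflections (lengths you'd get from tracing rays):
ir = impulse_response([5.0, 9.2, 12.7, 18.4])
dry = np.random.randn(SAMPLE_RATE)   # stand-in for one second of dry audio
wet = np.convolve(dry, ir)           # the 'room' applied to the sound
```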
Ok, I'm calm now.
Rant over. For the record, I'm still a big nVidia fan, but as I get older I find the utter bullshit harder to ignore. :tonguewink:
I look forward to seeing Pascal in Ti variants on 12GB or 16GB HBM / HBM2 memory clusters... that would be nice. And expensive.