NVIDIA GeForce Titan 780 3DMark score

Yep, but looking at these numbers, the first question is what game is going to get use out of this kind of GPU power now, or even in the near future, apart from Crysis 3 of course.
With the new consoles this year you'll see graphics get pushed up a little more; just look at Unreal Engine 4, the Luminous engine and whatnot. Plus, 4K monitors are going to be all the rage soon.
So many naysayers, it's funny; usually it's those who just bought a new GPU. Deal with it :P
Deal with it? How are you getting on with that imaginary mobo you have in your specs? A good overclocker, is it? *high five* anyone? Haha :P
I'd say it's more because a jump that massive has only occurred once before, with the 8800. It's just rare, and the only evidence is a single screenshot from a Chinese website. The thing people have to remember, though, is that NVIDIA has already built the chip: it's found in the K20X. Stick that thing into a gaming card, clock it 20% faster, and you have 690 performance at 225 W. The only thing I'm skeptical about is the price; I can't see them launching a product like this at the standard $500 point.
Well yes, that's what I'm saying: a lot of people can't grasp the fact that it can be so fast, and that it will make their shiny new GK104 just a poor mid-range GPU soon. @SLI 754 it's great, the best of both worlds :P
I would say the picture itself is fake. On the other hand, there is, and always will be, a plan already on the table, with instructions and/or engineering ideas about what future cards should have. What I mean is, this card may be a simple K20X modified to "consumer" level, but what is around the corner as a new lineup we will never know. And seriously, if next-gen consoles will run Unreal Engine 4, which looks really good, imagine PC-exclusive next-gen titles that will look better, but will also suck your PC dry.

There are still games people can't max out in the current gen even with a 690 or 680s in SLI, such as The Witcher 2 with ubersampling; that game runs at around 30 FPS with those settings at 1920x1080.

By the way, I don't see Crysis 3 raising the bar or setting any standard in anything, really; the game sucks graphics-wise in my opinion, and even on Very High, without V-sync, it runs butter smooth and still looks crappy. The real future that may crush even K20X-type cards will be things like the VFX tech demo (look it up on YouTube). Now imagine that kind of graphics with hundreds of NPCs, thousands of rays, real-time shadowing like in Metro, etc., and tell me that Crysis 3 is better in any way?
I thought they'd already said that Titan wasn't the 780, as their 7 series isn't due until after AMD's 8 series.
What is your basis for THAT?
It's like Apple giving you the iPhone 7 next year with all the stuff that's missing from the previous iPhones. They won't do it, because they'd rather do it progressively and still take most of the profits every year, even without huge changes. I said Apple, but that's just an example; every company does this. A 780 being much better than cards currently priced at 1000+ dollars/euros is unrealistic, unless it costs at least the same. And looking at the claimed performance of the 780, may I ask how a 790 would perform? C'mon, this is fake. I hope not, but it is.
Nope. With 2688 cores, clocked up 20% from the current 732 MHz inside your 235 W TDP, thermals are out the window, and you still won't be close to a GTX 690, never mind more than double SLI 680s. These are BS performance numbers, without even thinking about it.
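Back-of-the-envelope, assuming peak single-precision throughput scales as cores x clock x 2 ops (FMA), and taking the GTX 690 at its 915 MHz base clock (boost ignored), the numbers bear that out; a rough Python sketch:

```python
# Rough peak SP throughput: cores * clock (GHz) * 2 ops per cycle (FMA).
def peak_sp_gflops(cores: int, clock_mhz: float) -> float:
    return cores * (clock_mhz / 1000.0) * 2

gk110_stock  = peak_sp_gflops(2688, 732)        # K20X as shipped: ~3935 GFLOPS
gk110_plus20 = peak_sp_gflops(2688, 732 * 1.2)  # +20% clock:      ~4722 GFLOPS
gtx_690      = peak_sp_gflops(2 * 1536, 915)    # dual GK104:      ~5622 GFLOPS

print(f"GK110 @ 732 MHz : {gk110_stock:.0f} GFLOPS")
print(f"GK110 +20%      : {gk110_plus20:.0f} GFLOPS")
print(f"GTX 690 (base)  : {gtx_690:.0f} GFLOPS")
```

Even with a 20% overclock, a single GK110 stays well short of a GTX 690 on paper.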
Yeah, you might be right, now that I'm actually thinking about it. I mean, the 690 and the GK110 differ by only about 13% in core count; that's not too bad. Considering the clock speeds some Kepler units are hitting, it might be possible to make up the difference there, but not without exceeding a 300 W TDP. I guess they could go over, but I don't know, and I definitely don't see them going 15% over. Not with 2688 cores. So yeah, I'm changing my opinion.
Just looking at some GK104 and GK110 specs, I noticed that GK110 has 960 DP cores (64 per SMX) that don't show up in the official CUDA core count, whereas GK104 only has 64 (8 per SMX). That would bring GK104 to 1600 total CUDA cores and GK110 to 3840, if you could use the DP logic for SP calculations. (I've never coded for a GPU, so what the heck... at least I'm trying to make sense of a score that would be roughly 2K higher than expected.) And if you take X3300, divide it by 1600, multiply by 3840, then by 0.8 for clock speeds and 1.15 for GK110's memory advantage, you get 7286, which is close to the 7107 score. Nonsensical perhaps, but if you wanted to make up a fake score, this could be the way, lol.

As for the TDP, the K20X has an HPC TDP rating, while GeForce is rated for games. The loads are completely different, with the HPC rating being comparable to running FurMark on a GeForce. A GeForce Titan with a 250 W TDP would be a 350-400 W K20XXX, lol. And yes, I'm just trying to make the best of the rumor season with this.
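If you want to check that arithmetic, here it is as a quick Python sketch. To be clear, every factor is my own guesswork from above: the "effective" core counts fold in the DP cores, and 0.8 / 1.15 are assumed clock and memory factors, not real specs.

```python
# Reconstructing the rumored X7107 score from the speculation above.
GK104_SCORE  = 3300   # X3300: known GTX 680 (GK104) 3DMark Extreme score
GK104_CORES  = 1600   # 1536 SP cores + 64 DP cores
GK110_CORES  = 3840   # 2880 SP cores + 960 DP cores
CLOCK_FACTOR = 0.80   # assumed GK110 clock deficit vs GK104
MEM_FACTOR   = 1.15   # assumed GK110 memory-bandwidth advantage

estimate = GK104_SCORE / GK104_CORES * GK110_CORES * CLOCK_FACTOR * MEM_FACTOR
print(f"Estimated score: X{estimate:.0f}")  # X7286, vs the rumored X7107
```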
I do hope this is a genuine test result for the 780. I held out on the 680s (I still have my 3x 3GB 580s) because I didn't feel the 680s would give me much of a performance gain.
@Texter I think you pretty much nailed it and how it could/will be. Also, higher ROP and texture unit counts take care of the rest 🙂
That's a really high score!!! I doubt it's true; otherwise I'd really have to get two of them. I get X8200 points with three stock-clocked GTX 670s.
"insanity now serenity later" that score is something else wow!
April Fools'! Oh wait, it's only February 1st...
Don’t believe everything you read!
Very nice point of view; maybe it's almost correct. Maybe the difference you found is a CPU bottleneck, because it's just a 2600K in there for a beast of a GPU 😉
I hope these figures are right, because if they are, it shows how awesome the 790 could be.
I ask myself what DP has to do with 3DMark... The TDP rating between Tesla and GeForce Kepler is a bit different, but much closer than what you had before with Fermi: you set a max TDP limit and the turbo works against that limit, so you basically sit close to whatever board power you've set and constrained. Tesla uses a fixed TDP that can't be exceeded; that's basically why the K20X has one SMX disabled and only provides 3.95 TFLOPS SP (which is below a plain 7970). On the other hand, the K20X allows a 1/3 DP:SP rate. The 235 W limit was fixed by Cray (2 GPUs on a rail with 4 CPUs). The core speed and other specs were chosen so the cores could be binned for the K20X used in Cray systems (and those are extremely well binned, with extremely high ASIC quality, because you are constrained to a certain TDP and don't want too much fluctuation in core speed).
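For reference, that 3.95 TFLOPS figure falls straight out of the K20X's published core count and clock, with the 1/3 DP rate as stated above; a minimal sketch:

```python
# K20X: 14 of 15 SMX enabled, 192 SP cores per SMX, 732 MHz core clock.
cores     = 14 * 192                      # 2688 CUDA cores
clock_ghz = 0.732

sp_tflops = cores * clock_ghz * 2 / 1000  # FMA = 2 ops/cycle -> ~3.94 TFLOPS SP
dp_tflops = sp_tflops / 3                 # 1/3 DP rate       -> ~1.31 TFLOPS DP

print(f"K20X peak SP: {sp_tflops:.2f} TFLOPS")  # NVIDIA rounds this to 3.95
print(f"K20X peak DP: {dp_tflops:.2f} TFLOPS")
```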