6GB version of the GeForce GTX 780 Ti planned

Published by

https://forums.guru3d.com/data/avatars/m/99/99142.jpg
Eclap, I am not complaining about good value. Why did it take this long for NV and AMD to bring their prices down to a reasonable level?
7970 was selling for sub £300 when the 280x got released, 7950 for around £200. 280x was released at slightly less than the 7970, I don't see a reason to be unhappy about that.
https://forums.guru3d.com/data/avatars/m/243/243536.jpg
It'd be interesting to see how much more cash the fanboys have to fork out for 3GB of VRAM.
Better than pictures: AMD, all images; Mantle, all images; new uber driver, all images. The vanilla 290/290X is a big freaking joke. Welcome to the NVidia section, AMD boy.
https://forums.guru3d.com/data/avatars/m/228/228458.jpg
As an investor in Nvidia, I would like to thank you for spending your hard-earned cash. :thumbup:
https://forums.guru3d.com/data/avatars/m/99/99142.jpg
When the 280x was released, you're right. The 7970 was brought down for the same reason. Eclap, when did you buy your 7950s? Fall of 2013 or earlier? I got my value in the 760 I have; do you feel the same about your 7950s?
I bought the first one (Vapor-X) in April, for £265 or so. The 2nd one in November for £135. Yes, I think I got awesome value with both. I could easily sell both for £250+ each right now.
https://forums.guru3d.com/data/avatars/m/99/99142.jpg
Funny thing here is that the R9 280X now sits at around $420 US rather than the $330 it came out at. Due to chip shortages? I think it's time AMD and NV each found a different chip maker than TSMC.
The prices are up because of coin mining. Nothing to do with AMD, it's the retailers. 7950 were selling for £350 on eBay a few weeks back.
https://forums.guru3d.com/data/avatars/m/99/99142.jpg
You can run 770 SLI on a 750w psu, no problem.
https://forums.guru3d.com/data/avatars/m/237/237771.jpg
Then you're saying this is wrong: http://www.realhardtechx.com/index_archivos/Page362.htm
Wow, they are claiming that a 770 alone takes an extra 225W; that seems a bit high. If you have a quality 750W unit you can run SLI 770s, that much is true. That site was just advising a healthy amount of headroom.
https://forums.guru3d.com/data/avatars/m/228/228458.jpg
Headroom on a PSU is like a condom. Id rather have it and not need it, than need it and not have it. 🤓
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
The VRAM argument is all about mainstreaming. If enough people get 12GB of VRAM in the next 2 months, developers will start to use such amounts of memory, and next year we'll have games which run like crap on poor 2/3/4GB VRAM cards. We do not have such cards available to the mainstream, therefore no sane developer will make a game with such requirements. Someone simply has to make the move towards higher amounts of VRAM, and it's the HW manufacturers, supported by market demand.

People with 3 screens and high resolutions were a minority for a very long time. 4K monitors may change that during 2014 if they drop to a reasonable price. And that's the breaking point where developers will start making games which use 5/6GB of VRAM. We already have a few dozen games with texture pools larger than 2GB; the question is whether more than 2GB of textures are needed to render a single frame:
- no = the required textures will be cached once, causing a hitch, and rendering will be fluent until another huge set of data is required
- yes = for every frame some resources have to be fetched, which decreases performance based on the time it takes to get that data from system memory.

The "no" case is the usual one and most have been through it: moving forward in an FPS game is fluent, turning around causes slowdowns as resources are transferred, and then moving in the new direction is OK again. For the "yes" case, someone could record their experience with tri-fire HD5870 1GB, as that has a lot of performance and a very low amount of VRAM. Considering that each card has to get those new resources separately over limited PCIe bandwidth, there should be negative scaling.
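The no/yes distinction above can be sketched in a few lines. This is my own toy illustration, not from the post; the function name and thresholds are made up, but the logic mirrors the argument: a large texture pool alone only causes one-off caching hitches, while a per-frame working set larger than VRAM forces sustained PCIe fetches.

```python
# Toy classifier for the streaming scenarios described above.
# All names and numbers here are illustrative assumptions.

def streaming_case(texture_pool_gb, working_set_gb, vram_gb):
    """Classify VRAM pressure: pool size vs. per-frame working set."""
    if working_set_gb > vram_gb:
        # "yes" case: data needed every frame doesn't fit in VRAM
        return "yes: per-frame fetches over PCIe, sustained fps drop"
    if texture_pool_gb > vram_gb:
        # "no" case: whole pool doesn't fit, but each area's textures do
        return "no: occasional hitch while textures cache, then fluent"
    return "everything resident: no streaming at all"

# A 3GB card with a 4GB texture pool but a 1.5GB working set only hitches:
print(streaming_case(texture_pool_gb=4.0, working_set_gb=1.5, vram_gb=3.0))
# The same card with a 3.5GB working set streams every frame:
print(streaming_case(texture_pool_gb=4.0, working_set_gb=3.5, vram_gb=3.0))
```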
https://forums.guru3d.com/data/avatars/m/99/99142.jpg
Yes, a healthy amount of headroom for the rest of the PC. I'm more about preventative maintenance than throwing caution to the wind. This is quoting right from the same Guru3D page that Eclap just showed me: "Above, a chart of relative power consumption. Again the Wattage shown is the card with the GPU(s) stressed 100%, showing only the peak GPU power draw, not the power consumption of the entire PC and not the average gaming power consumption." In other words, I agree, PhazeDelta.
What are you on about? I did say a single GTX 770 consumes around 200W, didn't I? Why quote the single GTX 770 power draw from that article for me when I already told you what it is? Erm, learn to read, maybe? There's this too: "System Wattage with GPU in FULL Stress = 304 Watt". That's an overclocked 3960X rig with a single GTX 770. Add another GTX 770 at around 200W power draw and you're looking at roughly 505W at 100% load. So yeah, 750W is enough.
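The arithmetic in that post is worth writing out. A quick sketch, using the thread's own figures (304W measured full-stress system draw with one GTX 770, ~200W per extra card); the 750W rating and the variable names are just for illustration.

```python
# PSU headroom estimate for GTX 770 SLI, using the numbers quoted above.

single_card_system_load = 304   # W: full-stress system draw, one GTX 770 (from the review)
per_card_draw = 200             # W: approximate draw of one GTX 770 (thread's figure)
psu_rating = 750                # W: the PSU under discussion

# Adding a second card adds roughly one more card's worth of draw.
sli_system_load = single_card_system_load + per_card_draw

headroom = psu_rating - sli_system_load
utilization = sli_system_load / psu_rating

print(f"SLI full-stress load: {sli_system_load} W")
print(f"Headroom on a {psu_rating} W unit: {headroom} W ({utilization:.0%} utilized)")
```

The sum comes to ~504W, comfortably inside a quality 750W unit, which is the point both posters end up agreeing on.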
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
Fox, you just showed me something I've talked about before. When Maxwell comes out, NV wants to unify memory totally, not just make it a stat in Windows WEI or dxdiag, etc. Why do we see Sony thinking ahead of the PC and using GDDR5 as its unified memory? I would wish that the companies who submit proposals to JEDEC would propose GDDR5 instead of DDR4: match the performance of the GPU's memory and cut out the BS so unified memory can actually be a benefit, at least for the NV user, until AMD comes up with something similar.
DDR4 chips are low power consumption; GDDR5 is power hungry. Sony used GDDR5 as unified memory for the GPU/CPU because they use an APU and want to get maximum performance from it. I like the idea of scalable addressability. I've been thinking for some time about an A10-7850K as a development platform, but to my knowledge it would have to clock to 5GHz to match an i5-2500K at 4.5GHz. So I may pull the trigger later on an hUMA notebook, as they promised Kaveri there too in mid 2014.

nVidia plans on unifying memory across multiple GPUs. That's nice, but it will require a much faster bus than PCIe 3.0, and optimally it would have to run directly between the GPUs. It may need a new, huge SLI bridge developed, or it will only work for dual-chip boards. In both cases, sharing memory would be pretty expensive for the end user due to the additional lanes/transistors. I believe it may be easier to make the GPUs dumb block workhorses and make a 3rd chip which connects them together as memory controller, cache and scheduler, so the OS sees them as one chip, something along the lines of the Intel C2D. It may look hard, but AMD is pretty close: they managed to pull off a 512-bit bus for the R9 290(X) while using around 20% fewer transistors for the memory controller compared to the 384-bit HD7970/50. I like this possibility too, but getting there requires not only the idea but the right approach, as some parts have to be shared, and sharing too much or too little will have a negative effect on performance, cost or both.

And btw, on memory: I have high hopes for Hybrid Memory Cube technology, as it's already low power draw per Gbit and can transfer data pretty fast, plus the platform doesn't have to mess with timings and stuff. I guess it will be in the first servers in 2015 (even though it looks ready now) and consumers will get it in 2016.
https://forums.guru3d.com/data/avatars/m/223/223735.jpg
Cool, although I think it's somewhat pointless with Maxwell just around the corner.
https://forums.guru3d.com/data/avatars/m/216/216490.jpg
Cool, although I think it's somewhat pointless with Maxwell just around the corner.
On the contrary. From what I'm hearing we are still way off from Maxwell, possibly Q4 2014/early 2015. I hope I'm wrong so we get it sooner, but in another way I hope I'm not, because I just received my 780 Ti like half a week ago.. :P
https://forums.guru3d.com/data/avatars/m/237/237771.jpg
On the contrary. From what I'm hearing we are still way off from Maxwell, possibly Q4 2014/early 2015. I hope I'm wrong so we get it sooner, but in another way I hope I'm not, because I just received my 780 Ti like half a week ago.. :P
They can take the time they need; I'm not hurting for performance on either of my systems. I'd rather they get it right out of the box and not have another phantom 680 incident.
https://forums.guru3d.com/data/avatars/m/124/124168.jpg
^^^lol.
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
20nm has been working for quite some time, but as presented by nVidia's critics, it's far from economical, and both NV and AMD decided not to spend money again on being first to get there.
https://forums.guru3d.com/data/avatars/m/237/237771.jpg
Something you guys didn't notice: a GTX 760 is one half of a GTX 780 (1152×2=2304). So, somehow the GTX 780 Ti got more cores than expected. One possibility is that the GTX 860 (the rumors showed 860 and 870 models coming first in Feb) is half a GTX 780 Ti at 1440 cores. Don't know what the 870 would be. You have to remember the smart marketing side of Nvidia, at least for the 760: it was originally priced under a GTX 780 to give users a choice of one expensive card at a time or two cheap ones. Well, that went out fast at Xmas.
A 760 (1152 CUDA cores, or 6 SMX clusters) has half the cores of the 780 (2304 CUDA cores, or 12 SMX clusters); the 780 Ti has 2880, or 15 SMX clusters. Plus there are differences in ROPs. The GK110 has a total of 15 SMX clusters on die: the Titan had one disabled, the 780 had 3 disabled, and the 780 Ti is a full GK110. Also, the 760 is on the GK104 chip, which has only 8 SMX clusters on die but two of them disabled. Edit: On a side note, learn to edit your post if you want to add something. One post after another is getting annoying and making me think you are post farming.
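The counts in that post all follow from one constant: each Kepler SMX holds 192 CUDA cores. A quick sanity-check sketch (the dictionary labels are mine, the SMX counts are from the post):

```python
# Kepler CUDA core counts derived from enabled SMX clusters.
CORES_PER_SMX = 192  # CUDA cores per Kepler SMX

enabled_smx = {
    "GTX 760 (GK104, 2 of 8 SMX disabled)":   6,
    "GTX 780 (GK110, 3 of 15 SMX disabled)":  12,
    "GTX Titan (GK110, 1 of 15 SMX disabled)": 14,
    "GTX 780 Ti (full GK110)":                 15,
}

for name, smx in enabled_smx.items():
    print(f"{name}: {smx} SMX -> {smx * CORES_PER_SMX} CUDA cores")
```

This reproduces the 1152/2304/2688/2880 figures and shows why the 760 is exactly half a 780 in cores but not half a 780 Ti.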
https://forums.guru3d.com/data/avatars/m/47/47825.jpg
Maybe a PITA, but he's not trolling, just trying to up his post count instead of using the edit button.