Nvidia releases Euro Suggested Retail Prices GTX 1060

[quote]Detailed synthetic testing: [spoiler] http://i.imgur.com/qCbemya.png[/spoiler][/quote]
Do you think the Beyond3D synthetic tests matter? Beyond3D works exclusively with CUDA (Nvidia tech), so bye bye.
It looks like NV is replacing the 980's place and price in the market; lots of retailers have already stopped selling 980s.
So basically faster than a 980 and the same price... Are AMD's drivers still questionable? I see a lot of hate and a lot of positives for their drivers. A friend of mine hates AMD (I've been an Nvidia fanboy for far too long now, so I can't really give him any advice), but only because he's had some real problems with their drivers over the years. Reason I ask is, he's tempted to go for a 480... Any thoughts?
They are still doing this Founders Edition nonsense? Didn't they already have enough complaints? All it does is push up the prices of all cards.
[quote]So basically faster than a 980 and the same price... Are AMD's drivers still questionable? I see a lot of hate and a lot of positives for their drivers. A friend of mine hates AMD (I've been an Nvidia fanboy for far too long now, so I can't really give him any advice), but only because he's had some real problems with their drivers over the years. Reason I ask is, he's tempted to go for a 480... Any thoughts?[/quote]
I personally have had maybe 1 or 2 major problems over the course of 7 years as an AMD user. Not that I'm an AMD fanboy, though; I've never had an Nvidia card, but I sure as hell would like me some 1070. History has proven that in long-term driver support AMD has their **** together. I remember back in the day when my 290X was barely beating a GTX 780, and now the 780 is left in the dust while my 290X is almost a GTX 980. Anyway, a 480 for $200 is a steal. Sadly, I don't really know where on earth anyone can find it for that price. Here in Bulgaria it costs 300 euros. Fkn euros, not even dollars. I mean, what the actual f**k.
[quote]It's not just DX12, it's specifically games (particularly console ports) with a focus on pixel shaders and post-processing, compute shaders included. If the game's bottleneck actually lies on the shader throughput of the card, it will be advantageous to AMD (provided it doesn't hit other bottlenecks first) because of the higher shader throughput on their cards. The RX 480 is 5.8 TFLOPS at stock; the 1060 is 5.1 TFLOPS at 2 GHz. It will be the same with Vega 10: 4096 ALUs at ~1200 MHz (conservative, after seeing Polaris) is 9,800 GFLOPS, and 1400 MHz is 11,500 GFLOPS. I think Nvidia has closed the gap somewhat with the huge clocking gains on Pascal, allowing them to continue producing relatively small, less complex dies (compared to GCN) and clocking them much higher. A 1080 at a very average 2 GHz is just over 10 TFLOPS.

But my point is that not all DX12 games will be like this, and there is nothing inherent in DX12 that lends itself better to GCN. Tomb Raider runs better on a 980 than on a 480 by quite a lot: http://www.guru3d.com/articles-pages/amd-radeon-r9-rx-480-8gb-review,10.html So the 1060 will probably be faster than it as well. In Forza Apex and Warhammer they are on par: http://www.guru3d.com/articles_pages/amd_radeon_r9_rx_480_8gb_review,12.html http://www.benchmark.pl/testy_i_recenzje/radeon-rx480-test/strona/26295.html In Hitman it's on par with a stock 980 Ti (http://www.guru3d.com/index.php?ct=articles&action=file&id=23008) but also quite a bit behind the 390X, so AMD appears to have lost a little of their advantage in compute-heavy games (same with AotS). For one reason or another, the RX 480 is performing worse relative to its compute throughput than Hawaii or Fiji; a 390X is on par with a 480 on paper.

Detailed synthetic testing: [spoiler] http://i.imgur.com/qCbemya.png[/spoiler][/quote]
Cool graph, but the 1060 is aimed at 1080p users.
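As an aside, the TFLOPS figures quoted above all follow from the standard peak-FP32 formula: shader count × 2 ops per clock (one FMA counts as two operations) × clock speed. A minimal sketch in Python, assuming the commonly reported shader counts; the Vega 10 entries use the rumored numbers from the quote, not confirmed specs:

[code]
# Peak FP32 throughput: shaders * 2 ops (one FMA = 2 ops) * clock in GHz -> GFLOPS.
def peak_gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

cards = [
    ("RX 480 @ 1.266 GHz (stock boost)", 2304, 1.266),  # ~5.8 TFLOPS
    ("GTX 1060 @ 2.0 GHz", 1280, 2.0),                  # ~5.1 TFLOPS
    ("GTX 1080 @ 2.0 GHz", 2560, 2.0),                  # ~10.2 TFLOPS
    ("Vega 10 @ 1.2 GHz (rumored specs)", 4096, 1.2),   # ~9.8 TFLOPS
    ("Vega 10 @ 1.4 GHz (rumored specs)", 4096, 1.4),   # ~11.5 TFLOPS
]

for name, shaders, clock in cards:
    print(f"{name}: {peak_gflops(shaders, clock):,.0f} GFLOPS")
[/code]

This is a theoretical ceiling only; as the Hitman/390X observation in the quote shows, real games rarely scale linearly with it.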
Suggested prices are only suggestions. Here is what prices look like in Poland (transport and tax included; 23% VAT):

RX 480 8GB
Suggested: 1170 PLN ($292, 262 EUR)
Real life: 1349 PLN ($337, 303 EUR)

As we can see, prices are not even close to the $229 advertised by AMD. And here is what I believe prices will look like for the 1060 6GB:

Suggested: 1279 PLN ($319, 287 EUR)
Real life: 1479 PLN ($369, 332 EUR)

On the good-news side, today I saw the first offer for a 480 at 1279 PLN (the starting price was 1349), so I believe reference cards will start moving down toward the suggested price to make room for the AIB cards that will hit the market in the next 2-3 weeks. The 1060 will probably be 10-15% more expensive than the 480; the question is whether it is going to be more than 10-15% faster.
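For anyone sanity-checking those conversions, a minimal sketch; the exchange rates are assumptions reverse-engineered from the figures in the post, not official values:

[code]
# Convert a VAT-inclusive Polish shelf price back to USD/EUR, and strip
# the 23% VAT to compare against AMD's advertised pre-tax MSRP.
PLN_PER_USD = 4.00   # assumed mid-2016 rate
PLN_PER_EUR = 4.45   # assumed mid-2016 rate
VAT = 0.23

def breakdown(price_pln):
    pre_tax_usd = price_pln / (1 + VAT) / PLN_PER_USD
    return (f"{price_pln:.0f} PLN = ${price_pln / PLN_PER_USD:.0f} "
            f"= {price_pln / PLN_PER_EUR:.0f} EUR (~${pre_tax_usd:.0f} before VAT)")

print(breakdown(1349))  # RX 480 8GB street price
print(breakdown(1170))  # RX 480 8GB suggested price
[/code]

Even with the VAT stripped out, the street price sits well above the US MSRP, which matches the poster's point.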
[quote]Lol, it's slower and more expensive because Nvidia charges more for bad DX12 support (sarcasm).[/quote]
They did more than charge for bad DX12 support. They charged for bad DX11 support. The last worthy *60 card was the 560 Ti. Look at the rest: 660 vs 7870, 760 vs 7870/270X (again, lol), 960 vs 380. It's not exactly a stellar record.
[quote]It's not just DX12, it's specifically games (particularly console ports) with a focus on pixel shaders and post-processing, compute shaders included. [...] Detailed synthetic testing: [spoiler] http://i.imgur.com/qCbemya.png[/spoiler][/quote]
This is a matter of a paradigm shift happening in graphics processing. Of course you can have games that don't focus on compute performance, but would it make sense? The consoles are the way they are because they use the parallel architecture of the GPU for general processing tasks while doing graphics at the same time. It could also be argued that past a certain texture/graphics limit, compute/post-processing becomes much more important for games, either for artistic purposes or if you go for a "realistic" look. As a design, GCN has been at the center of this transition, so it's not so weird to see that most games favor it, or completely negate its graphics disadvantage thanks to its compute advantage. Ever so slowly, even with Pascal, NVIDIA is trying to move to a "smarter" pipeline. My guess is that they are so careful because they know the (obvious by now) heat/power cost of investing in heavy compute, and they are trying to strike the perfect balance between graphics, compute performance, power, and clock speeds.
[quote]Do you think the Beyond3D synthetic tests matter? Beyond3D works exclusively with CUDA (Nvidia tech), so bye bye.[/quote]
Yes, yes they do. Especially if you want to take a look at the architecture; these results are quite fascinating. GCN is a bit less held back on the graphics/polygon counts now (although it's impressive what the 480 does with only 32 ROPs; if this translates into Vega, we're in for interesting times), and it's still the king of compute in its class. Personally, I liked what I saw.
If retailers stick to the actual price, the card will be very competitive, and Nvidia has more brand recognition, so (much to the annoyance of AMD fans) it never has to price things lower. The FE pricing is silly, but maybe it's an attempt to exploit the fact that some people think the most expensive one is the best.
Saying Nvidia charged for bad DX11 support is really stretching things, especially when comparing to AMD, who are trying to claw back market share by selling cheaper cards. Don't forget AMD's rebranding with your comparison either.

In terms of DX11, Nvidia has offered better support, period, regardless of any particular card's performance. AMD has only recently started improving the DX11 side of its drivers. A look at AMD card release benchmarks tells you clearly that they always seem to be a work in progress.

Also, bear in mind AMD's X versions of any particular card line, which further complicate things on AMD's side. I find Nvidia's differentiation of cards better; it's more clearly defined. There is a clear difference between the GTX 980 and GTX 980 Ti, for example. They let AIBs decide on clocks for competition and pricing differences.

If we look at both vendors' last-generation cards, Nvidia's 900 series had only 6 products (inc. Titan X), whereas AMD had 11 products (inc. Pro Duo). If we remove the highest-end products, that's double the number of products from AMD. This really muddies the water for customers on AMD's side and shows they're the ones constantly releasing products and charging more for small improvements per line.

As for power consumption to performance, it's not looking that good for the RX 480, tbh. If it translates to Vega, then Vega must be a significantly lower-clocked card; otherwise it too will be power-hungry and hot, without a doubt. At this stage, I don't have much hope for big Vega to deliver either. Chances are they will go the Fury X route and have to bundle water-cooling with it again.
[quote]Saying Nvidia charged for bad DX11 support is really stretching things, especially when comparing to AMD, who are trying to claw back market share by selling cheaper cards. Don't forget AMD's rebranding with your comparison either.[/quote]
The rebranding actually makes NVIDIA look even worse in comparison. A 7870 is still the better card literally three generations later. That's what my "bad DX11 support" was about: I meant it in terms of performance.
[quote]In terms of DX11, Nvidia has offered better support, period, regardless of any particular card's performance. AMD has only recently started improving the DX11 side of its drivers. A look at AMD card release benchmarks tells you clearly that they always seem to be a work in progress.[/quote]
NVIDIA offers less overhead and more features with their DX11 driver. Its problem seems to be that it's much more segmented than the AMD GCN driver, and that parts of it seem to fall behind ever so slowly. NVIDIA more or less admitted this with The Witcher 3, when they asked the community to "keep an eye out" for further performance regressions. I don't believe the conspiracy theories about an intentional downgrade, but it sure seems like the older cards are second-class citizens. Because they decided to stick with GCN, AMD is still providing performance increases to cards like mine (a 4.5-year-old 7970; I just got a small boost in 3DMark with this month's driver). Their cards sure seem to need around 6 months until they come around, though, which is a point FOR NVIDIA.
[quote]Also, bear in mind AMD's X versions of any particular card line, which further complicate things on AMD's side. I find Nvidia's differentiation of cards better; it's more clearly defined. There is a clear difference between the GTX 980 and GTX 980 Ti, for example. They let AIBs decide on clocks for competition and pricing differences.[/quote]
This means literally nothing. If you spend $200-$300 on something, read a review about it. And to be frank, I don't understand your qualms about the naming.
[quote]If we look at both vendors' last-generation cards, Nvidia's 900 series had only 6 products (inc. Titan X), whereas AMD had 11 products (inc. Pro Duo). If we remove the highest-end products, that's double the number of products from AMD. This really muddies the water for customers on AMD's side and shows they're the ones constantly releasing products and charging more for small improvements per line.[/quote]
Why does that matter? More products usually means more market segments covered. Where is the problem in that? Also if AMD rebrands are keeping up with newer NVIDIA stuff (see the Hawaii chips vs Maxwell with the 290/390), that translates to better value for older customers. Hell, a 290 is probably one of the best GPU investments ever made.
[quote]As for power consumption to performance, it's not looking that good for the RX 480, tbh. If it translates to Vega, then Vega must be a significantly lower-clocked card; otherwise it too will be power-hungry and hot, without a doubt. At this stage, I don't have much hope for big Vega to deliver either. Chances are they will go the Fury X route and have to bundle water-cooling with it again.[/quote]
Anything up to the 275 W mark is completely acceptable for performance cards, unless there are specific space/thermal requirements. Also, Vega won't be GloFo; it will be TSMC. We'll see how much that matters then. AMD's chart shows it with significantly lower power usage than Polaris.
It's not like "compute" is distinct from "graphics", my point is, if raw shader throughput is the requirement, then GCN tends to well provided it doesn't hit a bottleneck on the way. There's nothing stopping a 980ti matching a Fury X in AotS except clock; it needs to match those 8600gflops. Once it has, GCN really has no advantage in this respect
And if you clock Fury X's ROPs higher, then it doesn't have a graphics bottleneck either. 🤓 The whole point I was making was that the designs have different philosophies, and the GCN design seems to be the one adopted by the industry in general. NVIDIA has very good performance and very lean designs that are performing great DESPITE that, not because they are favored.
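The 8,600 GFLOPS figure in that quote makes the clock claim easy to check. A minimal sketch, assuming the reference shader counts (2816 for the 980 Ti, 4096 for the Fury X):

[code]
# What clock would a 980 Ti need to match Fury X's peak FP32 throughput?
FURY_X_SHADERS, FURY_X_CLOCK_GHZ = 4096, 1.05   # reference Fury X
GTX_980_TI_SHADERS = 2816                        # reference 980 Ti

target_gflops = FURY_X_SHADERS * 2 * FURY_X_CLOCK_GHZ    # ~8,601 GFLOPS
needed_clock = target_gflops / (GTX_980_TI_SHADERS * 2)  # ~1.53 GHz
print(f"980 Ti needs ~{needed_clock:.2f} GHz to reach {target_gflops:,.0f} GFLOPS")
[/code]

Since aftermarket 980 Tis commonly boost to around 1.4-1.5 GHz, the "nothing stopping it except clocks" claim is at least plausible on paper.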
With both new, more powerful consoles coming with AMD hardware, I think the scales will tip even more in AMD's favor in the future. AMD's cards in general seem to hold up better over time than Nvidia's cards. I think that will get even more pronounced in the coming years.
[quote]With both new, more powerful consoles coming with AMD hardware, I think the scales will tip even more in AMD's favor in the future. AMD's cards in general seem to hold up better over time than Nvidia's cards. I think that will get even more pronounced in the coming years.[/quote]
True, even Nvidia users know that; the last few generations of GPUs showed it. With DX12 on the horizon, Nvidia will need to play catch-up.
Yawn. When AMD isn't winning, there are always plenty of people telling us that it will surely change. It's been like this for years.
Nvidia has years of brand loyalty that would take years to diminish before they had to play catch-up to AMD. That's what keeps the hardcore AMD fans up at night.
[quote]True, even Nvidia users know that; the last few generations of GPUs showed it. With DX12 on the horizon, Nvidia will need to play catch-up.[/quote]
If cards didn't come out every 6 months, I would agree. As it stands, tech moves on quickly enough to negate this. Nvidia's driver development doesn't just stop either. I don't think the impact is going to be anywhere near as big as you're hoping.
DX12 on the horizon? It's already here. From now on, the game has changed.
Not fair. A few days earlier, the "suggested" price of the 6 GB variant was $249. :puke2:
[quote]A few days earlier, the "suggested" price of the 6 GB variant was $249. :puke2:[/quote]
That was without VAT. US prices are always quoted pre-tax, while the Euro SRPs include it.
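To illustrate why the two numbers diverge, a rough sketch; the exchange rate and the 19% VAT rate here are assumptions for the example, not Nvidia's actual math:

[code]
# US MSRPs are quoted pre-tax; European SRPs include VAT.
USD_PER_EUR = 1.11   # assumed mid-2016 exchange rate
VAT = 0.19           # e.g. Germany; rates vary across the EU

us_msrp = 249.0                              # pre-tax USD
eur_srp = us_msrp / USD_PER_EUR * (1 + VAT)  # VAT-inclusive EUR
print(f"${us_msrp:.0f} pre-tax is roughly {eur_srp:.0f} EUR with {VAT:.0%} VAT")
[/code]

That lands near 267 EUR before any retailer margin, so a Euro SRP noticeably above $249 is expected rather than a silent price hike.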
[quote]Yawn. When AMD isn't winning, there are always plenty of people telling us that it will surely change. It's been like this for years.[/quote]
Please tell me one AAA title that won't have DX12 this year.