AMD Vega to Get 20K Units Released at Launch and New Zen+ Chatter
sammarbella
PrMinisterGR
here at 42:20.
HBM is in another class of performance from GDDR5X, both for latency and for effective bandwidth. It's not even close.
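To put rough numbers on the bandwidth half of that (peak theoretical figures only, using Fury X and GTX 1080 specs as the reference points; treat it as a sketch, not a benchmark):
[code]
# Peak theoretical bandwidth = bus width (bits) / 8 * effective data rate (Gbps)
def peak_bw_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps  # GB/s

print(peak_bw_gbs(4096, 1.0))   # Fury X, HBM1  -> 512.0 GB/s
print(peak_bw_gbs(256, 10.0))   # GTX 1080, GDDR5X -> 320.0 GB/s
[/code]
Latency is harder to reduce to a single number, but the bandwidth gap alone makes the point.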
I'm sure there will be a Vega card in the 1070 performance bracket; the deciding factor is going to be the price. Judging by the initial rumored availability (which could be wrong), the mainstream part of Vega will most likely come later.
AMD has had, and still has, a huge perception issue. YouTubers and tech writers are doing a better job of explaining how AMD's tech works and progresses over time than AMD does. Look at chips like Hawaii, as an example. AMD dug a hole for its own product by initially not investing in drivers for it (which was the stupidest thing they could ever have done), and by packaging it with subpar cooling solutions. The same product that was "destroyed" by Maxwell is still faster than it, despite being a year older.
Same story with the Fury. They promised OC headroom for the Fury X, which was a mistake, and they didn't promote the vanilla Fury at all, despite it being in the same price range as the 980 and destroying it in performance.
They have a real, big marketing problem.
Listen
Denial
Discrete GPU sales in 2006 were 85M units. Discrete GPU sales in 2015 were only 44M.
Nvidia claims Pascal cost them "several billion". Their R&D budget is ~$350M per quarter, so about $1.4B per year, and Pascal took two years. I doubt all of that went to Pascal itself, so let's just assume Pascal cost $2B.
https://venturebeat.com/2016/05/06/nvidia-launches-pascal-based-consumer-pc-graphics-chipps-360-degree-photo-art-and-3d-audio-for-vr/
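Spelling that arithmetic out (the two-year window and how much of the budget was actually Pascal's are my assumptions, not reported figures):
[code]
# Back-of-the-envelope Pascal R&D estimate, in $M, per the numbers above.
rd_per_quarter_m = 350                  # ~$350M per quarter
rd_per_year_m = rd_per_quarter_m * 4    # $1,400M = ~$1.4B per year
two_year_total_m = rd_per_year_m * 2    # $2,800M = ~$2.8B over two years
pascal_share_m = 2000                   # assumed ~$2B of that was Pascal itself
print(two_year_total_m, pascal_share_m) # 2800 2000
[/code]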
Cost per transistor stalled for basically everything after 28nm (http://www.eetimes.com/author.asp?section_id=36&doc_id=1329887). It isn't making GPUs cheaper, yet GPUs have more and more transistors each year.
So basically Nvidia is charging roughly the same price as it did for its GPUs in 2007, the overall market is only half the size, the R&D budget has increased ~3x, and they don't even have the benefit of transistor cost scaling.
AMD has it even worse, because HBM and watercooling on its consumer cards eat into its margins. The analyst takeaway after the giant stock drop the other day (I lost $5,000 on it myself) was basically that AMD's margins are bad and they burned through 30% of their remaining liquid cash. And yet people want AMD to release a 1080 Ti-class watercooled HBM monster of a card for like $500...
It's also why I laugh when people say Titan X/Tesla/Quadro cards are overpriced. Those margins are essentially subsidizing the cost of consumer GPUs. Every company that buys a DGX-1 with eight $10,000 GPUs in it is basically letting you buy your GeForce card as cheaply as you do. It's the whole reason Nvidia branched into all these high-margin markets in the first place.
Edit: That also brings me to the resolution argument. For the past several years you needed the absolute best single-chip card to play the latest titles at 1080p. 1080p covers 90% of the gaming market, so basically 90% of gamers had to pay $500+ to enjoy the newest games at reasonable framerates. Now that's covered by ~$250 1060/480/580s, which also have lower margins. And with Volta/Navi, or whatever the Polaris replacement is, it's going to get even more ridiculous: it's going to be like $150 for a 1080p card capable of handling every game at the highest settings - especially now that the consoles are pushing 4K and effectively pausing graphics progress, as the new horsepower is going to be spent entirely on resolution.
Eventually people will switch to 4K as the monitors come down in price. But 90% of the market isn't switching overnight.
The overpriced GPU argument is boring.
The 8800 GTX cost $600 at launch in 2007. The architecture had an R&D budget of $475M.
PrMinisterGR
Denial
Fox2232
PrMinisterGR
rm082e
Loophole35
PrMinisterGR
[spoiler]http://imgur.com/jepDjrU.png[/spoiler]
Here's an image of a TSMC 130nm wafer, used for the 8800 GTX, from the Techpowerup review of the card.
[spoiler]http://imgur.com/rrs2QyR.jpg[/spoiler]
That wafer contains 118 dies. Sure, it would be cheap to make today, but it wasn't back then. In contrast, a modern TSMC 16nm wafer is 300mm in diameter; it holds 180 GP104 dies and has similar build costs.
Let me quote an analysis by the investment website The Motley Fool.
That chip costs less than $50 to make. Obviously this isn't the whole deal, since you have the running costs of the company, the PCB itself and the extra components, profit margins for everyone in the chain, etc. But in no way does it end up "like the old cards were priced" at $700. That's a $250-$300 GPU there.
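If you want to sanity-check that figure yourself, the back-of-the-envelope math looks like this; the wafer price and yield here are my own assumptions, not numbers from the analysis:
[code]
wafer_cost_usd = 6000.0   # assumed price of a 300mm 16nm wafer
dies_per_wafer = 180      # GP104 dies per wafer, from the count above
yield_rate = 0.9          # assumed fraction of usable dies

cost_per_good_die = wafer_cost_usd / (dies_per_wafer * yield_rate)
print(f"~${cost_per_good_die:.0f} per good die")  # ~$37, in the sub-$50 range
[/code]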
A small historical sample of NVIDIA's profit margin should put that theory to rest. Their margin in 2006 was 18%, and it currently sits at 31%. There are a lot of ways to streamline and increase it, but the major one is giving you (comparatively) less for more.
So yeah, I get it market-wise. Just don't tell me it's like it used to be, because it isn't.
TL;DR: My issue isn't that NVIDIA is upselling. Any company that could get away with it would do the same. My issue is people saying that NVIDIA isn't upselling, and that prices used to be the same for similar manufacturing costs.
/babyrage
That's why I said that my opinion about it is completely subjective 🙂
I still feel like I'm getting robbed. It's a feeling, not an objective truth. Your argument is obviously correct.
I remember the days when there were half a dozen companies or more: ATi, NVIDIA, 3dfx, VideoLogic/PowerVR, Rendition, Intel (anyone remember the i740?), S3...
What we have now is a travesty.
This kind of calculation is so superficial, man. You're forgetting a lot of things.
First: complex PCBs were harder and more expensive to make back then. A PCB for a 256-bit memory interface cost much more to produce in 2006 than it does in 2017.
Second: the actual production cost of a chip is a function of the chip's size, the wafer's size, and the maturity of the process. Take a look at the size of the G80 chip versus the size of GP104.
G80 on top.
As you can see from the PCIe connectors, the cards are basically to scale.
rm082e
[spoiler]https://s-media-cache-ak0.pinimg.com/originals/f6/d0/60/f6d060b26b75ec4362112f957fe20bc1.jpg[/spoiler]
*sigh*
grndzro7
https://www.youtube.com/watch?v=m5EFbIhslKU&t=3s
https://www.reddit.com/r/Amd/comments/69voz3/opinion_if_vega_performance_regardless_of_the/dh9wh5m/
Those should fill you in on why Vega is such a big deal.
Nvidia has abused its TWIMTBP program by locking AMD out of game optimizations, withholding critical code, adding in stupid stuff like tessellated walls and tessellated terrain below water level, and cheating on graphics quality to dominate benchmarks.
It isn't. The rumors started when Hynix pulled its 2GHz HBM2 from its website, because AMD has an agreement for all of Hynix's top-end HBM. There is no basis for the HBM2 shortage rumor.
I wrote a blurb based on that.
fry178
I'm not saying NV isn't trying to make the most from selling their products, but if you look past the top-end model (which is neither the largest market share nor an indicator of price increases) and compare prices for the 2nd- and 3rd-fastest cards over the past 15 years, I don't see it.
Besides that, check how much prices for bread/groceries went up over the past 10-15 years.
How many of you complaining about NV milking are happy that we're paying more for lower-quality food products?
Cutting costs there will save you more than spending $100-200 less on a GPU you keep for years...
Denial
http://www.eurogamer.net/articles/2013-03-07-nvidia-apologises-to-tomb-raider-pc-players-plagued-by-geforce-issues
Or maybe you're referring to this?
Nvidia got it less than a week before shipping, AMD got it two months before shipping... or did they?
https://www.youtube.com/watch?v=-i8K5M98eME
Weird. Here is a demo of Hairworks in The Witcher 3 a year before the game was released. It's possible the code was added a few months before launch, but we wouldn't know, because when Project Cars came out this thread was made:
http://www.reddit.com/r/pcmasterrace/comments/367qav/mark_my_word_if_we_dont_stop_the_nvidia_gameworks/
And AMD's Richard Huddy responded on Twitter:
"Thank for supporting/ wanting an open and fair PC gaming industry." to the thread.. which of course blew up everywhere about how Nvidia was sabotaging performance again.. well it turns out AMD doesn't check in very often with developers:
https://arstechnica.com/gaming/2015/05/amd-says-nvidias-gameworks-completely-sabotaged-witcher-3-performance/
So maybe the code was added two months before release, or maybe AMD only checked it two months before release. All I know is that the source for Hairworks is now available, and last time I checked there was no major performance gain from any driver AMD released in response to it. It turns out the performance issue was just the poor geometry performance of their architecture, and not some intentional ploy, as Richard Huddy suggested with this statement:
"We were running well before that... it's wrecked our performance, almost as if it was put in to achieve that goal."
Also, the Crysis 2 water bull**** has been debunked a hundred times. When you turn on wireframe mode, it disables the culling, so geometry shows up that the game never actually renders. Both the CryEngine developers and people who mod the engine have stated this.
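If it helps, here's a toy sketch (every name in it is made up for illustration) of why a wireframe debug view shows geometry the normal render path never draws:
[code]
# Toy example: a debug wireframe pass that bypasses visibility culling.
meshes = [
    {"name": "terrain", "visible": True},
    {"name": "ocean_plane_below_ground", "visible": False},  # normally culled
]

def render(wireframe_debug):
    drawn = []
    for mesh in meshes:
        # In wireframe debug mode the culling step is skipped, so hidden
        # geometry appears even though the game never spends time drawing it.
        if wireframe_debug or mesh["visible"]:
            drawn.append(mesh["name"])
    return drawn

print(render(False))  # ['terrain']
print(render(True))   # ['terrain', 'ocean_plane_below_ground']
[/code]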
As for the tessellated walls:
Here is the default Crysis 2 setting: http://abload.de/img/fulltessellation2jrki.jpg
Here is AMD's "Optimized" setting:
http://abload.de/img/amdreducedtessellatioclrrn.jpg
So much for it not being necessary. Similar issues happen when you "optimize" tessellation with Nvidia's Godrays: areas around dense objects like fences end up looking like crap.
Don't get me wrong, Nvidia has done its fair share of crap - but recently there has been a ton of misinformation and nonsense spread about stuff like this, and I'm tired of reading it.
I'm not asking why Vega is such a big deal. I understand all the technical improvements made to the architecture. Solfaur said that 20K is nothing. I don't know if I agree with that, but w/e. MorganX responded "You gotta start somewhere when you innovate", implying that some innovation of AMD's is limiting its supply to 20K.
You mean like when AMD added TressFX code into Tomb Raider (2013) at the last minute and screwed up Nvidia's performance?
Redemption80
Yeah, I thought the general consensus these days was that the Crysis 2 tessellation scandal was just lies made up by AMD fanboys and people with no understanding of culling.
The misinformation is what puts me off switching to AMD; it just feels like a distraction used to push AMD GPU sales. Even things like async compute have been hijacked and are now just a marketing gimmick.
PrMinisterGR
Believing that AMD are the "good guys" is just as blind as not believing that NVIDIA is fleecing everyone with current prices for the hardware on offer.
Just a couple of years ago AMD locked everything below a 7970 out of VSR, for no reason at all. There were people with HD 4000/5000/6000 cards using it just fine. They even attempted to cut it from the 7000 series with some cr*p about "missing scalers", which was proven wrong by people like me running 7970s modded into 280Xs, and by the simple fact that the feature literally worked on everything. Even the current limits on it are artificial.
On top of that, no official AMD rep is on this forum anymore; everyone has moved into the walled garden of ignorance that is the official AMD forums, a place where you get shadowbanned if your technical knowledge is enough to call them on their bullsh*t.
They never acknowledged how bad their DX11/OpenGL driver is/was.
They did the same in the past with supersampling AA support on older cards, and gave equally cold explanations along with truths about "the market".
They are as sh*tty as NVIDIA, or worse; they just lack the market share to step on their clientele the same way. My only argument for them is that after all this time having to promote and support open software (because there was no other way), they most likely have a more open and collaborative company culture than NVIDIA, because for them it was adapt or die.
HeavyHemi
Stormyandcold
Redemption80
PrMinisterGR