AMD Vega to Get 20K Units Released at Launch and New Zen+ Chatter

I definitely could have worded my post better, but ... This. Specifically, gimping GPUs to an extreme extent, IMO, to maximize profits due to a monopoly keeps the lowest common denominator very low, which in turn prevents developers from targeting a more capable baseline. The same goes for Intel and CPU cores. To say that their monopolies stifle innovation does not necessarily mean they themselves are not innovative.
Nvidia is not to blame for the lack of competition from AMD in the GPU market, just as Intel was a de facto monopoly in desktop CPUs for the last five years due to sub-par AMD CPU performance. Competition should mean selling competitive products at similar performance and a lower price, forcing innovation and better specs for customers. If the best AMD can do is provide a similarly performing product at a similar or even higher price (Fury X, I'm looking at you), that is not real competition. If the Ryzen "refresh" really provides a 15% improvement in IPC (both meanings... :D) over the initial versions, that could force Intel to "innovate" and sell better-performing CPUs at lower prices. I bought an Nvidia Shield TV 2015 in late 2016; Nvidia relaunched the EXACT same product, minus a couple of ports, a few months ago (in 2017): same specs, same SoC, same price. It's still the best media box on the market after two years. Who is forcing Nvidia to improve on the best when there is no competition? When you sell your product without competition, you are selling the best product customers can... buy. If Vega's performance is similar to the 1070 and its price is similar, it will be another "innovative" AMD HBM fail.
Can anyone explain why AMD insists on HBM? Is, say, GDDR5X on a 512-bit bus not enough? Fury did not seem to benefit from HBM much.
Listen here at 42:20. HBM is another class of performance from GDDR5x, both for latency and for effective bandwidth. It's not even close.
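To put rough numbers on the bandwidth half of that claim, here is a minimal back-of-the-envelope sketch in Python. The bus widths and per-pin data rates are typical published figures assumed purely for illustration (they are not from this thread), and raw peak bandwidth does not capture the latency or power side of the argument.

```python
# Rough peak-bandwidth comparison between a GDDR5X setup and an HBM2 setup.
# The bus widths and per-pin data rates below are typical published figures,
# assumed here purely for illustration; they are not taken from the thread.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gb/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# A GTX 1080-class GDDR5X config: 256-bit bus at ~10 Gb/s per pin.
gddr5x = peak_bandwidth_gb_s(256, 10.0)

# Two HBM2 stacks (1024 bits each) at ~2 Gb/s per pin.
hbm2 = peak_bandwidth_gb_s(2 * 1024, 2.0)

print(f"GDDR5X, 256-bit @ 10 Gb/s: {gddr5x:.0f} GB/s")   # ~320 GB/s
print(f"HBM2, 2x1024-bit @ 2 Gb/s: {hbm2:.0f} GB/s")     # ~512 GB/s
```

Under those assumed figures the wide, stacked bus comes out well ahead on raw throughput; the latency and power-per-bit claims are a separate discussion this sketch does not cover.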
Nvidia is not to blame for the lack of competition from AMD in the GPU market, just as Intel was a de facto monopoly in desktop CPUs for the last five years due to sub-par AMD CPU performance. Competition should mean selling competitive products at similar performance and a lower price, forcing innovation and better specs for customers. If the best AMD can do is provide a similarly performing product at a similar or even higher price (Fury X, I'm looking at you), that is not real competition. If the Ryzen "refresh" really provides a 15% improvement in IPC (both meanings... :D) over the initial versions, that could force Intel to "innovate" and sell better-performing CPUs at lower prices. I bought an Nvidia Shield TV 2015 in late 2016; Nvidia relaunched the EXACT same product, minus a couple of ports, a few months ago (in 2017): same specs, same SoC, same price. It's still the best media box on the market after two years. Who is forcing Nvidia to improve on the best when there is no competition? When you sell your product without competition, you are selling the best product customers can... buy. If Vega's performance is similar to the 1070 and its price is similar, it will be another "innovative" AMD HBM fail.
I'm sure that there will be a Vega card in the 1070 performance bracket. Everything will come down to price. Judging by the initial rumored availability (which could be wrong), the mainstream part of Vega will most likely come later. AMD has had, and still has, a huge perception issue. YouTubers and tech writers are doing a better job of explaining how their tech works and progresses over time than they do. Look at chips like Hawaii, as an example. AMD dug a hole for their own product by initially not investing in drivers for it (which was the stupidest thing they could ever have done) and by packaging it with subpar cooling solutions. The same product that was "destroyed" by Maxwell is still faster than it, despite being a year older. Same story with the Fury. They promised OC headroom for the Fury X, which was a mistake. They didn't promote the vanilla Fury at all, despite it being in the same price range as the 980 and destroying it in performance at the same time. They have a true, big marketing issue.
Saying that Nvidia stifles innovation might seem like a bit too much, but when all the things you list are put behind an inflated price tag, it doesn't sound that far-fetched. Another thing that might contribute to that uneasiness about Nvidia is the fact that they lagged behind in async compute, DX12 and Vulkan while being the "bigger" company, instead (ab)using that power to manually tune drivers for each game (which works, but is not innovative at all). I don't hate Nvidia, but I personally dislike it when they put effort into software or "secondary" things as a way to "lock" gamers/academia/etc. into their "not cheap at all" hardware, instead of focusing on the GPU itself.
The overpriced GPU argument is boring. The 8800 GTX cost $600 at launch in 2007. The architecture had an R&D budget of $475M. Discrete GPU sales in 2006 were 85M units. Discrete GPU sales in 2015 were only 44M. Nvidia claims Pascal cost them "several billion". Their R&D budget is ~$350M per quarter, so about $1.4B per year, and Pascal took two years. I doubt all of it is Pascal itself, so let's just assume Pascal cost $2B. https://venturebeat.com/2016/05/06/nvidia-launches-pascal-based-consumer-pc-graphics-chipps-360-degree-photo-art-and-3d-audio-for-vr/

Cost per transistor for basically everything after 28nm stalled (http://www.eetimes.com/author.asp?section_id=36&doc_id=1329887); it isn't making GPUs cheaper, yet GPUs have more and more transistors each year. So basically Nvidia is charging roughly the same price as it did for its GPUs in 2007, the overall market is only half the size, the R&D budget has increased ~3x, and they don't even have the benefit of transistor cost scaling.

AMD has it even worse, because HBM and watercooling on its consumer cards eat into its margins. The analyst takeaway after the giant stock drop the other day (I lost $5000 on it myself) was basically that AMD's margins are bad and they burned through 30% of their remaining liquid cash. And yet people want AMD to release a 1080 Ti-class, watercooled, HBM, monster-sized card for like $500...

It's also why I laugh when people say Titan X/Tesla/Quadro cards are overpriced. Those margins are essentially subsidizing the cost of consumer GPUs. Every company that buys a DGX-1 with eight $10,000 GPUs in it is basically allowing you to buy your GeForce card as cheaply as you do. It's the whole reason why Nvidia branched into all these different high-margin markets in the first place.

Edit: That also brings me to the resolution argument. For the past several years you needed the absolute best single-chip card in order to game in the latest titles at 1080p. 1080p covers 90% of the gaming market. So basically 90% of gamers had to pay $500+ to enjoy the newest games at reasonable framerates. Now that is covered by ~$250 1060/480/580s, which also have lower margins. And with Volta/Navi or whatever the Polaris replacement is, it's going to get even more ridiculous: it's going to be like $150 for a 1080p card capable of handling every game at the highest settings, especially now that the consoles are pushing 4K and effectively pausing graphics, as the new horsepower is going to be used entirely for the resolution. Eventually people will switch to 4K as the monitors come down in price. But 90% of the market isn't switching overnight.
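As a rough illustration of that argument, here is a quick sketch that spreads the architecture R&D figures quoted above over the yearly discrete-GPU unit numbers; treat it as back-of-the-envelope arithmetic under those assumed figures, not accounting.

```python
# Back-of-the-envelope: R&D dollars to recover per discrete GPU sold,
# using the rough figures from the post above (not audited numbers).

def rnd_per_unit(rnd_budget_usd: float, units_sold_per_year: float) -> float:
    """R&D spend spread over one year of discrete GPU unit sales."""
    return rnd_budget_usd / units_sold_per_year

g80_era = rnd_per_unit(475e6, 85e6)     # ~$475M budget, ~85M units (2006)
pascal_era = rnd_per_unit(2e9, 44e6)    # ~$2B assumed,  ~44M units (2015)

print(f"G80 era:    ~${g80_era:.2f} of R&D per unit")     # ~$5.6
print(f"Pascal era: ~${pascal_era:.2f} of R&D per unit")  # ~$45
```

Roughly an 8x jump per unit under these assumptions, which is the point about a smaller market carrying a much bigger R&D bill.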
The overpriced GPU argument is boring. The 8800 GTX cost $600 at launch in 2007. The architecture had an R&D budget of $600M. Discrete GPU sales in 2006 were 85M units. Discrete GPU sales in 2015 were only 44M. Nvidia claims Pascal cost them "several billion". Their R&D budget is ~$350M per quarter, so about $1.4B per year, and Pascal took two years. I doubt all of it is Pascal itself, so let's just assume Pascal cost $2B.

http://www.eetimes.com/author.asp?section_id=36&doc_id=1329887 Cost per transistor for basically everything after 28nm stalled; it isn't making GPUs cheaper, yet GPUs have more and more transistors each year. So basically Nvidia is charging roughly the same price as it did for its GPUs in 2007, the overall market is only half the size, the R&D budget has increased ~3x, and they don't even have the benefit of transistor cost scaling.

AMD has it even worse, because HBM and watercooling on its consumer cards eat into its margins. The analyst takeaway after the giant stock drop the other day (I lost $5000 on it myself) was basically that AMD's margins are bad and they burned through 30% of their remaining liquid cash. And yet people want AMD to release a 1080 Ti-class, watercooled, HBM, monster-sized card for like $500... ok, I guess if you want the company to go into a death spiral, that's fine.

It's also why I laugh when people say Titan X/Tesla/Quadro cards are overpriced. Those margins are essentially subsidizing the cost of consumer GPUs. Every company that buys a DGX-1 with eight $10,000 GPUs in it is basically allowing you to buy your GeForce card as cheaply as you do. It's the whole reason why Nvidia branched into all these different high-margin markets in the first place.

Edit: That also brings me to the resolution argument. For the past several years you needed the absolute best single-chip card in order to game in the latest titles at 1080p. 1080p covers 90% of the gaming market. So basically 90% of gamers had to pay $500+ to enjoy the newest games at reasonable framerates. Now that is covered by ~$250 1060/480/580s, which also have lower margins. And with Volta/Navi or whatever the Polaris replacement is, it's going to get even more ridiculous: it's going to be like $150 for a 1080p card capable of handling every game at the highest settings, especially now that the consoles are pushing 4K and effectively pausing graphics, as the new horsepower is going to be used entirely for the resolution. Eventually people will switch to 4K as the monitors come down in price. But 90% of the market isn't switching overnight.
I don't know if you have any experience with corporate finance, or finance at all, but take all those "R&D" amounts with huge grains of salt. Companies that design chips have standard running costs that they can present however they like for marketing purposes. AMD needs roughly $600M to run for a quarter; NVIDIA probably needs somewhat more than that. Whenever they present you with "R&D" costs, they simply add up their running costs for the months it took to produce a chip. The purpose of the whole company is R&D anyway. Don't buy too much into it. They just have to cover their running costs from sales of the new architecture and that's it. There is no initial investment to currently recuperate; this has been going on since the first product each of them launched.
I don't know if you have any experience with corporate finance, or finance at all, but take all those "R&D" amounts with huge grains of salt. Companies that design chips have standard running costs that they can present however they like for marketing purposes. AMD needs roughly $600M to run for a quarter; NVIDIA probably needs somewhat more than that. Whenever they present you with "R&D" costs, they simply add up their running costs for the months it took to produce a chip. The purpose of the whole company is R&D anyway. Don't buy too much into it. They just have to cover their running costs from sales of the new architecture and that's it. There is no initial investment to currently recuperate; this has been going on since the first product each of them launched.
Well yeah, it's more complicated than I'm making it out to be. There are all kinds of costs that have gone down and up, and other factors I'm not even considering, but R&D costs for these companies have definitely gone up even if it's not in the ~$2B range, the discrete GPU market is definitely smaller than it was, and transistor price scaling has essentially stalled. Also, I edited my post after you quoted it, but in the edit I said the vast majority of people (90%) don't even need Vegas/Tis/1080s. It's probably why AMD is only shipping 20K Vega chips. It's also probably why they did what they did with Polaris. They knew the RX 480 would hit 90% of the market and it would be fine. Idk, I just don't see the issue with the pricing, but then again my financial status, where I live, etc. probably all play a role in my perception of it.
This is cool and all, but I just want an AM4 APU.
Hell, yeah! [spoiler]But even this time it will be a miracle to get a decent 12" notebook.[/spoiler]
Well yeah, it's more complicated than I'm making it out to be. There are all kinds of costs that have gone down and up, and other factors I'm not even considering, but R&D costs for these companies have definitely gone up even if it's not in the ~$2B range, the discrete GPU market is definitely smaller than it was, and transistor price scaling has essentially stalled. Also, I edited my post after you quoted it, but in the edit I said the vast majority of people (90%) don't even need Vegas/Tis/1080s. It's probably why AMD is only shipping 20K Vega chips. It's also probably why they did what they did with Polaris. They knew the RX 480 would hit 90% of the market and it would be fine. Idk, I just don't see the issue with the pricing, but then again my financial status, where I live, etc. probably all play a role in my perception of it.
Ah yeah, I agree completely. But pricing is a bit ridiculous, I think, especially if you're a bit of an old-timer. I might be factually wrong, but my gut tells me that the 2560-shader/256-bit-bus GTX 1080 shouldn't cost close to $700 at launch, no matter the performance. It's subjective, but I still feel like I'm getting robbed.

You have to factor relative performance into it too. I know that a computer is a computer and has a myriad of uses, and that you'll get much better graphics etc. out of it, but consider something like the Scorpio. For the same amount of money as an upper-mid-range GPU, you get a whole system that is basically plug and play and will give you true 4K games and a ton of apps on top. The total cost of ownership over time will most likely be higher if you are an avid gamer, but if you're a tiny bit more casual than that, you will have to invest at least double the money to get comparable performance from a PC. All the articles and videos comparing PC to console performance never mention the horrendous frame pacing issues of all the gimped systems they suggest, nor do they mention that there aren't any real and reliable tools to achieve that on a computer. It's no surprise that the PC market is actually declining year over year, percentage-wise.

And I'm not sh*tting on NVIDIA only. The "good" Polaris cards are on the 260+ euro side, which is the same price as a Switch, for a small chip with a 256-bit bus on a small PCB. Things have changed, and the middle class that used to be the main consumer of all this stuff has had its income squeezed over the last decade. We can't keep pretending that things are as they used to be.
GTX 1080 shouldn't cost close to $700 at launch, no matter the performance. It's subjective, but I still feel like I'm getting robbed.
It cost that much because there was no competition to force the price down. Don't complain about a publicly traded corporation trying to maximize profit - that's their Prime Directive. It's the first rule of business that influences and dictates all other decisions. It's foolish to expect a company to make less money just to keep the consumer happy. Complain that we are lacking two companies, on equal footing, who are fighting for the same customers in the same market. That's the real problem here.

If Sony were putting out the PS4 without the competition from Xbox, or vice versa, the last console launch would have been $600, with an upfront 10-year sales cycle. Why? Because they could have gotten away with it. Microsoft was able to drag out the previous generation a lot longer than anyone expected because Sony wasn't in a position to force their hand sooner. And the competition they did have was amazing. Microsoft wanted to force people into their DRM box with a mandatory Kinect connection for $500. Sony slapped them in the face with a parody video making fun of their anti-consumer vision, and came in a hundred bucks cheaper. Result? Microsoft was forced to walk back just about every major plan they had for the Xbox One due to competition.

This is what we need in the enthusiast GPU space. Thankfully, we've finally got some competition in the CPU market.
Ah yeah, I agree completely. But pricing is a bit ridiculous, I think, especially if you're a bit of an old-timer. I might be factually wrong, but my gut tells me that the 2560-shader/256-bit-bus GTX 1080 shouldn't cost close to $700 at launch, no matter the performance. It's subjective, but I still feel like I'm getting robbed.

You have to factor relative performance into it too. I know that a computer is a computer and has a myriad of uses, and that you'll get much better graphics etc. out of it, but consider something like the Scorpio. For the same amount of money as an upper-mid-range GPU, you get a whole system that is basically plug and play and will give you true 4K games and a ton of apps on top. The total cost of ownership over time will most likely be higher if you are an avid gamer, but if you're a tiny bit more casual than that, you will have to invest at least double the money to get comparable performance from a PC. All the articles and videos comparing PC to console performance never mention the horrendous frame pacing issues of all the gimped systems they suggest, nor do they mention that there aren't any real and reliable tools to achieve that on a computer. It's no surprise that the PC market is actually declining year over year, percentage-wise.

And I'm not sh*tting on NVIDIA only. The "good" Polaris cards are on the 260+ euro side, which is the same price as a Switch, for a small chip with a 256-bit bus on a small PCB. Things have changed, and the middle class that used to be the main consumer of all this stuff has had its income squeezed over the last decade. We can't keep pretending that things are as they used to be.
How about you look at it this way: 690M transistors for $800 MSRP (8800 Ultra) vs. 7.2B transistors for $700 MSRP (1080).
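Reading that comparison as a simple ratio, using only the transistor counts and MSRPs given above:

```python
# Transistors per MSRP dollar for the two cards being compared.
cards = {
    "8800 Ultra (2007)": (690e6, 800),  # ~690M transistors, $800 MSRP
    "GTX 1080 (2016)":   (7.2e9, 700),  # ~7.2B transistors, $700 MSRP
}

for name, (transistors, msrp) in cards.items():
    print(f"{name}: {transistors / msrp / 1e6:.1f}M transistors per dollar")
```

Roughly 0.9M transistors per dollar then versus about 10.3M now under those figures.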
It cost that much because there was no competition to force the price down. Don't complain about a publicly traded corporation trying to maximize profit - that's their Prime Directive. It's the first rule of business that influences and dictates all other decisions. It's foolish to expect a company to make less money just to keep the consumer happy.
That's why I said that my opinion about it is completely subjective 🙂 I still feel like I'm getting robbed. It's a feeling, not an objective truth. Your argument is obviously correct.
Complain that we are lacking two companies, on equal footing, who are fighting for the same customers in the same market. That's the real problem here.
I remember the days when there were 4-5 companies: ATi, Nvidia, 3dfx, VideoLogic/PowerVR, Rendition, Intel (anyone remember the i740?), S3... What we have now is a travesty.
How about you look at it this way: 690M transistors for $800 MSRP (8800 Ultra) vs. 7.2B transistors for $700 MSRP (1080).
This kind of calculation is so superficial, man. You are forgetting a lot of things.

First: complex PCBs were harder and more expensive to make back then. A PCB for a 256-bit memory interface was much more expensive to make in 2006 than it is in 2017.

Second: the actual production cost for a chip is a factor of the chip's size, the wafer's size, and the maturity of the process. Take a look at the size of the G80 chip and the size of GP104, G80 on top. As you can see from the PCIe connectors, the cards are basically to scale. [spoiler]http://imgur.com/jepDjrU.png[/spoiler]

Here's an image of a TSMC 90nm wafer used for the 8800 GTX, from the TechPowerUp review of the card. [spoiler]http://imgur.com/rrs2QyR.jpg[/spoiler] That wafer contains 118 dies. Sure, it's cheaper to make today, but it wasn't back then. In contrast, a modern TSMC 16nm wafer is 300mm in diameter; it contains 180 GP104 dies and has similar build costs.

Let me quote an analysis by the investment website The Motley Fool.
Coming up with relative cost estimates

Analyst Handel Jones estimates that the cost of a 16-nanometer wafer by the end of 2016 to a fabless customer like NVIDIA should come in at around $7,779.22. In contrast, a 28-nanometer wafer by the end of 2014 (by which time the process is very mature) should run a fabless customer approximately $4,577.25.

By using Silicon Edge's dies-per-wafer estimator tool, a wafer of GM200 chips should pack a total of 91 dies. A wafer of GP104 chips, on the other hand, should be able to cram in approximately 180 dies. This passes a basic sanity check because the GP104 is about half the size of the GM200.

Now, if we assume that the 28-nanometer process, by virtue of its maturity, has a very low defect density of 0.01 defects per square centimeter, then -- using iSine's die yield calculator tool -- NVIDIA should get around 76 good chips per 28-nanometer wafer. Dividing the wafer cost by the number of good dies yields a cost of $58.68 per chip.

Doing the same calculation for the 16-nanometer GP104, but assuming a defect density of 0.015 defects per square centimeter (since it is a less mature process), yields around 164 good dies per wafer. Dividing the estimated 16-nanometer wafer costs by this figure leads to a die cost estimate of $47.43.

I should note that these estimates involve a lot of guesswork and assumptions, and are done in an attempt to provide a relative cost comparison between the two chips under certain assumptions (that I believe are realistic).

NVIDIA financial implications

Based on the analysis above, it looks as though the GPU that powers the GTX 1080 may actually be cheaper to manufacture than the GM200 that powered the GTX Titan X and GTX 980 Ti flagship cards of yesteryear. In this case, it's little wonder that NVIDIA has chosen to essentially "end of life" the GTX 980 Ti in favor of the GTX 1080. The 1080 is faster, more efficient, and likely cheaper to build. It's a no-brainer for NVIDIA.

At the end of the day, it looks as though the GeForce GTX 1080 won't have a negative impact on the company's gross profit margins. Of course, if the 16-nanometer process winds up performing significantly worse from a defect-density perspective than I assumed in my calculations, the analysis could change.
That chip costs less than $50 to make. This obviously isn't the whole deal, since you have the running costs of the company, the PCB itself and the extra components, profit margins for everyone in the chain, etc. But in no way does it end up being "priced like the old cards were" at $700. That's a $250-$300 GPU there. A small historical sample of NVIDIA's profit margin should put that theory to rest. Their profit margin in 2006 was at 18% and the current one sits at 31%. There are a lot of ways to streamline and increase it, but the major one is giving you (comparatively) less for more. So yeah, I get it market-wise. Just don't tell me it's like it used to be, because it isn't. TL;DR: My issue isn't that NVIDIA is upselling. Any company that could get away with it would do the same. My issue is people saying that NVIDIA is not upselling and that prices are the same for similar manufacturing costs. /babyrage
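For anyone who wants to replay the estimate quoted above, here is a minimal sketch. The wafer prices, candidate die counts and defect densities are the ones from the quoted analysis; the approximate die areas (GM200 ~601 mm², GP104 ~314 mm²) and the simple Poisson yield model are my own assumptions, so the outputs land near, but not exactly on, the analyst's tool-based figures.

```python
# Cost-per-good-die estimate in the spirit of the quoted analysis.
# Wafer costs, candidate die counts and defect densities come from the quote;
# the die areas are approximate public figures and the Poisson yield model
# (yield = exp(-D0 * A)) is an assumption, so results won't match exactly.
import math

def cost_per_good_die(wafer_cost_usd: float, dies_per_wafer: int,
                      die_area_cm2: float, defects_per_cm2: float) -> float:
    """Wafer cost divided by the expected number of good dies."""
    yield_fraction = math.exp(-defects_per_cm2 * die_area_cm2)
    return wafer_cost_usd / (dies_per_wafer * yield_fraction)

# GM200 on mature 28 nm: ~$4,577 wafer, 91 candidate dies, ~6.0 cm^2, D0 = 0.01 /cm^2
gm200 = cost_per_good_die(4577.25, 91, 6.01, 0.01)

# GP104 on newer 16 nm: ~$7,779 wafer, 180 candidate dies, ~3.1 cm^2, D0 = 0.015 /cm^2
gp104 = cost_per_good_die(7779.22, 180, 3.14, 0.015)

print(f"GM200 estimated die cost: ${gm200:.2f}")  # ballpark of the quoted ~$59
print(f"GP104 estimated die cost: ${gp104:.2f}")  # ballpark of the quoted ~$47
```

Either way, the estimated silicon cost comes out as a small fraction of the $700 board price, which is the point being made above.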
I don't understand why HBM2 would be such a bottleneck. It's not new technology anymore, per se; it's just a development of the original HBM. Didn't they learn anything from the first one?
It isn't. The rumors started when Hynix pulled its 2GHz HBM2 from its website, because AMD has an agreement for all the top-end HBM from Hynix. There is no basis for the HBM2 shortage rumor.
What exactly is AMD doing that's so innovative with Vega? And how has Nvidia stifled innovation for years?
I wrote a blurb based on https://www.youtube.com/watch?v=m5EFbIhslKU&t=3s here: https://www.reddit.com/r/Amd/comments/69voz3/opinion_if_vega_performance_regardless_of_the/dh9wh5m/ It should fill you in on why Vega is such a big deal. Nvidia has abused its TWIMTBP program by locking AMD out of game optimizations by withholding critical code, adding in stupid stuff like a tessellated wall and tessellated terrain below water level, and cheating on graphics quality to dominate benchmarks.
I'm not saying NV isn't trying to make the most from selling their products, but if you don't look at the top-end model (neither the largest market share nor an indicator of price increases) and compare the prices of the 2nd and 3rd fastest cards over the past 15 years, I don't see it. Besides that, check how much prices for bread and groceries went up over the past 10-15 years. How many of you, complaining about NV milking, are happy we're paying more to get lower-quality food products? Cutting costs there will save you more than spending $100-200 less on a GPU that I keep for years...
I wrote a blurb based on https://www.youtube.com/watch?v=m5EFbIhslKU&t=3s here: https://www.reddit.com/r/Amd/comments/69voz3/opinion_if_vega_performance_regardless_of_the/dh9wh5m/ It should fill you in on why Vega is such a big deal.
I'm not asking why Vega is such a big deal. I understand all the technical improvements made to the architecture. Solfaur said that 20K is nothing. I don't know if I agree with that, but w/e. MorganX responded "You gotta start somewhere when you innovate", implying that some innovation of AMD's is limiting its supply to 20K.
Nvidia has abused its TWIMTBP program by locking AMD out of game optimizations by withholding critical code, adding in stupid stuff like a tessellated wall and tessellated terrain below water level, and cheating on graphics quality to dominate benchmarks.
You mean like when AMD added TressFX code into Tomb Raider (2013) at the last minute and screwed up Nvidia's performance? http://www.eurogamer.net/articles/2013-03-07-nvidia-apologises-to-tomb-raider-pc-players-plagued-by-geforce-issues Or maybe you're referring to this?
"We've been working with CD Projeckt Red from the beginning," said Huddy. "We've been giving them detailed feedback all the way through. Around two months before release, or thereabouts, the GameWorks code arrived with HairWorks, and it completely sabotaged our performance as far as we're concerned. We were running well before that... it's wrecked our performance, almost as if it was put in to achieve that goal."
Nvidia got it less than a week before shipping; AMD got it two months before shipping... or did they? https://www.youtube.com/watch?v=-i8K5M98eME Weird. Here is a demo of HairWorks in The Witcher 3 a year before it was released. It's possible the code got added a few months before it launched, but we wouldn't know, because when Project Cars came out and this thread was made: http://www.reddit.com/r/pcmasterrace/comments/367qav/mark_my_word_if_we_dont_stop_the_nvidia_gameworks/ AMD's Richard Huddy responded to the thread on Twitter with "Thank for supporting/ wanting an open and fair PC gaming industry." Which, of course, blew up everywhere about how Nvidia was sabotaging performance again. Well, it turns out AMD doesn't check in very often with developers:
"We’ve provided AMD with 20 keys for game testing as they work on the driver side," said Slighty Mad Studios' Ian Bell. "But you only have to look at the lesser hardware in the consoles to see how optimised we are on AMD based chips. We’re reaching out to AMD with all of our efforts. We’ve provided them 20 keys as I say. They were invited to work with us for years, looking through company mails the last I can see [AMD] talked to us was October of last year. Categorically, Nvidia have not paid us a penny. They have though been very forthcoming with support and co-marketing work at their instigation."
https://arstechnica.com/gaming/2015/05/amd-says-nvidias-gameworks-completely-sabotaged-witcher-3-performance/ So maybe the code was added two months before release, or maybe AMD only checked it two months before release. All I know is that the source for HairWorks is now available, and last time I checked there was no major gain in performance from any driver AMD released in response to it. It turns out the performance issue was just the poor geometry performance of their architecture, and not some intentional ploy like Richard Huddy suggested with this statement: "We were running well before that... it's wrecked our performance, almost as if it was put in to achieve that goal."

Also, the Crysis 2 water bull**** has been debunked a hundred times. When you turn on wireframe mode, it removes the culling. Both the CryEngine developers and people who mod the engine have stated this. As for the tessellated walls: here is the default Crysis setting: http://abload.de/img/fulltessellation2jrki.jpg And here is AMD's "optimized" setting: http://abload.de/img/amdreducedtessellatioclrrn.jpg So much for it not being necessary. Similar issues happen when you optimize tessellation with Nvidia's Godrays: areas around dense objects like fences look like crap.

Don't get me wrong, Nvidia has done its fair share of crap, but recently there has been a ton of misinformation and nonsense being spread about stuff like this and I'm tired of reading it.
Yeah, I thought the general consensus these days was that the Crysis 2 tessellation scandal was just lies made up by AMD fanboys and people with no understanding of culling. The misinformation is what puts me off switching to AMD; it just feels like a distraction that is used to push GPU sales for AMD. Even things like async compute have been hijacked and are now just a marketing gimmick.
Believing that AMD are the "good guys" is just as blind as not believing that NVIDIA is fleecing everyone with current prices for the hardware given. Just a couple of years ago AMD locked everything below a 7970 out of VSR, for no reason at all. There were people with HD 4/5/6000 cards using it just fine. They even attempted to cut it from the 7000 series with some cr*p about "missing scalers" that was proven wrong by people like me running 7970s modded into 280Xs, and by the simple fact that the feature was literally working on everything. Even the current limits it has are artificial. Soon after, no official AMD rep was left on this forum, and everyone moved into the walled garden of ignorance that the official AMD forums are, a place where if your technical knowledge is enough to call them on their bullsh*t you get shadowbanned. They never acknowledged how bad their DX11/OpenGL driver is/was. They did the same in the past with supersampling AA support on older cards and gave equally cold explanations along with truths about the "market". They are as sh*tty or worse than NVIDIA; they just lack the market size to step on their clientele. My only argument for them is that after all this time having to promote and support open software (because there was no other way), they most likely have a more open and collaborative company culture compared to NVIDIA, because it was an adapt or die issue for them.
Yeah, I thought the general consensus these days was that the Crysis 2 tessellation scandal was just lies made up by AMD fanboys and people with no understanding of culling. The misinformation is what puts me off switching to AMD; it just feels like a distraction that is used to push GPU sales for AMD. Even things like async compute have been hijacked and are now just a marketing gimmick.
It is amazing that this still gets put out as fact. The developer, Crytek, pointed out that culling was disabled in wireframe mode, which is the only time you'd see it.
My only argument for them is that after all this time having to promote and support open software (because there was no other way), they most likely have a more open and collaborative company culture compared to NVIDIA, because it was an adapt or die issue for them.
I agree with everything you said except the above quote. I don't think AMD are more open or collaborative. Many devs have already shared their experiences with us about AMD, and it's a lot of negative comments. For many devs, it seems like AMD doesn't really care. This situation might have changed in "very" recent times, but the fact of the matter is that Nvidia are, and have been, much more supportive of devs. If a company comes forward and answers questions and helps to resolve issues, is it any wonder that those games run well on Nvidia? I do think it comes down to money again; Nvidia can obviously afford to offer support, and even an engineer or two when needed. However, I don't think this hands-on approach by Nvidia deserves much criticism. I would love to know who those devs are who turn down the chance to get their games up and running on Nvidia... anyone?
It is amazing that this still gets put out as fact. The developer, Crytek, pointed out that culling was disabled in wireframe mode, which is the only time you'd see it.
It is, and I'm not sure if people are unaware of the follow-up facts or are just deliberately choosing to ignore them to push an opinion. TBF, if more AMD fans were like PrMinisterGR I think the company would be in a better place. They need more people who aren't spouting the good-guy nonsense, and even more people who don't just look at them as a budget brand.
I agree with everything you said except the above quote. I don't think AMD are more open or collaborative. Many devs have already shared their experiences with us about AMD, and it's a lot of negative comments. For many devs, it seems like AMD doesn't really care. This situation might have changed in "very" recent times, but the fact of the matter is that Nvidia are, and have been, much more supportive of devs. If a company comes forward and answers questions and helps to resolve issues, is it any wonder that those games run well on Nvidia? I do think it comes down to money again; Nvidia can obviously afford to offer support, and even an engineer or two when needed. However, I don't think this hands-on approach by Nvidia deserves much criticism. I would love to know who those devs are who turn down the chance to get their games up and running on Nvidia... anyone?
I mean it for things far beyond developers. The amount of open source tools, drivers and libraries they have put out there deserves a lot of respect. They recently had 11k lines of Linux driver code rejected by the kernel, and instead of gtfo'ing as a lot of companies would do, they sat down and rewrote it in ways that will even inform the Windows binaries. Due to the consoles and the way the company functions now, they have had to work with everyone from the people making the open source compilers, like GCC or LLVM, to Apple, Microsoft, Sony, Nintendo, Hynix, etc. Somewhere here we also have to mention that despite the immense space for friction left in all this, AMD doesn't seem to litigate with anyone except Intel, in a case where AMD was later proven completely correct. What I wanted to say here is that, being forced to operate like that, AMD has more or less become a more open company in general. It's a dance-with-the-devil kind of situation.