AMD on the Road: takes Radeon RX Vega to the Gamers

Efficiency and clocks are more or less limited by the GloFo 14nm process; after all, it was designed around low-power architectures and then cranked up to high power usage. I expect Navi on 7nm, built for high power, will fix that.
No... just no. The 980 Ti and Fury X are built on the same 28nm TSMC process. The GTX 780 Ti and GTX 750/Ti are built on the same process as well, and they are all worlds apart when it comes to perf/W. The fact that both the process and the architecture (as well as the implementation; for example, 1080 FE perf/W >> custom 1060) are responsible for a product's efficiency should be obvious, same as:
And your statement that anyone can create a TFLOPS-class product is clear bull****. You need a lot of resources and experience to be able to create a CPU/GPU, and that's without even mentioning walking across the licence/patent minefield.
No ****... I thought that was an obvious hyperbole :3eyes:
Efficiency is a metric people look at when two cards are more or less equal. Let's just assume that RX Vega performs roughly equivalent to a 1080 and the price comes in at $499. On performance/price it roughly compares to the 1080 (I know miners are driving prices up at the moment; let's ignore that), so what other things can we look at to determine which card is the better purchase? That's when efficiency and value-add features like FreeSync/Ansel/etc. come into play. The 1080 uses almost 100W less than what we expect RX Vega to use. I game roughly 3-4 hours a day, probably more on weekends. That's ~$18 a year where I live, so if I hold my card for two years, that makes the 1080 roughly $40 cheaper. On the flip side, if I'm buying a new monitor, a comparable FreeSync display is over a hundred dollars cheaper than a G-Sync one. So depending on whether or not you already have a G-Sync display, how much you game, etc., people are going to value different aspects of cards.

Then, as Noisiv said, it also gives an indication of future/architectural performance. A good example is Pascal: GP102 is currently only ~480mm2. Nvidia has built larger chips, all the way up at ~600mm2; the limit here is power consumption. Nvidia doesn't like to cross the ~250-300W barrier with their reference models. So basically Nvidia is stuck at 480mm2, at the given frequency/core count, unless they improve power efficiency. Nvidia claims Volta's FP32 CUDA cores are 50% more efficient than Pascal's. Theoretically this will allow them to push a 600mm2 Volta card operating at roughly the same frequencies as the current Ti models.

As far as HBM2, I don't think it's publicity or advertising; I think it's AMD's lack of funding for multiple SKUs. HBM2 is almost certainly necessary in cloud computing/deep learning/etc., so AMD needs their top-tier card to support it, as Nvidia's does. The problem is that AMD can't afford to split their lineup like Nvidia. GP100 is radically different from GP102: not only HBM2, but even the CUDA cores, cache hierarchy, etc. are all different. For AMD to follow that, it would require them to design/fab/validate completely separate products. So instead they opt to basically sell their "GP100" variant as a consumer card. Margins are lower because of this, but the upfront cost and time to market are drastically reduced. That being said, there are people who think RX Vega will be slightly different architecturally. I haven't seen any rumors/information that support this claim, and historically AMD hasn't split its consumer/workstation/compute cards to the same degree that Nvidia has.
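To sanity-check the running-cost argument above, the numbers can be dropped into a short script. This is a minimal sketch: the 100W delta and 3.5 hours/day come from the post, while the electricity rate (about $0.14/kWh, chosen so the result lands near the quoted ~$18/year) is an assumption you would swap for your local tariff.

```python
# Rough running-cost comparison for a ~100 W difference in power draw while gaming.
# The rate below is an assumed tariff, not a quoted figure.

def yearly_cost_usd(delta_watts: float, hours_per_day: float, usd_per_kwh: float) -> float:
    """Extra electricity cost per year for a given extra power draw."""
    kwh_per_year = (delta_watts / 1000.0) * hours_per_day * 365
    return kwh_per_year * usd_per_kwh

per_year = yearly_cost_usd(delta_watts=100, hours_per_day=3.5, usd_per_kwh=0.14)
print(f"~${per_year:.0f} per year, ~${2 * per_year:.0f} over two years")
# With these assumptions: roughly $18 per year, $36 over two years.
```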
I think so too. In the end, efficiency depends on the workload and is only comparable if that basis is the same, so the question is how well the clock and power gating can keep power down for unused parts of the chip.
Efficiency = performance, because if you have ample efficiency there is nothing stopping you from trading that efficiency for higher clocks, i.e. for higher performance. Efficiency is the name of the game, and for the last several years Nvidia has been doing nothing but figuring out how to move bits around using the least amount of energy. Efficiency + perf/mm2 = everything
I don't disagree that GCN is becoming a bit inefficient by today's standards. There is some truth to "efficiency=performance", but where you're a bit misled is in thinking that better efficiency always opens the door to pushing the hardware harder for more performance; it often does, but definitely not always. For example, Ryzen has better efficiency per core than a similarly clocked Kaby Lake but can't overclock nearly as high. From another perspective, in many OpenCL tasks AMD GPUs often get better performance-per-watt than Nvidia, but worse performance-per-watt in gaming (regardless of which GPU gets the higher framerate). The architecture, the silicon quality, and the transistors themselves play more of a role in how fast you can push something than the efficiency of the design.

As a side note: AMD GPUs tend to heat up a lot more under synthetic benchmarks, but with v-sync on in a normal game, they still have worse performance-per-watt than Nvidia, just not to the point where it's worth noting. My GPU, for example, is known to reach 300W under FurMark, but it tends to remain below 250W during a normal gaming session.
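For readers who want the performance-per-watt comparisons above in concrete terms, here is a minimal sketch of the metric itself. The frame rates, throughput figure, and board powers are hypothetical placeholders, not benchmark results.

```python
# Performance-per-watt is just useful work divided by the power drawn while doing it.
# All numbers below are hypothetical placeholders.

def perf_per_watt(performance: float, watts: float) -> float:
    return performance / watts

# Gaming: frames per second per watt for two hypothetical cards.
print(perf_per_watt(90.0, 180.0))    # card A: 0.5 fps/W
print(perf_per_watt(85.0, 250.0))    # card B: 0.34 fps/W

# Compute: throughput (e.g. GFLOPS) per watt; the same card can rank differently
# here than in games, which is the point being made above.
print(perf_per_watt(8000.0, 250.0))  # card B: 32.0 GFLOPS/W
```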
I am sure you'll agree that Ryzen is an oddball when it comes to OC. For the sake of the argument, imagine if a custom Vega RX came in at ~375W of real in-game consumption, and let's assume it equals a 1080 Ti FE @ 250W. Which is more likely to be the faster card when OC-ed?
For the sake of the argument, imagine if a custom Vega RX came in at ~375W of real in-game consumption, and let's assume it equals a 1080 Ti FE @ 250W. Which is more likely to be the faster card when OC-ed?
Well yeah, obviously the 1080 Ti will be faster. But that's also a very pessimistic and biased outlook. Again, I don't disagree that something like a 1080 or 1080 Ti has more potential; I'm just saying your statement is a little too cut-and-dried, and there are a lot more variables involved than "efficiency" and "perf/mm2". Ultimately your point still stands, I'm just saying be careful not to generalize too much.
From everything we've seen, Vega FE is at the moment(!) slower in games than the 1080 FE, while consuming 280W and downclocking itself to ~1440MHz. So it's easily 300W+ at 1600MHz. Knowing this, does 375W for a custom Vega RX seem far-fetched? And then I've added something like ~25% to its per-clock performance so that Vega RX equals the 1080 Ti FE, LOL; I've even assumed perfect scaling. How is that a "very pessimistic and biased outlook"? 😕 Never mind that I even said "for the sake of the argument imagine if"; this hypothetical scenario does not look out of this world at all.
To my recollection, Vega FE is not unanimously slower than the 1080 FE. Much like the Titan series, to my understanding, Vega FE also spends more transistors on things like double-precision floats. An RX Vega is likely going to offer better performance-per-watt than Vega FE for gaming purposes (but worse for workstation tasks). 375W sounds very far-fetched: unless RX Vega offers 3x 8-pin connectors, I don't see how a single GPU could consume that much power, let alone do so without being hazardous. Wattage does not scale linearly with clock rate. It might in theory (I haven't actually checked), but in practice the laws of thermodynamics kick in; Intel's i9 series is a good example of this. Some chips get better performance-per-watt as clock rates increase, some get worse. Again, you're not considering enough variables.
Wattage scales linearly with clocks UNTIL... until more voltage is needed, and then it scales with something like clocks*voltage^2. 2x 8-pin + PCIe = 375W, which does not mean that 375W is actually the maximum power available; it's the recommended(!) maximum. Remember the reference RX 480 and >75W over the PCIe slot? Vega FE has **** double precision. And I fail to see what sample variance has to do with the discussion at hand 🙂
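As a rough illustration of that rule of thumb (roughly linear in clock until extra voltage is needed, after which the voltage-squared term takes over), here is a minimal sketch. The baseline point and the voltages are assumptions chosen only to show the shape of the curve, not Vega measurements, and the model ignores leakage and temperature.

```python
# Dynamic-power rule of thumb: P ~ P_base * (f / f_base) * (V / V_base) ** 2
# Leakage, temperature, and workload effects are ignored, so treat the output
# as a rough estimate rather than a prediction.

def scaled_power(p_base: float, f_base: float, v_base: float, f: float, v: float) -> float:
    return p_base * (f / f_base) * (v / v_base) ** 2

# Assumed baseline: 280 W at 1440 MHz and 1.00 V (the voltage is a guess).
p0, f0, v0 = 280.0, 1440.0, 1.00

# Same voltage, higher clock: close to linear in clock.
print(round(scaled_power(p0, f0, v0, f=1600.0, v=1.00)))  # ~311 W

# Higher clock that also needs more voltage: the V^2 term dominates.
print(round(scaled_power(p0, f0, v0, f=1600.0, v=1.10)))  # ~376 W
```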
Hmm, there are definitely some changes that are going to occur for RX, but I don't think it's going to be too different from FE. FE's double precision is already cut to 1/16th of FP32. Mixed math is inherent to the CUs, so that's also likely to remain with RX; AMD stated they wanted to use the mixed math to accelerate their game libraries like TressFX. HBCC/tile rasterization are both supposedly disabled. I don't know how HBCC would affect power, but it could help minimum performance if/when the card is bandwidth-starved. Tile rasterization slightly improves performance as the card spends less time moving data from memory, and it also slightly lowers power. So if those are currently disabled, we should see some performance increase directly from the features, plus an additional increase due to any lower power consumption.

In the end AMD can obviously sell the card cheaper to make it competitive. No matter how bad hardware is, cost is the ultimate factor. But by building expensive/complex hardware and then selling it at a lower price, AMD is only hurting itself. Analysts already consider AMD's margins really low for the tech industry, and Vega underperforming relative to its manufacturing cost only makes things worse.

I think AMD is banking on 7nm for both Zen and Vega. Zen, architecturally, is in a perfect position. It's limited to 4GHz by the process; keep Zen the same and clock it to 4.5GHz at linear power scaling and it completely blows Intel's products out of the water. 7nm is going to allow that to happen, and Zen+ is also the second iteration of the design, which historically has always had the largest IPC gains for a new architecture. Barring AMD screwing anything up, I think Zen+ is going to be really, really good. Vega is also going to be good at 7nm, but I think AMD faces different competition from Nvidia than from Intel. Intel is competing through litigation and marketing at this point; Nvidia is actually innovating at an incredible pace. I think the market is big enough, though, that AMD can find room, even if their products are slightly behind Nvidia's in the various metrics people use to gauge which product is better. Navi will most likely bring the TR/Epyc/Infinity Fabric design to GPUs, and Nvidia is taking the same approach. That's when I think AMD is going to have an advantage, as they've basically been heading in this direction for the past decade with heterogeneous computing.
Wattage scales linearly with clocks UNTIL... until more voltage is needed, and then it scales with something like clocks*voltage^2.
Again, theoretical scaling isn't the same as what happens in practice. Unless you provide the equation, there's not much point continuing to discuss this. Regardless, you don't have to change the voltage for the scaling to skew; heat output will change the equation. EDIT: I understand this may sound nit-picky, but when you consider the sheer number of transistors, the already high wattage, and the high clocks, the rate at which wattage increases becomes a lot more chaotic. If we were talking about something like an i3 on water cooling, then yeah, the wattage scaling is going to be pretty linear.
2x 8-pin + PCIe = 375W, which does not mean that 375W is actually the maximum power available; it's the recommended(!) maximum. Remember the reference RX 480 and >75W over the PCIe slot?
Exactly, 375W is not the actual maximum power available. It isn't the recommended maximum (that would be 362W, where you get 288W from the 2x 8-pins and 74W from PCIe) but the industry standard maximum. It is possible to exceed 375W, though it is frowned upon.
And I fail to see what sample variance has to do with the discussion at hand 🙂
The point is it affects wattage.
I've lost you completely, tbh... There is no such thing as a theoretical power equation for an IC. power ~ clocks*voltage^2 is an approximate empirical relation that is usable under a very limited scope of circumstances, merely a good starting point.

You keep repeating "Wattage does not scale linearly with clock rate." as if I had claimed that is always the case. As a matter of fact, power DOES scale linearly with clocks - AT BEST. In practice it often scales worse, sometimes much worse, especially past the clock/power sweet spot (which AMD lately has no trouble passing), and especially closing in on the maximum OC. How this, or your claim of power not scaling linearly, helps our Vega RX... I have no idea. I WISH VEGA POWER CONSUMPTION SCALED LINEARLY WITH CLOCKS PAST 1600MHz! There, I said it.

And how the **** did you draw me into this discussion when all I said was: let's imagine a 375W custom AIB Vega. Which was a simple, for-the-sake-of-the-argument, yet not out-of-this-world assumption. Now I need to provide a whitepaper on this, or else I am very pessimistic and biased? And what about equating it to the 1080 Ti, biased also? But let's see you try: knowing that a 1440MHz Vega draws 280W, how much would you assume a custom OC-ed 1700MHz Vega might draw? Negative zero?
The point is it affects wattage.
Sample variance affects the wattage... yes, and?? You might wanna talk about the specific golden chip; I am interested in volume averages.
You started it. 😉 Other than that, AMD please release SOMETHING already.
No this time I'm innocent! 😀 I actually said this afterwards:
I was thinking more about serving the market, UK PC gamers, not whether AMD wants to build a new business department there. Gamers are still gamers, even after Brexit.
So I meant that, politics aside, there will probably be more customers ready to attend such an event in London than in Budapest. At least that's what I thought.
Thanks for explaining, Denial. Although I have to say I was talking more about electrical efficiency (power). Sure, you're right about that $40 on the bill, but then again, we're enthusiasts... it's our hobby, so I personally don't tend to think about electricity bills too much. And here it's probably even more than at your place (Austria has rather costly electricity compared to Germany, for example). Just from the point of view that every hobby costs money. But of course you are right; it's subjective thinking in my case. As for HBM2, you are probably right about the cost of bringing out different SKUs. I just feel that AMD would have done their customers a greater favour by releasing the cards half a year earlier with GDDR5X than later with HBM2, which arguably is not that huge of a performance gain right now. Maybe they should have done their refresh with HBM2, but that is only a point if HBM2 is delaying Vega at all. If Vega is coming now because they couldn't have had the chips half a year earlier (just an example), that's a wholly different story.
There is no such thing as a theoretical power equation for an IC. power ~ clocks*voltage^2 is an approximate empirical relation that is usable under a very limited scope of circumstances, merely a good starting point.
You kind of just admitted yourself that it is inaccurate. Even without considering temperature, that equation doesn't tell the whole picture. Take high-school physics for example. They'll tell you Earth's gravity is 9.8m/s^2, which is true, but the equations you're told to solve don't account for air resistance, terminal velocity, starting velocity, air density, and so on. So when the teacher asks "how fast will the penny have moved by the time it hits the ground?", multiplying the distance by 9.8m/s^2 will give you a very, very wrong answer. Processor wattage is no different.
You keep repeating "Wattage does not scale linearly with clock rate." as if I had claimed that is always the case. As a matter of fact, power DOES scale linearly with clocks - AT BEST.
Ok, and that's why I pointed out to you that you're being a little too general about what you're expecting. The fact of the matter is, when you push hardware as hard as AMD has pushed Vega, the equation gets very complicated. Again, consider the disproportionate wattage of an overclocked i9.
In practice it often scales worse, sometimes much worse, especially past the clock/power sweet spot (which AMD lately has no trouble passing), and especially closing in on the maximum OC. How this, or your claim of power not scaling linearly, helps our Vega RX... I have no idea.
I never said the scaling would work in favor of Vega. In fact, I wouldn't be surprised if it works against it. But as bad as it could get, I still think you may be over-estimating. We don't know enough about the wattage per transistor. Also just to clarify, is the 280W you referred to TDP or the actual measured wattage under full load? Because advertised TDP is a real crappy way to calculate wattage, for any product.
And how the **** did you draw me into this discussion when all I said was: let's imagine a 375W custom AIB Vega. Which was a simple, for-the-sake-of-the-argument, yet not out-of-this-world assumption. Now I need to provide a whitepaper on this, or else I am very pessimistic and biased? And what about equating it to the 1080 Ti, biased also?
375W is not a good number to have, and it is a number you came up with based on a loose equation. You brought it up as a way to express how inefficient you felt the architecture was. You then compared it to a 1080Ti, something the product isn't advertised to compete against. That sounds pretty pessimistic to me.
But let's see you try: knowing that a 1440MHz Vega draws 280W, how much would you assume a custom OC-ed 1700MHz Vega might draw?
Well, given the variables I have (so no temperature, no fan speeds, no voltage, etc.), the equation is left to be:
((1-(S/O))*A)+A = B
where S is the stock frequency, O is the overclocked frequency, A is the stock wattage, and B is the final overclocked power draw.
((1-(1440/1700))*280)+280 = B
((1-0.85)*280)+280 = B
(0.15*280)+280 = B
42+280 = 322
So that's a 53W difference from your 375W figure, without considering other variables that may improve or worsen wattage. This is the difference between nearly exceeding the PSU specifications and "just a very hot GPU".
Sample variance affects the wattage... yes, and?? You might wanna talk about the specific golden chip; I am interested in volume averages.
Unless I'm not understanding what you mean by "sample variance", it can be as much as a 50W difference. Remember: I'm not saying Vega is efficient. I'm overall not impressed by it, but I just think you're over-estimating how bad it is.
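For anyone who wants to rerun it, the linear estimate a few lines up can be written out as a short script. This is just a restatement of that calculation with the same inputs (280W at 1440MHz, overclocked to 1700MHz); nothing in it is a measurement, and as noted in the thread it deliberately ignores voltage and temperature.

```python
# Linear clock-scaling estimate from the post above: B = A + A * (1 - S/O)
# S = stock clock (MHz), O = overclocked clock (MHz), A = stock wattage.
# Voltage and temperature are ignored, so this is an optimistic estimate.

def linear_oc_power(stock_mhz: float, oc_mhz: float, stock_watts: float) -> float:
    return stock_watts + stock_watts * (1 - stock_mhz / oc_mhz)

estimate = linear_oc_power(stock_mhz=1440, oc_mhz=1700, stock_watts=280)
print(round(estimate))        # ~323 W (the post rounds an intermediate step and gets 322)
print(round(375 - estimate))  # ~52 W short of the 375 W figure being debated
```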
You kind of just admitted yourself that it is inaccurate. Even without considering temperature, that equation doesn't tell the whole picture. Take high-school physics for example. They'll tell you Earth's gravity is 9.8m/s^2, which is true, but the equations you're told to solve don't account for air resistance, terminal velocity, starting velocity, air density, and so on. So when the teacher asks "how fast will the penny have moved by the time it hits the ground?", multiplying the distance by 9.8m/s^2 will give you a very, very wrong answer. Processor wattage is no different.
OK, now you're conflating the unknowns with the approximations within the model. Like... you really, really need to know the initial velocity(!) to have any clue about the final velocity, while not having the GPU temp merely suggests a rough model, an approximation. And since you're being pedantic, you forgot the height above sea level and a dozen other initial conditions 🙂 And even if you had all of them, you still wouldn't be able to solve this "simple" problem analytically, because as far as I know there is no general and exact equation of motion which accounts for air resistance. So again you're back to approximations and some kind of experimental model. But OK, so far we agree.
Well, given the variables I have (so no temperature, no fan speeds, no voltage, etc.)
Would you have been any happier if you had temp, fan speed, and voltage? Would this attempt at a power calculation have been any different?
the equation is left to be:
((1-(S/O))*A)+A = B
where S is the stock frequency, O is the overclocked frequency, A is the stock wattage, and B is the final overclocked power draw.
((1-(1440/1700))*280)+280 = B
((1-0.85)*280)+280 = B
(0.15*280)+280 = B
42+280 = 322
So that's a 53W difference from your 375W figure, without considering other variables that may improve or worsen wattage. This is the difference between nearly exceeding the PSU specifications and "just a very hot GPU".
So after chastising me for being overly simplistic in my pessimistic approximation, you yourself went with the most basic, linear approximation (which you yourself said is wrong), and the one that everyone should know is impossible in the real world. What happened to common sense, why not add a few %? Anyone with a clue should know that power scaling linearly past the max boost clock, all the way to 1700MHz, is VEEEERY optimistic. <-- DON'T YOU AGREE? Take a look: Vega FE at 1650MHz and 1.2V, 375 Watts from the 2x 8-pins alone; overclocking is kinda broken because once you OC, the GPU goes to 1.2V. https://www.youtube.com/watch?v=IfSGboBX1QE
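To show why that 1.2V figure matters to the linear-vs-not argument, here is the same f*V^2 rule of thumb from earlier applied to the numbers quoted in the thread. The 1.00V baseline at 1440MHz is purely an assumption (the real stock voltage isn't given here), so only the trend, not the exact wattage, should be taken from it.

```python
# f * V^2 rule of thumb applied to the figures quoted in the thread.
# The baseline voltage (1.00 V at 1440 MHz / 280 W) is an assumption.

def scaled_power(p_base: float, f_base: float, v_base: float, f: float, v: float) -> float:
    return p_base * (f / f_base) * (v / v_base) ** 2

p0, f0, v0 = 280.0, 1440.0, 1.00

print(round(scaled_power(p0, f0, v0, f=1650.0, v=1.00)))  # clock increase only: ~321 W
print(round(scaled_power(p0, f0, v0, f=1650.0, v=1.20)))  # same clock at 1.2 V: ~462 W
```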
And since you're being pedantic, you forgot the height above sea level and a dozen other initial conditions 🙂 And even if you had all of them, you still wouldn't be able to solve this "simple" problem analytically, because as far as I know there is no general and exact equation of motion which accounts for air resistance. So again you're back to approximations and some kind of experimental model. But OK, so far we agree.
I didn't list all variables, just an example of many - you are right though. But, it isn't pedantry when your result is well beyond the margin of error. That's my only point, and 375W based on my rough calculation is beyond the margin of error.
Would you have been any happier if you had temp, fan speed, and voltage? Would this attempt at a power calculation have been any different?
Yes, in addition to actual measured wattage (so not TDP). But despite what you may think, I'm not that picky either. After all, you were giving an estimate, just a very high one.
So after chastising me for being overly simplistic in my pessimistic approximation
I'm not chastising you for being over-simplistic; I don't care about an approximation. My gripe is you intentionally over-estimated, pitted a product against another of a higher performance tier (keep in mind the 1080Ti has more transistors than the 1080), and used that as a way to ridicule Vega's efficiency. To reiterate, I don't think Vega is that efficient either, but it isn't that bad.
you yourself went with the most basic, linear approximation (which you yourself said is wrong), and the one that everyone should know is impossible in the real world.
You asked, I obliged. The reason I did that was to show that a calculated rough estimate should have been a lot lower than what you said, which it was.
Anyone with a clue should know that power scaling linearly past the max boost clock, all the way to 1700MHz, is VEEEERY optimistic. <-- DON'T YOU AGREE?
Yup, but who says that's a necessary scenario? Again, RX Vega isn't supposed to be pitted against the 1080 Ti, so what's the point in making such a comparison when discussing efficiency? From another perspective: overclock a 1080 to perform like a 1080 Ti and you'll find its efficiency isn't so stellar either (when compared to the 1080 Ti).
Well, OK then. If you think that perf-wise Vega RX should be a 1080 competitor, then indeed there is no sense comparing it on the same performance basis with the 1080 Ti. But that is more pessimistic than anything I have envisioned for Vega RX. Reported to AMD for being a Debbie Downer 😀
BTW, why do you think that perf-wise Vega is more of a 1080 than a 1080 Ti competitor? Wouldn't, by any chance, Vega's lowish clocks relative to Pascal have anything to do with Vega's performance, i.e. wouldn't Vega being only at 1080 level have anything to do with its power consumption, i.e. being TDP-limited? So there you go -> efficiency=performance 😉
At this point we just do not know anything for sure, one way or the other. Until Hilbert or some other reliable review site releases reviews with benchmarks, nothing is certain. With that said, my very rough guess is that it will land in between a GTX 1080 and a 1080 Ti. That would put AMD unfortunately late to the game, but it would also have the very much needed benefit of lowering prices for the high-end gaming GPU market. So far Vega (the Frontier version) doesn't look like much of a game changer for mining, so hopefully the stock will not get swallowed up immediately by mining farms in China that run hundreds of GPUs on $0.01 electricity costs. 😛
If you think that perf-wise Vega RX should be a 1080 competitor, then indeed there is no sense comparing it on the same performance basis with the 1080 Ti. But that is more pessimistic than anything I have envisioned for Vega RX.
Based on my understanding, Vega is supposed to be a healthy level ahead of the 1080, but distinctly behind the 1080Ti. I never heard AMD claiming it was ever meant to compete against the 1080Ti. Considering AMD likes to cherry pick results, that's saying something. I can see why you'd think my view is pessimistic, but I don't think Vega is a bad product. It's a little more power hungry than I'd like to see, I'm slightly disappointed that's the best they could do, and I don't really understand who the target demographic is, but I think it's a solid product. I know many people here expect it to outperform the 1080Ti, and I find that a bit unrealistic.
BTW, why do you think that perf-wise Vega is more of a 1080 than a 1080 Ti competitor?
Because AMD said so, and because from the few results I've seen of Vega FE, Vega RX isn't bound to be that much different.
Wouldn't, by any chance, Vega's lowish clocks relative to Pascal have anything to do with Vega's performance,
No? They're completely different architectures, to the point they don't even have the same memory controller. They're so different that you can't compare clock to clock.
i.e. wouldn't Vega being only at 1080 level have anything to do with its power consumption, i.e. being TDP-limited?
It is in a brand's interest to stay within a certain power envelope (not TDP, because that's not the same thing). This is why you'll rarely see reference GPUs exceed 300W in benchmarks. I'm sure there is OC headroom for Vega, and I am fully aware it is relatively inefficient compared to Pascal; again, I'm not denying that. I doubt AMD themselves will push Vega to reach 1080 Ti levels, even if that's theoretically possible. However, I do think 3rd-party companies like Sapphire, Asus, Gigabyte, and so on will do their own "superclocked" variants.
Why would AMD release a card to compete with the 1080, which is ~20% faster than the previous-generation 980 Ti, when they already had the Fury X to compete with that (and it already competed well, disregarding VRAM limitations, and now competes better)? Is Vega going to be 20-30% faster than a Fury X? They might as well not release a card at all. It doesn't make sense to me for AMD to be targeting this card against the 1080. Not at all.