Sapphire To Release Vapor-X Edition for the Radeon RX 7900 XTX & XT

Holy s.... Vapor-X is back.
Nooice.
Welcome back to the game Sapphire!
That actually seems rather promising, since I don't think they'd bother if the GPU was going to be underwhelming.
yay
schmidtbag:

That actually seems rather promising, since I don't think they'd bother if the GPU was going to be underwhelming.
Let's hope so. I have felt like AMD has been quite subdued ever since the 4090 and 4080 were released. That has made me worried, considering AMD earlier seemed very proud of the MCM technology being applied to GPUs. I reckon chances are high that the next Nvidia GPU generation will also be MCM.
Great to hear !
Kaarme:

Let's hope so. I have felt like AMD has been quite subdued ever since the 4090 and 4080 were released. That has made me worried, considering AMD earlier seemed very proud of the MCM technology being applied to GPUs. I reckon chances are high that the next Nvidia GPU generation will also be MCM.
No doubt as MCM in some form is definitely the way forward.
Had an R9 280X Vapor-X back in the day; I'm glad it's making a comeback.
The Vapor-X RX 7900 XTX has got me excited; now it all boils down to price and availability.
Kaarme:

Let's hope so. I have felt like AMD has been quite subdued ever since the 4090 and 4080 were released. That has made me worried, considering AMD earlier seemed very proud of the MCM technology being applied to GPUs. I reckon chances are high that the next Nvidia GPU generation will also be MCM.
Nope. They have a greater chance of having 3D cache first (but they'd have to pay extra for it, unlike AMD, who paid for half of the fab work). Nvidia has been playing at a similar game but is way behind AMD and Intel. Intel will have an MCM GPU as soon as they decide to hit the enthusiast market, but Nvidia has far more R&D on the topic ahead of them. The name of the game is latency, and Nvidia is far, far behind, as Apple, AMD, and Intel all have some form of MCM on the market today. And mind you, TSMC is doing both Apple and AMD, but they are totally different processes.
tunejunky:

Nope. They have a greater chance of having 3D cache first (but they'd have to pay extra for it, unlike AMD, who paid for half of the fab work). Nvidia has been playing at a similar game but is way behind AMD and Intel. Intel will have an MCM GPU as soon as they decide to hit the enthusiast market, but Nvidia has far more R&D on the topic ahead of them. The name of the game is latency, and Nvidia is far, far behind, as Apple, AMD, and Intel all have some form of MCM on the market today. And mind you, TSMC is doing both Apple and AMD, but they are totally different processes.
We wouldn't know even if Nvidia had been experimenting with MCM for years now and just decided it wasn't yet ready for this 4000 generation. The general specs of Ada Lovelace must have been written on paper already a couple of years ago. After all, Nvidia has got a whole lot more budget than AMD, so while Jensen loves his money, I don't believe Nvidia would be experiencing as much pressure to get immediate financial results from R&D as AMD (even AMD doesn't, does it?). Unless Nvidia stock owners are really anal, but seeing how Jensen has been behind the wheel since the beginning of Nvidia's incredible journey to the top of the fabless semiconductor business, I bet the stock owners let Mr. Leather Jacket do whatever he wants.
Kaarme:

We wouldn't know even if Nvidia had been experimenting with MCM for years now and just decided it wasn't yet ready for this 4000 generation. The general specs of Ada Lovelace must have been written on paper already a couple of years ago. After all, Nvidia has got a whole lot more budget than AMD, so while Jensen loves his money, I don't believe Nvidia would be experiencing as much pressure to get immediate financial results from R&D as AMD (even AMD doesn't, does it?). Unless Nvidia stock owners are really anal, but seeing how Jensen has been behind the wheel since the beginning of Nvidia's incredible journey to the top of the fabless semiconductor business, I bet the stock owners let Mr. Leather Jacket do whatever he wants.
Nvidia has documented R&D on MCM since (right) before AMD announced it on their roadmap. However, there is a world of difference between "me too" research and years of engineering samples (ES) at different nodes and flavors of substrates. To be anywhere close in the real world, Nvidia would've had to contract custom fab work, and they haven't even had more than one prototype. Intel's MCM isn't as elegant, as cost-effective, or as high-yield as AMD's, but it's real, and they've spent over a billion dollars (incl. fab updates) to get to the point where their CPUs have double the yield of the 10th generation. Apple of course has the "stitched together" MCM, which makes it the third totally different process to achieve similar goals. Nvidia has a press release.
tunejunky:

Nvidia has documented R&D on MCM since (right) before AMD announced it on their roadmap. However, there is a world of difference between "me too" research and years of engineering samples (ES) at different nodes and flavors of substrates. To be anywhere close in the real world, Nvidia would've had to contract custom fab work, and they haven't even had more than one prototype. Intel's MCM isn't as elegant, as cost-effective, or as high-yield as AMD's, but it's real, and they've spent over a billion dollars (incl. fab updates) to get to the point where their CPUs have double the yield of the 10th generation. Apple of course has the "stitched together" MCM, which makes it the third totally different process to achieve similar goals. Nvidia has a press release.
To be fair, chiplet design is an old story, with IBM being the first to play with it - the 3081 mainframe. Nvidia has played with chiplets at least as much as AMD, if not more, because they have a problem: their AI engine doesn't do well with chiplets, and Nvidia wants to keep their tensor cores because, unlike AMD, they built an entire universe of applications on top of them: neural networks, AI "smart" cars, etc. AMD has nothing to offer in this field, so for them the transition was easy. I use a 3090 Ti for deep learning and it is amazing: my own in-house learning network! After 3 months of training it has a precision of 68% in image recognition, and this with just a consumer card! I also use CUDA-accelerated applications at work - AMD has nothing in this field - so they didn't have to think about this. I like the fact that Nvidia was supportive of science (and of course their profit) - there is more to GPUs than games - and AMD failed to recognize that. A very interesting scientific article was published by some universities in collaboration with Nvidia in 2017: https://research.nvidia.com/sites/default/files/publications/ISCA_2017_MCMGPU.pdf
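(For context on the kind of consumer-GPU training run described above, here is a minimal sketch of an image-classification training loop on a CUDA card. It assumes PyTorch and torchvision; the dataset (CIFAR-10), model (ResNet-18), and hyperparameters are illustrative placeholders, not what the poster actually used.)

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Use the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Basic preprocessing: convert images to tensors and normalize to roughly [-1, 1].
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
train_set = datasets.CIFAR10(root="data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)

# Train a small ResNet from scratch on the 10 CIFAR classes.
model = models.resnet18(num_classes=10).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

for epoch in range(10):
    model.train()
    correct, total = 0, 0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        correct += (outputs.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"epoch {epoch + 1}: train accuracy {100.0 * correct / total:.1f}%")

(The printed training accuracy is only a sanity check; a held-out validation set would be needed before quoting a figure like the 68% precision mentioned above.)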
Kaarme:

Let's hope so. I have felt like AMD has been quite subdued ever since the 4090 and 4080 were released. That has made me worried, considering AMD earlier seemed very proud of the MCM technology being applied to GPUs. I reckon chances are high that the next Nvidia GPU generation will also be MCM.
I think they got subdued because the RTX 4000 series ended up being a lot more efficient than most had expected. AMD knew they weren't going to beat the 4090 in performance, and they most likely know they're still going to under-perform in raytracing, but they could at least compete against the 4080 in terms of performance-per-dollar and performance-per-watt. Well, the value proposition doesn't really matter when you're still paying 4 figures, and AMD was most likely targeting Nvidia's overestimated wattage. So now they've got a product that's not quite as special as they originally anticipated. I think it'll still be good, and I'm still feeling pretty confident I'll be getting a 7700 [XT] if the price is right, but this is likely another round where Nvidia is the overall winner.
barbacot:

To be fair, chiplet design is an old story, with IBM being the first to play with it - the 3081 mainframe. Nvidia has played with chiplets at least as much as AMD, if not more, because they have a problem: their AI engine doesn't do well with chiplets, and Nvidia wants to keep their tensor cores because, unlike AMD, they built an entire universe of applications on top of them: neural networks, AI "smart" cars, etc. AMD has nothing to offer in this field, so for them the transition was easy. I use a 3090 Ti for deep learning and it is amazing: my own in-house learning network! After 3 months of training it has a precision of 68% in image recognition, and this with just a consumer card! I also use CUDA-accelerated applications at work - AMD has nothing in this field - so they didn't have to think about this. I like the fact that Nvidia was supportive of science (and of course their profit) - there is more to GPUs than games - and AMD failed to recognize that. A very interesting scientific article was published by some universities in collaboration with Nvidia in 2017: https://research.nvidia.com/sites/default/files/publications/ISCA_2017_MCMGPU.pdf
To be fair, an excellent post.
Nvidia's MCM is different; I can't remember the name for it, but they're chiplets designed so that each one can have a different function. Edit: it's called composable on-package architecture (COPA), basically allowing them to make many different variations.
Our analysis shows that DL-optimized COPA-GPUs will provide impressive per-GPU training and inference performance improvements, while still efficiently supporting scaled-down HPC-targeted designs. DL-optimized COPA-GPUs will also result in reduced datacenter cost by minimizing the number of GPUs required to achieve scale-out training performance targets, making COPA-GPU an attractive paradigm for increasing individual and aggregate GPU performance without over-optimizing the product for any specific domain.
schmidtbag:

I think they got subdued because the RTX 4000 series ended up being a lot more efficient than most had expected. AMD knew they weren't going to beat the 4090 in performance, and they most likely know they're still going to under-perform in raytracing, but they could at least compete against the 4080 in terms of performance-per-dollar and performance-per-watt. Well, the value proposition doesn't really matter when you're still paying 4 figures, and AMD was most likely targeting Nvidia's overestimated wattage. So now they've got a product that's not quite as special as they originally anticipated. I think it'll still be good, and I'm still feeling pretty confident I'll be getting a 7700 [XT] if the price is right, but this is likely another round where Nvidia is the overall winner.
The efficiency was a pleasant surprise to some, but many had expected it with the node shrink. AMD wasn't any more worried about the 4090 than the 1080 when they released the RX 5800. Not only was AMD not targeting that slice of heaven or hell - the yield is unacceptable to AMD. They don't have the bucks to waste 20-30% of each wafer on a limited fab run. Just being honest. Plus, this is something Nvidia does every generation, the difference being the post-crypto price. The Nvidia halo production is literally paid for by marketing every year; the number of units sold has never justified the cost of production alone. But Nvidia is almost as good at marketing as Intel (the king).
I think the main reason Nvidia hasn't gone MCM is that they can sell GPUs at $20k+ a pop, which more than makes up for the increased chip cost and low yields, so instead they pursue dies near the reticle limit for maximum performance, with the likes of Volta and Hopper. I don't think we'll see a true chiplet design from Nvidia until it actually provides a performance advantage over max die size plus NVLinking multiple cards, or perhaps until manufacturing is able to keep up with demand for lower-end cards.
tunejunky:

AMD wasn't any more worried about the 4090 than the 1080 when they released the RX 5800.
What?
tunejunky:

Not only was AMD not targeting that slice of heaven or hell - the yield is unacceptable to AMD. They don't have the bucks to waste 20-30% of each wafer on a limited fab run. Just being honest. Plus, this is something Nvidia does every generation, the difference being the post-crypto price. The Nvidia halo production is literally paid for by marketing every year; the number of units sold has never justified the cost of production alone. But Nvidia is almost as good at marketing as Intel (the king).
A 60% functional AD102 is still 15% more powerful than the 4080 - AMD's target for the 7900 XTX. That is really impressive. Come 2024, they'll be selling 4-5 SKUs based on AD102 (even the 4090 is heavily cut down), with both GDDR6 and GDDR6X.