Sapphire To Release Vapor-X Edition for the Radeon RX 7900 XTX & XT
Sapphire has revealed in a teaser that it will relaunch the "Vapor-X" graphics card series, which is cooled with the help of a vapor chamber, for the Radeon RX 7900 XTX and 7900 XT. This marks the return of the series after an absence of more than six years.
The Vapor-X series, famous for its high-end custom designs, debuted in 2008 with the Sapphire Radeon HD 3870 Vapor-X and used the same cooler with Vapor Chamber Technology ("VCT") as the Atomic and Toxic models. It is now making a comeback with the Radeon RX 7000 series.
I had to look it up, but the Sapphire Radeon R9 390X Vapor-X was the last card in the Vapor-X series, although the Toxic series with an AiO water cooler was still represented with the Radeon RX 6900 XT (test). The Radeon RX 7900 XTX and 7900 XT, along with manufacturers' custom designs based on the RDNA 3 architecture, could change that on December 13.
Sapphire's New Nitro RX 7900 GPU: A Teaser - 11/24/2022 09:48 AM
Sapphire has just hinted at yet another future card on social media, namely the Radeon RX 7900 series. ...
Review: Sapphire Radeon RX 6400 Pulse (4GB) - 09/08/2022 01:38 PM
Meet AMD's Radeon RX 6400. It has 4GB of RAM on a 64-bit bus and a whiff of Infinity Cache (16MB L3). This card might operate at Radeon RX 480 levels when connected over a PCIe x4 link (Gen 3.0/4.0). Rea...
Sapphire Adds 8GB SKU to Radeon RX 6500 XT Graphics Card line - 07/26/2022 08:55 AM
And that's unusual, but certainly a good move: 8 GB of video memory, twice the SKU's norm of 4 GB. The Sapphire Pulse RX 6500 XT 8 GB is designed similarly to the company's regular Pulse RX 65...
Intel Sapphire Rapids Workstation Specifications: Up to 56 Cores, 350W TDP - 07/25/2022 08:30 AM
VideoCardz has provided the Sapphire Rapids-WS CPU specs. These core-heavy chips target workstations, not data centers, unlike Sapphire Rapids-SP....
SAPPHIRE has produced a non-XT Radeon RX 6700 graphics card - 07/13/2022 09:28 AM
The Radeon RX 6700 is a GPU that sits below the Radeon RX 6750 XT / 6700 XT. It has 2,304 stream processors, an Infinity Cache of 80MB, 10GB of GDDR6 video memory, and a memory b...
Senior Member
Posts: 3366
Joined: 2013-03-10
Nope.
They have a greater chance of having 3D cache first (but they have to pay extra for it, unlike AMD, who paid for half of the fab).
Nvidia has been playing at a similar game but is way behind AMD and Intel. Intel will have an MCM GPU as soon as it decides to hit the enthusiast market, but Nvidia has far more R&D on the topic ahead of it. The name of the game is latency, and Nvidia is far, far behind, as Apple, AMD, and Intel all have some form of MCM on the market today.
And mind you, TSMC is fabbing for both Apple and AMD, but they are totally different processes.
We wouldn't know even if Nvidia had been experimenting with MCM for years and simply decided it wasn't ready for this 4000 generation. The general specs of Ada Lovelace must have been put on paper a couple of years ago already. After all, Nvidia has a whole lot more budget than AMD, so while Jensen loves his money, I don't believe Nvidia is under as much pressure as AMD to get immediate financial results from R&D (not that even AMD really is, is it?). Unless Nvidia's stockholders are really anal, but seeing how Jensen has been behind the wheel since the beginning of Nvidia's incredible journey to the top of the fabless semiconductor business, I bet they let Mr. Leather Jacket do whatever he wants.
Senior Member
Posts: 3255
Joined: 2017-08-18
Nvidia has had documented R&D on MCM since (right) before AMD announced it on their roadmap.
However, there is a world of difference between "me too" research and years of engineering samples at different nodes and flavors of substrates.
To be anywhere close in the real world, Nvidia would've had to contract custom fab work, and they haven't had more than one prototype.
Intel's MCM isn't as elegant, cost-effective, or high-yield as AMD's, but it's real, and they've spent over a billion dollars (incl. fab updates) to get to the point where their CPUs have double the yield of the 10th generation.
Apple, of course, has the "stitched-together" MCM, which makes it the third totally different process to achieve similar goals.
Nvidia has a press release.
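To put rough numbers on the yield point above, here is a minimal sketch assuming the classic Poisson defect model (per-die yield = e^(-D*A)); the defect density, total silicon area, and cost per mm2 are illustrative placeholders, not figures for any real process:

import math

def die_yield(defect_density, area_mm2):
    # Poisson defect model: probability a die of this area is fault-free.
    return math.exp(-defect_density * area_mm2)

def silicon_cost_per_product(total_area_mm2, n_chiplets, defect_density,
                             cost_per_mm2=0.10):
    # Splitting one big die into n smaller chiplets makes each die
    # exponentially more likely to be defect-free; defective dies are
    # waste, so effective cost per good die = raw die cost / yield.
    area_each = total_area_mm2 / n_chiplets
    y = die_yield(defect_density, area_each)
    return n_chiplets * (area_each * cost_per_mm2) / y

if __name__ == "__main__":
    D = 0.002   # defects per mm^2 (placeholder)
    A = 600.0   # total silicon area in mm^2 (placeholder)
    for n in (1, 2, 4, 6):
        print(f"{n} chiplet(s): per-die yield {die_yield(D, A / n):.1%}, "
              f"good-silicon cost ${silicon_cost_per_product(A, n, D):.0f}")

With these placeholder numbers, cutting one 600 mm2 die into four chiplets takes per-die yield from roughly 30% to roughly 74% and more than halves the cost of good silicon, which is the effect described above. Note the sketch ignores the packaging and inter-die interconnect costs that chiplets add.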
Senior Member
Posts: 840
Joined: 2007-09-24
Nvidia has had documented R&D on MCM since (right) before AMD announced it on their roadmap.
However, there is a world of difference between "me too" research and years of engineering samples at different nodes and flavors of substrates.
To be anywhere close in the real world, Nvidia would've had to contract custom fab work, and they haven't had more than one prototype.
Intel's MCM isn't as elegant, cost-effective, or high-yield as AMD's, but it's real, and they've spent over a billion dollars (incl. fab updates) to get to the point where their CPUs have double the yield of the 10th generation.
Apple, of course, has the "stitched-together" MCM, which makes it the third totally different process to achieve similar goals.
Nvidia has a press release.
To be fair, chiplet design is an old story, with IBM being the first to play with it (the 3081 mainframe). Nvidia has played with chiplets at least as much as AMD, if not more, but they have a problem: their AI engine doesn't do well with chiplets, and Nvidia wants to keep its tensor cores because, unlike AMD, it has built an entire universe of applications on top of them: neural networks, AI "smart" cars, etc. AMD has nothing to offer in this field, so for them the transition was easy. I use a 3090 Ti for deep learning and it is amazing: my own in-house learning network! After 3 months of training it has a precision of 68% in image recognition, and this with just a consumer card! I also use CUDA-accelerated applications at work; AMD has nothing in this field, so they didn't have to think about it. I like the fact that Nvidia was supportive of science (and of course of its profit). There is more to GPUs than games, and AMD failed to recognize that.
A very interesting scientific article published by some universities in collaboration with Nvidia in 2017: https://research.nvidia.com/sites/default/files/publications/ISCA-2017-MCMGPU.pdf
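For a sense of what such consumer-card training looks like in code, here is a minimal sketch assuming PyTorch and torchvision are installed; the dataset (CIFAR-10), network (ResNet-18), and hyperparameters are illustrative stand-ins, not the poster's actual in-house setup:

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# Use the GPU (CUDA) when present, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

transform = T.Compose([T.ToTensor(),
                       T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = torchvision.models.resnet18(num_classes=10).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

model.train()
for epoch in range(2):  # a real run trains far longer than this
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()   # backprop runs on the GPU via CUDA kernels
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")

A months-long run like the one described would add a validation split, checkpointing, and a learning-rate schedule on top of this loop.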
Senior Member
Posts: 7237
Joined: 2012-11-10
I think they got subdued because the RTX 4000 series ended up being a lot more efficient than most had expected. AMD knew they weren't going to beat the 4090 in performance, and they most likely know they're still going to underperform in ray tracing, but they could at least compete against the 4080 in terms of performance-per-
Senior Member
Posts: 3255
Joined: 2017-08-18
Nope.
They have a greater chance of having 3D cache first (but they have to pay extra for it, unlike AMD, who paid for half of the fab).
Nvidia has been playing at a similar game but is way behind AMD and Intel. Intel will have an MCM GPU as soon as it decides to hit the enthusiast market, but Nvidia has far more R&D on the topic ahead of it. The name of the game is latency, and Nvidia is far, far behind, as Apple, AMD, and Intel all have some form of MCM on the market today.
And mind you, TSMC is fabbing for both Apple and AMD, but they are totally different processes.