AMD Big Navi would get Infinity Cache

schmidtbag:

Considering it's "infinity", makes me wonder if this is actually supposed to be a shared cache for APUs. Right now, memory bandwidth is the #1 issue for APUs and the caches they have are just simply not sufficient. dGPUs don't really need a fancy cache, since you can just widen the memory bus for a major bandwidth increase.
Wider bus increases the cost and power consumption this could great way to increase the bandwidth on igpu and dgpu
Undying:

A wider bus increases cost and power consumption; this could be a great way to increase bandwidth on iGPUs and dGPUs.
A decently large cache is far more expensive. To my understanding, a wider bus isn't necessarily more power hungry if the total number of components doesn't go up. For example, whether you have 8GB on a 256-bit bus or on a 384-bit bus, I don't think the power consumption is going to change much. Maybe I'm wrong - I don't have solid evidence of this - but the memory chips themselves aren't necessarily working harder. The GPU itself is working harder (and therefore will use more power) because it is provided more bandwidth to prevent downtime, but the same could be said of a larger cache.
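As a rough illustration of the arithmetic behind this exchange - a minimal sketch assuming 14 Gbps GDDR6, a common speed for cards of this era (the data rate is an assumption for illustration, not a confirmed spec):

[code]
# Peak theoretical GDDR bandwidth: bus width (bits) x data rate (Gbps) / 8.
# The 14 Gbps data rate is assumed for illustration only.

def gddr_bandwidth(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical bandwidth in GB/s for a GDDR memory subsystem."""
    return bus_width_bits * data_rate_gbps / 8

for bus in (256, 384, 512):
    print(f"{bus}-bit @ 14 Gbps: {gddr_bandwidth(bus, 14.0):.0f} GB/s")

# 256-bit @ 14 Gbps: 448 GB/s
# 384-bit @ 14 Gbps: 672 GB/s  (+50% bus width, +50% bandwidth)
# 512-bit @ 14 Gbps: 896 GB/s
[/code]

Whether power scales along with that extra bandwidth is exactly the point under debate here.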
Silva:

Didn't know about the collab part, but it must be expensive too (and hot). AMD has to work with what's available. 256-bit is to make it more affordable and competitive; I hope it's enough to not starve the card.
That statement is as true as stating that nVidia would never get graphics memory types AMD co-developed. History is clear on that note. AMD's reasons are most likely cost and power draw.
schmidtbag:

A decently large cache is far more expensive. To my understanding, a wider bus isn't necessarily more power hungry if the total number of components doesn't go up. For example, whether you have 8GB on a 256-bit bus or on a 384-bit bus, I don't think the power consumption is going to change much. Maybe I'm wrong - I don't have solid evidence of this - but the memory chips themselves aren't necessarily working harder. The GPU itself is working harder (and therefore will use more power) because it is provided more bandwidth to prevent downtime, but the same could be said of a larger cache.
Having a 384-bit bus vs. a 256-bit one incurs only a 50% power-draw penalty on the entire memory subsystem, and it costs proportionally more for the memory chips plus PCB complexity.
cucaulay malkin:

Yes, except your examples don't reflect the scope of the situation here. AMD absolutely needs that bandwidth; that's why they're introducing this cache - they're lacking bandwidth.
Not sure; you can already do a lot with the bandwidth AMD already has, and a cache isn't always for that (it quickly hits a limit and won't boost the real bandwidth)... Anyway, we'll soon find out about that trick.
I'm all in for innovation. If AMD has developed a process that'll effectively shorten the path to data while guaranteeing a boost to GPU performance... I say "go for it". This should be good.
It would be so funny if all the rumors are completely off and this "Big Navi" turns out to be something very different. I mean, AMD feeding bulls*** to leakers over the last year is totally possible! This trademarked "Infinity Cache" could literally be anything... it doesn't even have to be connected to Navi at all. Maybe it's the new name for the Zen 3 cache, instead of last year's "Game Cache".
Fox2232:

Having a 384-bit bus vs. a 256-bit one incurs only a 50% power-draw penalty on the entire memory subsystem, and it costs proportionally more for the memory chips plus PCB complexity.
I don't think I made myself clear: in my hypothetical situation, the number of memory chips would be the same, but in the lower-bit model, some chips would share the same bus. It doesn't appear this happens often, but it does happen. Take the new A6000, for example: it has a 384-bit bus despite having 48GB, so it very obviously has multiple chips per bus. Therefore, Nvidia could, in theory, double the bus width without affecting the number of memory chips. Doing so would not have as significant a power impact on the memory subsystem.
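For what it's worth, the chips-per-bus arithmetic sketched out - a minimal example assuming 16Gb (2GB) GDDR6 chips, which is an assumption rather than a confirmed board spec:

[code]
# Each GDDR6 chip exposes a 32-bit interface, so channels = bus width / 32.
# 48 GB on a 384-bit bus can't fit one 2 GB chip per channel,
# so chips must share channels ("clamshell" mode, as inferred above).

BUS_WIDTH_BITS = 384
TOTAL_CAPACITY_GB = 48
CHIP_CAPACITY_GB = 2  # assumed 16Gb (2GB) chips

channels = BUS_WIDTH_BITS // 32                       # 12 channels
total_chips = TOTAL_CAPACITY_GB // CHIP_CAPACITY_GB   # 24 chips
chips_per_channel = total_chips // channels           # 2 chips per channel

print(channels, total_chips, chips_per_channel)       # 12 24 2
[/code]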
I want to see if this "Infinity Cache" is really something revolutionary or just a marketing stunt like the "Game Cache" on AMD's Zen 2 CPUs, which, no matter what you call it, is still L3 cache... So AMD has "history" in this field, and with their launch approaching, the marketing machine is working at full speed. I wouldn't be surprised if a new headline appeared claiming Big Navi uses "quantum technology" or was developed in collaboration with aliens... At least that would be something spectacular, unlike Nvidia's black-leather-jacket man baking something in the oven... ...and then everybody would start talking again about bandwidth, chiplets, how great Lisa Su is, and so on. To tell you the truth, I'm a little disappointed by their marketing department - I was expecting some "leaked" benchmarks that would blow my socks off, not some fancy words...
schmidtbag:

I don't think I made myself clear: in my hypothetical situation, the number of memory chips would be the same, but in the lower-bit model, some chips would share the same bus. It doesn't appear this happens often, but it does happen. Take the new A6000, for example: it has a 384-bit bus despite having 48GB, so it very obviously has multiple chips per bus. Therefore, Nvidia could, in theory, double the bus width without affecting the number of memory chips. Doing so would not have as significant a power impact on the memory subsystem.
GDDR6 has a 32Gb variant = 4GB per chip.
It's rumored that this is the patent, or one of the patents, that makes up Infinity Cache. If that's the case, then this really is revolutionary. https://www.freepatentsonline.com/20200293445.pdf Edit: here is a recent video discussing said patent: [youtube=CGIhOnt7F6s]
All it is is the precursor to chiplets: if you have a large GPU and break its various functions down into several parts, you can theoretically design a system like the one in the video, where each chiplet has a specific task and everything is interlinked with Infinity Cache. You keep individual die sizes low so costs come down, and you can design dedicated architectures per GPU function instead of having to tie everything into a single, ever-growing monolithic GPU with higher costs, lower yields, more power draw, and higher temperatures. Take an Nvidia design as an example: you could offload the RT cores to their own dedicated chip.
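A minimal sketch of the yield argument behind that claim, using a simple Poisson defect model; the defect density and die areas below are made-up illustrative numbers, not foundry data:

[code]
import math

# Simple Poisson yield model: yield = exp(-defect_density * die_area).
DEFECTS_PER_MM2 = 0.001  # hypothetical defect density

def die_yield(area_mm2: float) -> float:
    """Probability that a die of the given area has zero defects."""
    return math.exp(-DEFECTS_PER_MM2 * area_mm2)

print(f"500 mm^2 monolithic die: {die_yield(500):.1%}")  # ~60.7%
print(f"125 mm^2 chiplet:        {die_yield(125):.1%}")  # ~88.2%

# Chiplets are tested before packaging, so a defect scraps one small die
# rather than the whole GPU - which is where the cost advantage comes from.
[/code]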
ACEB:

All it is is the precursor to chiplets: if you have a large GPU and break its various functions down into several parts, you can theoretically design a system like the one in the video, where each chiplet has a specific task and everything is interlinked with Infinity Cache. You keep individual die sizes low so costs come down, and you can design dedicated architectures per GPU function instead of having to tie everything into a single, ever-growing monolithic GPU with higher costs, lower yields, more power draw, and higher temperatures. Take an Nvidia design as an example: you could offload the RT cores to their own dedicated chip.
So... I made an agreement with my kid: every time somebody writes "chiplet" or "HBM2" in this topic, I give him a buck - I have a feeling I'll have emptied my wallet by tonight... You, Sir, just cost me two bucks!
schmidtbag:

A decently large cache is far more expensive.
This is an area where I'd trust both the engineers and the salespeople to figure out which is the better option. It's worth keeping in mind, though it's by no means a perfect analogy, that each 77 square mm Zen 2 chiplet also houses 32MB of level 3 cache. With the yields TSMC are getting on their 7nm node now, it may just be that cache is both more effective and more economical.
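A back-of-the-envelope estimate based on those Zen 2 numbers; the assumption that roughly half the chiplet is L3 is a guess for illustration, not a die-shot measurement:

[code]
# Estimate mm^2 per MB of cache from the Zen 2 chiplet, then scale up
# to the 128 MB figure rumored later in this thread.
CHIPLET_AREA_MM2 = 77.0
L3_CAPACITY_MB = 32
L3_AREA_FRACTION = 0.5  # assumption: ~half the chiplet is L3

mm2_per_mb = CHIPLET_AREA_MM2 * L3_AREA_FRACTION / L3_CAPACITY_MB
print(f"~{mm2_per_mb:.2f} mm^2/MB -> 128 MB ~= {mm2_per_mb * 128:.0f} mm^2")
# ~1.20 mm^2/MB -> 128 MB ~= 154 mm^2
[/code]

By that (very loose) estimate, a large on-die cache would be a sizable but not absurd slice of a big 7nm GPU.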
Fox2232:

GDDR6 has a 32Gb variant = 4GB per chip.
According to what? I've only heard of a maximum of 16Gb, which to my understanding isn't even available yet. Remember, we're talking about GDDR6X here. Regardless, I doubt there's a major power difference between using a single 4GB chip and a pair of 2GB chips sharing the same bus, assuming all else is equal.
Exodite:

With the yields TSMC are getting on their 7nm node now, it may just be that cache is both more effective and more economical.
Indeed it will be, but it's still disproportionately more expensive than VRAM. Otherwise, what's the point of having VRAM?
schmidtbag:

According to what? I've only heard of a maximum of 16Gb, which to my understanding isn't even available yet. Remember, we're talking about GDDR6X here. Regardless, I doubt there's a major power difference between using a single 4GB chip and a pair of 2GB chips sharing the same bus, assuming all else is equal. Indeed it will be, but it's still disproportionately more expensive than VRAM. Otherwise, what's the point of having VRAM?
I don't think AMD is getting GDDR6X.
schmidtbag:

Indeed it will be, but it's still disproportionately more expensive than VRAM. Otherwise, what's the point of having VRAM?
Well, sure, though that's hardly a fair analogy in this case - it's not like we're talking about a situation where AMD will include 8 to 16 GB of cache on-die. GDDR6 is more expensive than regular DDR, and GDDR6X even more so. I would expect denser memory configurations to be disproportionately more expensive per chip, too, but that's just an assumption on my part.

Increasing bus width is incredibly expensive due to the added complexity of the boards. You may need more components, and the additional traces will mean relocating other components, using boards with higher layer counts, using more complex cooling solutions, and so on. Also, keep in mind that the additional memory controllers on the chip aren't free either - take a look at the Navi 10 floor plan, for example.

This may all amount to nothing, of course - we're just speculating on rumors - but my point is that it's not difficult to envision a situation where a large cache (the rumored 128 MB, perhaps) would be more efficient than widening the bus or using more exotic memory solutions.
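To make that trade-off concrete, a minimal weighted-average model; the hit rates and the on-die cache bandwidth below are hypothetical numbers for illustration, not leaked specs:

[code]
# Effective bandwidth under a simple hit/miss model:
# hits are served at cache speed, misses at VRAM speed.
def effective_bandwidth(hit_rate: float, cache_gbs: float, vram_gbs: float) -> float:
    return hit_rate * cache_gbs + (1 - hit_rate) * vram_gbs

VRAM_256BIT = 448.0   # 256-bit @ 14 Gbps GDDR6
VRAM_384BIT = 672.0   # 384-bit @ 14 Gbps GDDR6, for comparison
CACHE_BW = 1600.0     # hypothetical on-die cache bandwidth

for hit in (0.0, 0.25, 0.5):
    print(f"hit rate {hit:.0%}: {effective_bandwidth(hit, CACHE_BW, VRAM_256BIT):.0f} GB/s")

# hit rate 0%:  448 GB/s
# hit rate 25%: 736 GB/s  (already past the 672 GB/s of a 384-bit bus)
# hit rate 50%: 1024 GB/s
[/code]

Under those assumptions, even a modest hit rate into a fast on-die cache lets a 256-bit card behave like a much wider one.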
Exodite:

Increasing bus width is incredibly expensive due to the added complexity of the boards. You may need more components, and the additional traces will mean relocating other components, using boards with higher layer counts, using more complex cooling solutions, and so on.
Somehow, years ago it wasn't so expensive. My ancient Radeon 390 has a bus width of 512 bits! Nowadays even the 1500-euro elite cards don't get nearly that much. What happened to make it so expensive?
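For perspective, the raw numbers behind that question; the data rates below are the typical ones for those memory types, quoted from memory rather than verified spec sheets:

[code]
# Old wide-and-slow vs. new narrow-and-fast memory subsystems.
cards = {
    "R9 390   (512-bit GDDR5 @  6 Gbps)": (512, 6.0),
    "RTX 2080 (256-bit GDDR6 @ 14 Gbps)": (256, 14.0),
}
for name, (bus_bits, gbps) in cards.items():
    print(f"{name}: {bus_bits * gbps / 8:.0f} GB/s")

# R9 390:   384 GB/s
# RTX 2080: 448 GB/s - half the bus width, yet more bandwidth.
[/code]

Per-pin data rates more than doubled between those generations, which let designers cut trace counts while keeping bandwidth.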
Kaarme:

What happened to make it so expensive?
Milking, man, milking. Nvidia realized that customers are willing to pay more and more and more for virtually the same BOM. AMD simply followed suit... A duopoly is not really "competition"; a duopoly is just a form of monopoly, divided in two, in which the underdog can price higher simply because the leader is charging whatever they want. The GPU market stopped being competitive many, many years ago. I remember when there were 20 companies producing 2D chips... some ventured into 3D; the rest didn't survive.
wavetrex:

Milking, man, milking. Nvidia realized that customers are willing to pay more and more and more for virtually the same BOM. AMD simply followed suit...
Nvidia increased its margins, but it's not the same BOM. Stop lying.