

Radeon Vega 20 Will Have XGMI Interconnect

by Hilbert Hagedoorn on: 09/06/2018 08:47 AM | source: phoronix | 30 comment(s)

Vega 20 is expected to get a new interface called XGMI, a high-speed peer-to-peer interconnect along the lines of NVIDIA's NVLink. The feature surfaced in a new set of AMDGPU Linux driver patches.

XGMI is based on Infinity Fabric, AMD's unified interconnect used across its product lines, processors included, and is essentially AMD's alternative to NVIDIA's NVLink for connecting GPUs to one another. Phoronix spotted the new entries in the AMDGPU Linux driver patches. For now they can be found on the amd-gfx mailing list, but they will likely be queued via -next for introduction in the Linux 4.20~5.0 kernel as part of AMD's latest Vega 20 enablement work.








Related Stories

AMD Radeon Vega 12 and Vega 20 Wave Hello From Ashes Of The Singularity Database - 06/19/2018 05:43 PM
It is getting a little too obvious who is planting small Vega leaks into the Ashes Of The Singularity database - I mean, a repetitive pattern each and every year. And what company is all over ...

AMD Announces 7nm Radeon Vega Instinct with 32 GB HBM2 - 06/06/2018 05:24 AM
AMD at Computex just shared the word that it is announcing 7nm Radeon RX Vega; the first model will be a deep-learning SKU, the 7nm Radeon Instinct with 32 GB HBM2, ...

Radeon Vega20 Gets Spotted in Linux AMDGPU driver - 04/02/2018 08:52 AM
Interesting: some Vega20 Linux patches have been listed, posted by somebody from AMD, likely a driver developer. We mentioned Vega20 a few times in the past already, and that would be a...

Review: EK-MLC Phoenix 360 AIO CPU & GPU (Radeon Vega) Liquid Cooling - 02/09/2018 10:39 AM
EK recently replaced the Predator AIO range with the new revamped Phoenix series. We test the 360mm EK-MLC (Modular Liquid Cooling) Phoenix series liquid cooler and will add a Radeon Vega 56 from AMD...

Intel Announces 8th Gen Core i7 Processor with Radeon Vega M Graphics - 01/08/2018 09:56 AM
Intel is launching a first-of-its kind processor: the 8th Gen Intel Core processor with Radeon RX Vega M Graphics. Packed with features and performance crafted for gamers, content creators and fans of...




Vananovion
Senior Member



Posts: 153
Joined: 2017-08-31

#5582159 Posted on: 09/07/2018 10:58 AM
I second what Fox2232 said. If you want to know more, Gamers Nexus has a great video explaining why AMD had to use HBM2 on Vega, even though it made the cards more expensive.

I'd also add that HBM is a fairly new technology, while GDDR has been around for quite some time. I like to think of it as HDDs vs. SSDs - SSDs used to be out of reach of regular consumers when they first came out. Nowadays, no one sane uses an HDD for any sort of performance use-case. It is still some way off, but I think the same is going to happen with GDDR and HBM - GDDR will move down to budget solutions, while HBM will take over the mainstream and high-end stuff.

Agent-A01
Senior Member



Posts: 11568
Joined: 2010-12-27

#5582300 Posted on: 09/07/2018 04:28 PM
(quoting an earlier post by Fox2232:)

What do you mean by "GDDR6 is faster than current implementations of HBM2"?
Faster per mm2 of memory chip?
Faster per chip?
Faster per watt?
Faster per $?

I think you mean faster per pin. But that's not very important, since HBM2 has an order of magnitude more pins on a tiny chip.
I have read Micron's official "blogger" post where he went into all the great things about GDDR6, comparing it to GDDR4/5, mentioning power efficiency, and ending with: "if you need more... HBM2."

Why do people always bring AMD's power-capped (300W) cards into the discussion?
Don't you and others get the fact that if those cards used GDDR, they would have even less power left for the GPU once AMD placed enough GDDR5(X) chips on the board?
When you think about AMD's cards with HBM which eat 300W (and gain additional performance once you increase the power limit), they would definitely be worse cards with whatever GDDR version was available at the time.

As for cost: people who never saw any pricing material have said for years: "HBM costs a fortune. HBM2 costs a fortune."
It would be lovely if they ever cared to post a comparison of chip capacity versus price against a few GDDR5(X)/6 parts from a few manufacturers.


Edit: And, btw... practicality...
Any SoC, any high-performance mobile device (from notebook to cellphone), anything in a data center where cost of ownership matters more than initial price.

Why? Small form factor. High bandwidth per package. Low power consumption per transferred bit of data. High data density.

Do you think that $700~1000 cellphones should not use HBM2? I think it is exactly what they should be using at that price point to at least look justified. And the consumer at least gets much faster and more power-efficient memory.

Faster as in memory bandwidth.

AFAIK Fury X > Vega 64 in memory bandwidth at ~500 GB/s, with Vega being lower.

For comparison, the upcoming 2080 Ti with a 352-bit bus using GDDR6 has 616 GB/s total bandwidth, which is much more than current consumer cards.

Even the Titan V with 3072-bit HBM2 barely has more bandwidth than the 2080 Ti with its cut 352-bit bus (a fully unlocked 384-bit bus would put it above the Titan V, at 672 GB/s).

GDDR6 runs at 1.35V vs. GDDR5's 1.5V, with lower latency, so in practical use it shortens the gap with HBM.

Only in very power-constrained cases would GPUs need the marginal power savings.

As for AMD being power-capped, that's their own fault.
They wouldn't need the extra few watts of savings if they hadn't decided to maximize clock speeds out of the box to get closer to the performance of equivalent NV cards.

Less voltage and lower clocks would have saved them a ton of wattage, making HBM unnecessary.

As for mobile phones, a single GDDR6 chip has a tiny power envelope.
A single HBM stack is not going to bring in hours of extra usage.

Anyway, with the 2080 Ti using GDDR6, it's obvious that HBM doesn't bring enough benefits to offset the cost it incurs.
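The totals quoted in this exchange follow from a simple identity: theoretical peak bandwidth = bus width (pins) x per-pin data rate / 8 bits per byte. A quick sketch in Python; the 14 Gbps GDDR6 and ~1.7 Gbps HBM2 per-pin rates are commonly cited figures for these cards, assumed here rather than taken from the post:

```python
def bandwidth_gbps(bus_width_bits: int, rate_gbps_per_pin: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits * rate_gbps_per_pin / 8

# 2080 Ti: 352-bit GDDR6 bus at 14 Gbps per pin
print(f"{bandwidth_gbps(352, 14.0):.1f}")   # 616.0 GB/s
# Hypothetical fully enabled 384-bit bus at the same rate
print(f"{bandwidth_gbps(384, 14.0):.1f}")   # 672.0 GB/s
# Titan V: 3072-bit HBM2 at ~1.7 Gbps per pin
print(f"{bandwidth_gbps(3072, 1.7):.1f}")   # 652.8 GB/s
```

This is why a 3072-bit HBM2 card and a 352-bit GDDR6 card end up within a few percent of each other in aggregate bandwidth.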

Fox2232
Senior Member



Posts: 11809
Joined: 2012-07-20

#5582332 Posted on: 09/07/2018 05:50 PM
(quoting Agent-A01's post #5582300 above)
So you meant that meaningless per-pin bandwidth. One thing nobody really has to care about. One can easily have 8 HBM(2) packages around a GPU.
How many GDDR(X) packages can you fit on a PCB? How many watts will it eat to even compete in bandwidth with HBM(2)?
And IMC area - the way AMD boasted about HBM needing only a small IMC for its bandwidth...

When you talk small, HBM(2) is clearly better in everything except pricing, and even that is not that bad. When you talk big GPUs with huge requirements...
GDDR6 is needed, otherwise you would again be back in the days of stacking memory banks on both sides of the PCB.

Ad "AFAIK fury x > vega64 in memory bandwidth ~500GB/s with vega being lower":
It underscores the fact that you again overlook development in the technology you try to make look worse. Fury X had to use 4 HBM1 chips; Vega 56/64 uses 2 HBM2 chips.
And HBM2 has improved since then.

Please do not judge a memory technology based on the bandwidth of a product which could have used more or fewer memory chips depending on vendor decision.
I really wonder how you would phrase the following argument if the Titan V had 4x HBM2 chips...
"Even the Titan V with 3072-bit HBM2 barely has more bandwidth than the 2080 Ti with its cut 352-bit bus (a fully unlocked 384-bit bus would put it above the Titan V, at 672 GB/s)."

You should understand another important thing when comparing these technologies: HBM has very short traces. Papers on GDDR6 imply that trace length and shaping matter more than ever.
This means that with HBM, you are mainly looking at what the chip can do. With GDDR6, PCB design matters too, due to noise sensitivity.
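The per-pin vs. aggregate point can be made concrete with rough numbers. The per-pin rates below (HBM1 at 1 Gbps, Vega 64's HBM2 at ~1.89 Gbps, GDDR6 at 14 Gbps) are ballpark figures for the shipping parts, assumed here for illustration:

```python
# Aggregate bandwidth = pins x per-pin rate / 8; the per-pin rate alone is misleading.
cards = {
    # name: (total bus width in bits, per-pin data rate in Gbps)
    "Fury X (4x HBM1 stacks)":  (4 * 1024, 1.0),
    "Vega 64 (2x HBM2 stacks)": (2 * 1024, 1.89),
    "2080 Ti (GDDR6)":          (352, 14.0),
}

for name, (pins, rate) in cards.items():
    total_gbps = pins * rate / 8
    print(f"{name}: {rate} Gbps/pin x {pins} pins = {total_gbps:.1f} GB/s")
```

Despite per-pin rates an order of magnitude lower, the wide HBM buses land in the same aggregate range as the narrow GDDR6 bus.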

JonasBeckman
Senior Member



Posts: 17562
Joined: 2009-02-25

#5582346 Posted on: 09/07/2018 06:41 PM
The bus is a lot wider on AMD's HBM cards, but speed is hindered a bit. Fury caps out at around 320 - 360 GB/s when measured, due to the core clock speed not keeping up, so it doesn't really reach the theoretical max of 512 GB/s; the 4096-bit (4x 1024) bus width is nice, but on its own it's probably not a deciding feature. (The 290X had a 512-bit ring bus design that was all the hype back when it launched; it didn't really compete directly with NVIDIA's offerings and was apparently pretty complex and costly for AMD.)

Vega manages somewhere around 480 GB/s, I think - Vega 64, that is - and even that is slightly held back, since the specs were originally 1 GHz at 1.2V rather than 945 MHz at 1.35V, though it's possible Vega 20 will have refresh chips hitting higher speeds without having to downclock and overvolt them.
(Clock-speed-wise, I think the Frontier Edition got it right: 1400 MHz at 1.0V, and it can boost a bit higher. 1600 MHz at 1.2V just generates a lot of extra heat, and that turbo mode is ridiculous in power usage versus what little gain you get.)

I guess that with Vega 20 being more of a server or workstation card, it might not matter too much for gaming and general consumer needs, as the workload is more specialized, although some of the improvements could carry over into Navi as a further tweak of the GCN arch - how much remains to be seen. :)
(The various other features the GPU lost probably also factor in, even if the overall performance gains might not have been quite all that AMD hyped them up to be, though it looks like primitive shaders are making a return with Navi, at least in some form.)


EDIT: Well, that and, as far as gaming goes, catching up to the 1080 Ti needs at least another 30% performance bump, and then between that and the 2080 Ti it could need at least another 20% to match, although I guess AMD isn't going to try for the top GPU performance position. It will be interesting to see what's next. (Coming in several months after the 2000 series from NVIDIA isn't going to help things either.)

Not too sure about Vega 20 either; I've heard a lot of this type of deep learning already goes via CUDA, so that makes it hard for AMD to get into this area, I suppose. Also something that will be interesting to hear more about.


EDIT: And I guess we might see more from Intel next year too, and what that card will bring.
Miners will have quite a few choices now, ha ha. Well, the 2000 series might actually exceed Vega now if it has faster memory, but I guess that activity has lessened a bit from how it was just a year ago.
(Well, until it inevitably flares up again and some other popular coin appears.)

Agent-A01
Senior Member



Posts: 11568
Joined: 2010-12-27

#5582364 Posted on: 09/07/2018 07:21 PM
(quoting Fox2232's post #5582332 above)

The only things HBM has going for it right now in current products are the smaller package and lower power usage.
I'm well aware that HBM saves a lot of space.

Realistically, neither of those is a big issue right now, which is why it hasn't seen widespread usage.

GDDR5X was around the 20-watt mark for total power usage, which isn't a big deal.
GDDR6 will be much more efficient.

Signal integrity in GDDR6 traces is a non-issue when a board is designed correctly.

As for 'judging memory technology based on bandwidth': you need to look back at my original post, where I said GDDR6 is faster than current cards with HBM.
That's all my argument was; I wasn't trying to say HBM is bad or anything.

BTW, stacking memory chips on both sides of the PCB does not help increase performance, nor is it necessary for increased VRAM size.
GDDR6 2 GB chips are possible, which would take the current 11 GB to 22 GB using the same 11 32-bit memory channels.
There is no need for dual-sided memory on the PCB.

Also, don't forget the GTX 285 days, when it had 16 32-bit memory channels; there is plenty of space left with current GPUs.

Anyway, I wish HBM3 were in all new cards, but that's not happening any time soon.
Too expensive for benefits that aren't necessary right now, at least for NV.

And lastly, memory bandwidth is very important, especially as core architectures get much faster.
Apparently it hasn't reached the point of needing several stacks of HBM for NV yet, though (the 2080 Ti being faster than the Titan V).
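The capacity claim above is simple arithmetic: one chip per 32-bit channel times per-chip density. A minimal sketch; the 1 GB (8 Gb) and 2 GB (16 Gb) densities are standard GDDR6 part sizes, assumed here:

```python
def vram_gb(channels_32bit: int, gb_per_chip: int) -> int:
    """Total VRAM assuming one memory chip per 32-bit channel."""
    return channels_32bit * gb_per_chip

print(vram_gb(11, 1))  # 11 (GB): 352-bit bus with 1 GB (8 Gb) chips
print(vram_gb(11, 2))  # 22 (GB): same 11 channels with 2 GB (16 Gb) chips
```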





Guru3D.com © 2022