Gigabyte Thunderbolt 3 - RX 580 Gaming Box
Gigabyte releases its RX 580 Gaming Box: same box, different graphics card. Basically, if you have a laptop with a Thunderbolt 3 connector, you can hook up an external graphics card this way. Gigabyte already offered the concept as the GTX 1070 and GTX 1080 Gaming Boxes; now an RX 580 model has been added.
It's a bit of a thing to carry around, though: it weighs in at 2 kg, since next to the graphics card the box also holds a 450W power supply. The unit connects over Thunderbolt 3 and thus requires a laptop with a TB3 port as well.
- Embedded Radeon™ RX 580 8G graphics card enables graphics-intensive games
- Thunderbolt™ 3 plug and play
- Easy to carry with portable size
- Supports Quick Charge (QC 3.0) and Power Delivery (PD 3.0)
- Supports 3x USB 3.0 for peripheral devices
- RGB fusion - 16.7M Customizable Color Lighting
The Gaming Box is also more compact than other external graphics alternatives. Its small form factor takes up very little space, and the included carrying bag makes it easy to store or carry on the go. On the side of the chassis there is customizable color lighting powered by RGB Fusion.
In addition to the graphics performance upgrade, the Gaming Box acts as a docking station. Aside from the HDMI, DisplayPort, and two dual-link DVI outputs for single or multiple external monitors, the device has three USB 3.0 ports at the back for connecting peripherals or external storage. There is also an extra Quick Charge 3.0 port that charges mobile devices at up to four times the speed of a conventional charger.
While a price has not been announced, expect something in the 500 USD/Euro range.
Senior Member
Posts: 5748
Joined: 2012-11-10
True, but that bottleneck is exactly why I feel a 1060 or 570 makes for a better option - you pay less without really any performance loss in most cases. Meanwhile, weaker GPUs use less power. This is important to factor in, since power bricks get disproportionately more expensive, large, and hot as you increase the wattage. From what I recall, the price really starts to climb once you breach 80W. If you go for an RX 550 or GTX 1050 Ti, those are efficient enough to not need any PCIe power connectors (and therefore should be sub-75W), and a power brick for those should be relatively cheap. They're also slow enough that, as long as you lower texture details, they should perform well over USB 3.2. They still might be a little too much for PCIe 3.0 x1 slots, though.
I'm not sure what your level of interest in all of this is, but there is something I could recommend that maybe isn't elegant, but is very cheap. There are products on eBay that convert M.2 slots into x4 PCIe slots. I think they cost something like $5 USD. You can also get M.2 riser cables. Combine these, buy some 12V power brick, and you could build your own eGPU solution for laptops at about 1/10 the price of these eGPU enclosures. Since you're feeding directly from PCIe, there is no latency loss. The downsides are the ugly appearance, the fragility, and the lack of hot-swapping.
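To put rough numbers on the brick-sizing point above, here's a minimal Python sketch; the TDPs are the reference board powers, and the 25% headroom factor is purely my own assumption:

```python
# Rough sizing sketch for the DIY eGPU route. TDP values are the
# reference board powers; the 25% headroom factor is an assumption.
SLOT_LIMIT_W = 75  # a PCIe x16 slot alone supplies at most 75W

CARDS = {          # card -> reference board power in watts
    "RX 550":      50,
    "GTX 1050 Ti": 75,
    "GTX 1060":   120,
    "RX 570":     150,
    "RX 580":     185,
}

for card, tdp in CARDS.items():
    needs_aux = tdp > SLOT_LIMIT_W   # would need PCIe power connectors
    brick_w = tdp * 1.25             # assumed headroom for spikes/losses
    print(f"{card:12} {tdp:3}W  aux connectors: {'yes' if needs_aux else 'no'}"
          f"  -> brick of at least {brick_w:.0f}W")
```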
Maybe someone with an nV card or older driver/OS can try, as it may be some bug with the insider build I have.
What were you testing? If you're not really doing anything, PCIe usage tends to stay at 0-1%.
BTW here's what I was referencing earlier:
https://www.techpowerup.com/reviews/NVIDIA/GeForce-GTX-1080-PCI-Express-Scaling/
They seem to do one of these tests every other year. Pretty interesting and comprehensive - I don't know of anyone else who does tests like this.
Senior Member
Posts: 312
Joined: 2017-09-02
Don't waste your money. If this is your only option, you need to rethink your life decisions.
Harsh but true.
Senior Member
Posts: 11733
Joined: 2012-07-20
It's just a generic technology interest. I saw those PCIe 3.0 x1 cables and the hacks people did to get them working. An M.2 slot to PCIe 3.0 x4 would in most cases have no issue. Except I think that native PCIe, in both cases, has trouble with turning the card on/off. Hot-swap behavior is unusual for PCIe in the consumer segment.
Anyway, the test you linked gives the following info:
PCIe 1.1 x4 (1 GB/s = 8 Gbps) loses around 33% of GTX 1080 performance in the worst-case scenario; best case (Doom) is a 19% loss.
PCIe 2.0 x4 (2 GB/s = 16 Gbps) loses around 15% of GTX 1080 performance in the worst-case scenario; best case (TR) is a 5% loss.
(And those are at 1080p, where a mobile CPU is more likely to be the bottleneck instead.)
So I would say that USB 3.2, at 20 Gbps, is quite acceptable.
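The per-lane math behind those bracketed numbers checks out; here's a minimal sketch of it, using the standard signaling rates and line encodings (nothing measured):

```python
# Per-lane PCIe bandwidth: transfer rate x line-coding efficiency.
GEN_RATE_GT = {"1.1": 2.5, "2.0": 5.0, "3.0": 8.0}        # GT/s per lane
ENCODING   = {"1.1": 8/10, "2.0": 8/10, "3.0": 128/130}   # 8b/10b, 128b/130b

def link_gbps(gen, lanes):
    """Usable link bandwidth in Gbps, before protocol overhead."""
    return GEN_RATE_GT[gen] * ENCODING[gen] * lanes

for gen in ("1.1", "2.0", "3.0"):
    print(f"PCIe {gen} x4 = {link_gbps(gen, 4):.1f} Gbps")
# PCIe 1.1 x4 =  8.0 Gbps  (the ~33% worst-case loss above)
# PCIe 2.0 x4 = 16.0 Gbps  (the ~15% worst-case loss)
# PCIe 3.0 x4 = 31.5 Gbps
# USB 3.2's 20 Gbps raw rate lands between the last two.
```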
Senior Member
Posts: 5748
Joined: 2012-11-10
I've done a few experiments myself with x1 slots and haven't had any trouble with them (in terms of the card turning on or off). But I've only tested such situations in Linux. I intend to try some tests with one of those M.2 to x4 converters, but I'm not sure when I'll get around to it. Like you, I'm more interested in the technology than actually having a need for it.
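For the Linux side, the negotiated link state can be read straight from sysfs; a minimal sketch (these are standard sysfs attributes on recent kernels, though the output format varies a bit by kernel version):

```python
# Print the current vs. maximum PCIe link speed/width of every device.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur_s = (dev / "current_link_speed").read_text().strip()
        cur_w = (dev / "current_link_width").read_text().strip()
        max_s = (dev / "max_link_speed").read_text().strip()
        max_w = (dev / "max_link_width").read_text().strip()
    except OSError:
        continue  # some devices don't expose link attributes
    print(f"{dev.name}: {cur_s} x{cur_w} (max {max_s} x{max_w})")
# A GPU behind an M.2 adapter should report x4 here even if the
# card itself is capable of x16.
```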
Don't forget, though - there's also the bandwidth of transferring the video signal back to the laptop, if you intend to use its internal display. That 5% loss in the best-case scenario will increase significantly. But as stated before, if you use a separate monitor, then yes, USB 3.2 ought to suffice.
Another thing to consider, which I forgot about, is that there's some overhead due to the USB translation layer, which will result in some performance loss. This is why, for example, there aren't any external USB 3.1 drives that can meet or exceed SATA III, despite the 4 Gbps lead. To my understanding, Thunderbolt is a lower-level connection than USB and doesn't suffer the same overhead.
Something I'm not too sure about is how bandwidth gets allocated by the device. More specifically, if you have a PCIe 2.0 GPU, to my understanding the GPU gets more bandwidth out of a 2.0 x16 slot than out of a 3.0 x8 slot, even though the total bandwidth of the two slots is roughly the same. From benchmarks I've seen, it seems Thunderbolt will actually split the bandwidth up into multiple lanes, but I'm not sure if a USB adapter will. If it can, that'd be awesome.
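That matches how link training works, as far as I know: the link comes up at the highest generation and width both ends support. A quick sketch of that min/min rule:

```python
GEN_RATE_GT = {1: 2.5, 2: 5.0, 3: 8.0}        # GT/s per lane
ENCODING   = {1: 8/10, 2: 8/10, 3: 128/130}   # line coding per generation

def negotiated_gbps(dev_gen, dev_lanes, slot_gen, slot_lanes):
    # The link trains to the lowest common generation and width.
    gen, lanes = min(dev_gen, slot_gen), min(dev_lanes, slot_lanes)
    return GEN_RATE_GT[gen] * ENCODING[gen] * lanes

# A PCIe 2.0 x16 GPU in a 2.0 x16 slot vs. a 3.0 x8 slot:
print(negotiated_gbps(2, 16, 2, 16))  # 64.0 Gbps (8 GB/s)
print(negotiated_gbps(2, 16, 3, 8))   # 32.0 Gbps (4 GB/s) - half, as said
```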
Anyway, interesting discussion. I personally find this stuff fun to think about - I hope I'm not coming across as annoying or arrogant, but I find you're good to bounce ideas off of.
Senior Member
Posts: 11733
Joined: 2012-07-20
I think what you said is heavily dependent on whether you're plugging a display directly into the eGPU or trying to use your laptop's display. If you expect to use the laptop's built-in display (especially at 1080p or higher) with high-res textures, to my understanding that soaks up a lot of your usable bandwidth, and that's where a 580 would be bottlenecked.
Keep in mind that in a lot of cases, a GTX 1080 (non-Ti) starts to see performance losses when going from x8 PCIe 3.0 lanes down to x4 lanes at various resolutions, when using a display connected straight to the GPU. PCIe 3.0 with x4 lanes is roughly 32 Gbps. If we dramatically generalize that an RX 580 uses proportionately less bandwidth than a 1080 (accounting for its lower performance), I don't think it's possible to consistently take advantage of its processing power over USB 3.2, regardless of which display is used. But I could be wrong.
While true, that bottleneck may still be very acceptable as long as the graphics card does not need to load a bunch of textures from system memory every frame. For years, people have used PCIe 2.0 and 3.0 x1 slots in notebooks (meant for Wi-Fi and such) for external PCIe GPUs. Ugly, but a relatively cheap solution. And it somewhat works.
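For what it's worth, here's a heavily assumption-laden back-of-envelope for the generalization quoted above; the RX 580 / GTX 1080 performance ratio is my own guess, not a benchmarked number:

```python
GTX1080_X4_GBPS = 31.5   # PCIe 3.0 x4, where the 1080 already loses a bit
PERF_RATIO = 0.6         # ASSUMED RX 580 performance relative to a GTX 1080
USB32_RAW_GBPS = 20.0    # USB 3.2 Gen 2x2 raw signaling rate

need = GTX1080_X4_GBPS * PERF_RATIO
print(f"RX 580 estimated need: ~{need:.0f} Gbps vs. {USB32_RAW_GBPS:.0f} Gbps raw")
# ~19 vs. 20 Gbps, and USB protocol overhead eats into the 20 - so the
# 580 would plausibly sit right at the limit either way.
```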
I tried to check actual IO utilization, but HWiNFO64 always shows this value as 0%. So, a lovely configuration with a graph displayed over time, and nothing to be shown :/
Maybe someone with an nV card or older driver/OS can try, as it may be some bug with the insider build I have.