Samsung Increases Production of 8Gb HBM2 Memory

Yeeeees, finally we will see Vega graphics cards. Too bad they will cost a lot, consume a ton of power and only match the GTX 1080.
Nice, four posts. Which 1080 is capable of 13 TFLOPS SP and 25 TFLOPS at half precision?
Next gen APUs, please. Tiny and efficient.
Capital B is byte; lowercase b is bit. The title has a lowercase b, but the article says byte.
Nice, four posts. Which 1080 is capable of 13 TFLOPS SP and 25 TFLOPS at half precision?
TFLOPS mean nothing when comparing Nvidia and AMD. Nvidia often has fewer TFLOPS yet outperforms AMD in games, so it isn't the best comparison.
^This. If you look at the Xbox One X vs. the PS4 Pro, there is a huge difference in TFLOPS, yet the Xbox One X is not as powerful as the PS4 Pro because of the huge CPU bottleneck in the Xbox One X.
Nice, four posts. Which 1080 is capable of 13 TFLOPS SP and 25 TFLOPS at half precision?
Actually, most 1080s are around 10 TFLOPS, because most 1080s boost to around 1920 MHz out of the box. Likewise, a 1080 Ti's SP rate is more like 13~14 TFLOPS out of the box with 1.9 GHz boost clocks, if I'm doing my math right.
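For anyone checking that math, a minimal sketch of the peak-throughput formula being used (2 FLOPs per CUDA core per clock for FP32 FMA). The 2560/3584 core counts are the standard GTX 1080 / 1080 Ti configurations; the boost clocks are the rough out-of-the-box figures quoted above, not official spec.

```python
# Peak FP32 throughput in TFLOPS: 2 FLOPs (one fused multiply-add) per core per clock.
def peak_sp_tflops(cores: int, clock_mhz: float) -> float:
    return 2 * cores * clock_mhz * 1e6 / 1e12

print(peak_sp_tflops(2560, 1920))  # GTX 1080:    ~9.8 TFLOPS
print(peak_sp_tflops(3584, 1900))  # GTX 1080 Ti: ~13.6 TFLOPS
```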
TFLOPS mean nothing when comparing Nvidia and AMD. Nvidia often has fewer TFLOPS yet outperforms AMD in games, so it isn't the best comparison.
I realize this. I guess it's hard to tell when a guy named Exascale is trolling someone who's an obvious troll. In all seriousness, I know that peak theoretical TFLOPS means nothing for either gaming GPUs or supercomputers, and I often bring up the K computer as the perfect example of this, since it has about 10 PFLOPS and outperforms the 100 PFLOPS TaihuLight in more relevant benchmarks than HPL, like HPCG and Graph500. The whole deal with exascale computing is that it's NOT about FLOPS anymore. It's about things like byte/FLOP ratios and increasing efficiency by reducing communication. Nvidia talked about this extensively with their MCM GPU. Future architectures are focusing much more on memory and interconnect bandwidth.

I'm not positive about the Vega architecture being good or not. The value it offers for deep learning will probably be pretty high. V100 clearly has everyone beat until Fujitsu, NeuroStream and Intel release their real machine-learning architectures, BUT it's also $13,000. I doubt those others will be cheap either. Vega is pretty cheap for the full chip and offers GP102 performance in terms of SP math and half precision. Turning on ECC with HBM doesn't require anything that costs money or performance, so they can charge a lot less than a P40 or P6000 does.

Vega has the potential to do A LOT, since it uses Infinity Fabric and so do AMD's CPUs. This is something people seem to forget. Up until now, gaming and workstation GPUs and their host CPUs have only talked over ancient and inefficient protocols. Now that AMD is making an x86 CPU that's competitive and also talks more directly to the GPU, I think we can expect Vega to do well.

Another nice fact that I haven't seen ANY tech sites talking about is the address spaces. Ryzen, Threadripper and Epyc apparently have 48-bit physical addressing. Vega has 49-bit virtual addressing and 48-bit physical. That means you can build a large shared-memory heterogeneous system. To be fair, Nvidia has the same address-space size on GP100 and V100 and they did it first, but AMD is now doing it on a GPU that costs a fraction of what GP100 or V100 do. While that last fact about address space may seem irrelevant to gamers, it's not. It means that people building huge systems can use Vega, which means AMD can sell more, which means lower prices for consumer parts based on the same chip (even though they're already low considering what they are).
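To put a number on that address-space point, a quick back-of-the-envelope sketch; the 48-bit/49-bit widths are the figures quoted above, taken at face value.

```python
# How much memory a flat address space of the given width can cover, in TiB.
def addressable_tib(address_bits: int) -> int:
    return 2**address_bits // 2**40

print(addressable_tib(48))  # 48-bit physical addressing -> 256 TiB
print(addressable_tib(49))  # 49-bit virtual addressing  -> 512 TiB
```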
^This. If you look at the Xbox One X vs. the PS4 Pro, there is a huge difference in TFLOPS, yet the Xbox One X is not as powerful as the PS4 Pro because of the huge CPU bottleneck in the Xbox One X.
Uh... no. The Xbox One X is more powerful than the PS4 Pro. Its CPU is clocked higher than the PS4 Pro's, and yes, the CPU is a bottleneck (for achieving higher FPS), but otherwise the Xbox One X is more powerful in GPU, memory speed and memory capacity, and, as said before, it has a small increase in CPU clock over the PS4 Pro.
Sadly, the difference is much smaller than it should be due to the CPU bottleneck, though the same can be said about the PS4 Pro too... The CPU might not be too bad if it actually ran at 3.5 GHz or something higher; that would make a world of difference. But hey, maybe the "Xbox One X 2 One" and the PS5 will change that 😀 I'm mostly hoping now that the next console generation will all be backwards compatible, since it's on a 64-bit architecture.
Consoles have been degenerating in terms of their power and technology relative to the state of the art for a while now. I wouldn't expect the next generation to be a proportionally huge leap within a couple of years. It used to be that consoles were leading-edge tech, and right up to the PlayStation 2 they were made in Japan and lasted forever. Console hardware used to be SO high-tech that it was sold at a significant loss, despite the dollar having better purchasing power back in the 80s and 90s. The N64's CPU is a perfect example of this. The companies that made consoles actually turned a profit by selling games.

Now, the purchasing power of most Western currencies is complete garbage. Consoles went from exotic architectures to low-end generic x86 PC hardware, cheaply made by the lowest bidder in China. That "race to the bottom" has manifested itself as consoles pushing 30 fps at 720p or 1600x900 or whatever. And I remember watching a video long ago about the Xbox One or PS4 saying that they do sell the hardware at a profit now. The business model for consoles has fundamentally shifted for the worse.
Man, I love tech... I mean, my PC proves this, but why all this HBM2? GDDR5X is still enough for today's PCs. This tech isn't even being taken advantage of. You can throw it in any and all cards you want, but it's being wasted without apps and games taking full advantage of it. All that bandwidth isn't doing what people think it is. I can understand future-proofing, but in this case, by the time HBM becomes widely accepted and programmed for, the cards will be too slow core-wise.

Seriously though, it's awesome stuff; I'm just still not impressed. Wasted bandwidth, IMHO. The only way I'll say otherwise is when I see more games and apps that are programmed to take full advantage of it. Honestly, hardly anything does now unless it's a tech demo.

Now, if this HBM2 stuff were thrown into a console... man, that would be a TOTALLY different story. They would take advantage of it before a PC would. For example, GDDR5X can stand up to HBM2 on a console, but I can see a console taking advantage of HBM before a PC, since games and apps are programmed FOR the consoles. Unless we get more games and apps built FOR PC that take advantage, I don't see the point. IF there is another generation of consoles, they really need to look into HBM2 or whatever is out there at the time.

So what I'm getting at is: people should not "upgrade" their cards from GDDR5X or even GDDR5 just to get HBM2 when it will sit largely unused without anything programmed to use it. Will it still be fast? Yes. But not how people think it may be.
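For context on how much raw bandwidth is actually being argued about here, a rough sketch of how peak memory bandwidth falls out of bus width and per-pin data rate. The configurations are illustrative assumptions: a GTX 1080-style 256-bit GDDR5X bus at 10 Gbps, and two 1024-bit HBM2 stacks at 2.0 Gbps per pin (the speed grade commonly cited for 8GB HBM2 stacks).

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits / 8 * gbps_per_pin

print(peak_bandwidth_gbs(256, 10.0))      # 256-bit GDDR5X @ 10 Gbps:       320 GB/s
print(peak_bandwidth_gbs(2 * 1024, 2.0))  # two HBM2 stacks @ 2.0 Gbps/pin: 512 GB/s
```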
Yeeeees, finally we will see Vega graphics cards. Too bad they will cost a lot, consume a ton of power and only match the GTX 1080.
That is pretty much what I'm getting at too. Ryzen is great stuff, but these cards having HBM2 thrown on them and only matching Nvidia's last-gen flagship proves what I'm saying. Memory with that bandwidth should, looking at the specs, blow anything out of the water. If they want it to matter, they need games and apps programmed to actually use it. And if, after doing this, they still charge GTX 1080 prices, they are crazy. They won't be able to sell many at all, only to true fans. Ryzen, on the other hand... I don't blame anybody going for those.
Well, with AMD having experience with HBM2 now, a GPU architecture using it, an upcoming CPU architecture, and possibly a new generation of on-die GPUs for those, that does put them in a favorable position for the next console hardware refresh. 🙂 Of course, Nvidia might want to get involved, or MS and Sony (or Nintendo) could go with something else, so you never know. (Nvidia also uses HBM2, though to a lesser extent, for their top-end Pascal Quadro - or was it Tesla? - series of cards.) The bandwidth improvements, and as a bonus the lower power usage, would work pretty well in a console after all. It's going to be a while until the next console refresh though, or so I'd guess at least; whether that's the Xbox One ? / PS4 ? or a fully new console remains to be seen. (It's getting about time for some new hardware, but it'll probably be a few years at least.)
Yeah, I could totally see HBM in consoles before PCs - that is, in terms of HBM actually being taken advantage of. I really don't think Nvidia has gone the HBM route on PC yet because of the whole idea that nothing really utilizes it. Nvidia will make the switch, if they do, when it's advisable. HBM just sounds like a console fit for that reason; bandwidth is something consoles always seem to be lacking in one area or another.
AMD seems to insist on HBM2, and in general AMD has not been that far off with their choices, maybe just a bit too early at times. I mean, they started from a blank sheet of paper to design a new CPU that can scale like that with Infinity Fabric; they saw before everyone else that they could not keep scaling CPUs on a monolithic design. Bulldozer might have been a failure, but their idea there was also that the future is multi-core, and they were right again. Who knows, HBM might be something that disappears soon, or the dominant graphics memory in a decade or less. We will see; they must have their reasons for insisting on it.
Lots of people know. HBM, and 2.5D integration in general, is the next necessary step. AMD is using it because they don't have as many GPUs as Nvidia, and because, when properly used, it's much better than any GDDR SGRAM. GDDR5 is based on ancient DDR3 and really isn't that efficient. Reducing communication to save power is partially the hardware's responsibility and partially software's. This is something AMD knows and has been working on for years.
Exascale, I myself believe that eventually HBM might straight up replace GDDR memory, although GDDR is not done by a long shot; with GDDR6 coming soon it surely still has a place in high-end graphics cards, and the 1080 Ti and 1080 with GDDR5X proved that. It's an interesting topic though; curious to see how it will work out in the end!
Lots of people know. HBM, and 2.5D integration in general, is the next necessary step. AMD is using it because they don't have as many GPUs as Nvidia, and because, when properly used, it's much better than any GDDR SGRAM. GDDR5 is based on ancient DDR3 and really isn't that efficient. Reducing communication to save power is partially the hardware's responsibility and partially software's. This is something AMD knows and has been working on for years.
Exascale, I myself believe that eventually HBM might straight up replace GDDR memory, although GDDR is not done by a long shot; with GDDR6 coming soon it surely still has a place in high-end graphics cards, and the 1080 Ti and 1080 with GDDR5X proved that. It's an interesting topic though; curious to see how it will work out in the end!
Okay, here is the thing. For gaming there is no reason to abandon GDDR, even if it is based off DDR3. With GDDR5X it has been proven on the 1080 and 1080 Ti that the increased latency of the very loose timings does not negatively affect gaming performance. I don't see GDDR going away in gaming-focused cards, and seeing as it is cheaper than HBM, it will continue to be the standard. BTW, what is GDDR6 based on, DDR3 or DDR4?
Well, loop, not soon at least; it still delivers plenty of performance. And GDDR6... I have no idea; my best guess would be DDR4, but that's just my guess :P Come to think of it, GDDR5 is getting on for a decade old - it was first used on the Radeon HD 4870!
Okay, here is the thing. For gaming there is no reason to abandon GDDR, even if it is based off DDR3. With GDDR5X it has been proven on the 1080 and 1080 Ti that the increased latency of the very loose timings does not negatively affect gaming performance. I don't see GDDR going away in gaming-focused cards, and seeing as it is cheaper than HBM, it will continue to be the standard. BTW, what is GDDR6 based on, DDR3 or DDR4?
Given its specs, I think it's based on DDR4. It's double the density and transfer rate of GDDR5, so it sounds DDR4-based to me.

Consumer-focused stuff is always going to use cheap old tech when it can. The same goes for manufacturing processes. Most consumer-level stuff is flimsy junk compared to anything professional, and computers are no exception. That phenomenon has gotten a lot worse since the turn of the century and continues to get worse over time. Even Nvidia diverged their pro and consumer product lines completely, and it began with the Kepler line making the 560 Ti successor chip the new x80 card's chip, then releasing the big GK110 as the Titan (the real 580 successor) at nearly double the price, then the 780 and 780 Ti. The 900 series continued this trend, disguising the 102 chip as 200 and calling it the 980 Ti. Now we have the $3 billion R&D V100 for $13,000, and consumers think their 104 and 102 chips are high tech. That used to happen the opposite way, by the way: high-tech stuff would trickle down, rather than consumer stuff building the launchpad for supercomputer parts. It's an economic phenomenon brought about by perverse incentives that have resulted in currencies becoming essentially worthless, hence prices going up significantly rather than remaining stable or dropping.

All of this works because average consumers are indoctrinated to be satisfied with the illusion that marketing creates for them: the illusion that quality is unimportant and a few numbers are the real selling points. Never mind what a thing really is, represents, or does to the market. My FAVORITE examples of people rushing out and buying junk that's worse than what they had before are TVs and light bulbs. During the great recession, almost all TVs and monitors went from WCG-CCFL or RGB LED backlights to cheap WLED backlights, and TVs got significantly worse. The funny thing is, "LED TVs" were sold as if they were new and better, even though they were disgustingly bad. TV makers sold consumers a worse product and consumers fell for it. Same thing with light bulbs. Consumer LED and CFL bulbs are worse in every way than incandescent bulbs. Light bulb makers figured out a scheme to get people to adopt fire-hazard, eye-destroying bulbs by marketing them correctly. There are a lot of incredibly expensive LEDs for professional film use, but even so, a lot of cinematographers light color-critical scenes with 100 CRI, no-metamerism tungsten incandescent bulbs. But hey, those eye-damaging fire-hazard LEDs and CFLs have a MASSIVE profit margin compared to better incandescents. Now you have people buying TCL TVs with no clue that the company started out as a counterfeit goods maker, and hazardous light bulbs that cost 10x more than they should.

So yeah, we will have ancient tech in the consumer space while supercomputers and hyperscale customers have 2.5D and 3D ICs. The race to the bottom is the new normal, and not many consumers care or know enough to fix it. Personally, I love the fact that Ryzen is a quarter of an Epyc with little compromised, and the fact that Vega is actually AMD's highest-tech GPU chip. Oh, and Intel using HBM on Knights Hill is confirmed.
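On the GDDR6 point at the top of that post, a small sketch of the "double the density and transfer rate" comparison; the per-die density and per-pin speed figures below are the headline numbers announced for each generation as I recall them, so treat them as approximate.

```python
# (per-die density in Gb, per-pin data rate in Gbps) for each GDDR generation.
gddr_specs = {
    "GDDR5":  (8, 8.0),
    "GDDR5X": (8, 10.0),   # faster bins (11-12 Gbps) came later
    "GDDR6":  (16, 16.0),  # launched around 14 Gbps, with 16 Gbps parts announced
}

for name, (density_gb, gbps) in gddr_specs.items():
    # Bandwidth a 256-bit bus built from this memory would deliver, in GB/s.
    print(f"{name}: {density_gb} Gb per die, {256 / 8 * gbps:.0f} GB/s on a 256-bit bus")
```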