AMD Ryzen 4000 Pro 4350G, 4650G and Ryzen 7 4750G APUs Pop Up at Distributor
AMD's Renoir-based APUs are getting closer and closer; the Zen 2 processors with integrated graphics will first make their way into the business channel in the form of Pro-series APUs.
Three SKUs have now been spotted on the website of a US-based distributor specializing in PC components.
- Ryzen 3 Pro 4350G will be the least powerful, offering four cores and eight threads at up to 4.1 GHz. It is listed for $141.
- Ryzen 5 Pro 4650G reportedly has six cores and 12 threads at up to 4.3 GHz, with 11 MB of cache. It is listed for $204.
- Ryzen 7 Pro 4750G will get eight cores and 16 threads at up to 4.4 GHz, with 12 MB of cache, according to the product information on the site. This chip will probably also get the fully enabled integrated GPU. It is listed at $302.
The specifications are in line with what we have heard before, and AMD is likely to make official announcements on these soon.
schmidtbag
Senior Member
Posts: 7261
Joined: 2012-11-10
#5807341 Posted on: 07/09/2020 10:50 PM
@bobblunderton
Don't get me wrong, for CPUs, the bigger L3 makes a noticeable difference. But we're talking about an APU here. That L3 cache is nowhere near enough to feed a GPU. It will make a difference but the GPU is still going to be bottlenecked by memory bandwidth.
I hope AMD adds more memory channels for AM5 because I feel the jump to DDR5 isn't going to be sufficient.
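(For a rough sense of the gap being described here, a quick back-of-envelope sketch in Python; the memory speeds and channel counts are illustrative assumptions, and the 2070 Super number is simply its published peak bandwidth.)

```python
# Peak theoretical memory bandwidth of a few DRAM configurations next to a
# discrete GPU, to show what a socketed APU has to work with. Illustrative only.

def dram_bandwidth_gbs(transfer_rate_mts: int, channels: int, bus_width_bits: int = 64) -> float:
    """Peak DRAM bandwidth in GB/s: transfers/s x channels x bytes per transfer."""
    return transfer_rate_mts * channels * (bus_width_bits / 8) / 1000

configs = {
    "DDR4-3200, dual channel (shared by CPU + iGPU)": dram_bandwidth_gbs(3200, 2),
    "DDR4-3200, quad channel (hypothetical AM5)":     dram_bandwidth_gbs(3200, 4),
    "DDR5-6400, dual channel (assumed DDR5 speed)":   dram_bandwidth_gbs(6400, 2),
    "RTX 2070 Super, 256-bit GDDR6 (published peak)": 448.0,
}

for name, bandwidth in configs.items():
    print(f"{name:48s}: {bandwidth:6.1f} GB/s")
```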
bobblunderton
Senior Member
Posts: 416
Joined: 2017-02-15
#5807441 Posted on: 07/10/2020 06:29 AM
Normally I would say we had no chance of AMD adding more memory channels to the AM5 platform; however, with the TRX80 chipset coming out with 8-channel memory, we just may get lucky. I still highly doubt it, though.
You want more memory on the CPU package to feed the graphics hardware inside it, and that's really only possible with HBM-style setups currently; otherwise you end up with a whole product stack priced too close together to make sense, full of mostly-salvaged parts. The 128 MB cache on Broadwell caused rather serious issues, from what I remember hearing from people in manufacturing, in the yields department anyway. The bigger you make the cache, the higher the chance of a defect and hence the higher the cost; they'd likely price themselves out of the very market they target. It wouldn't surprise me if these chips were made with 16 MB of L3 cache with a small amount (4 MB) set aside as defect padding, to make sure more chips meet the criteria for market.
Yes, I 100% agree it's going to be bandwidth-starved when it goes out to memory, as APUs almost always have been. That is what sells dGPUs, though. That being said, I'd absolutely love having 2070 Super-class performance (that's the GPU in this machine) on my 3950X; I'd be absolutely thrilled with it. Can't have everything we want, though.
schmidtbag
Senior Member
Posts: 7261
Joined: 2012-11-10
#5807592 Posted on: 07/10/2020 02:42 PM
Same, but even just a 3rd memory channel would make all the difference. Any more than 4 would be pointless, because one of the most appealing things about an APU is compactness, and you're not going to be able to easily fit 4 channels of memory on an ITX motherboard, for example (without making serious sacrifices). Even SO-DIMMs might be tricky to fit.
I agree with all of that, though it is worth pointing out that the performance improvements on Broadwell were tremendous. Really, the point I'm trying to drive home here is that AMD can't make progress until they address the bandwidth issue. Personally, I think the most realistic option is an optional software tool that can heavily compress game asset data; depending on the game, the decompression overhead would be less than the memory bandwidth bottleneck. It seems that no matter how much you overclock the RAM, the frame rate goes up proportionately, suggesting the GPU is just sitting there doing nothing most of the time.
Well, yeah, but what I'm getting at here is that not even their worst iGPUs have enough bandwidth. We're nowhere close to having 2070 Super performance on an iGPU when we're not even getting Vega 8 performance out of an APU that comes with a Vega 8. And that's the problem I'm trying to address here: we're not going to see more progress in APUs until memory bandwidth isn't such a serious problem.
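(As a toy illustration of the "frame rate scales with the RAM overclock" observation: a minimal sketch that assumes the iGPU is purely bandwidth-bound and that each frame moves a fixed, made-up amount of memory traffic.)

```python
# Toy model of a fully bandwidth-bound iGPU: frame time is set entirely by how
# fast memory can feed the GPU, so FPS scales linearly with memory clock.
# The traffic-per-frame figure is hypothetical, chosen only for illustration.

TRAFFIC_PER_FRAME_GB = 0.6  # assumed GB of memory traffic per rendered frame

def fps_if_bandwidth_bound(transfer_rate_mts: int, channels: int = 2) -> float:
    bandwidth_gbs = transfer_rate_mts * channels * 8 / 1000  # 64-bit channels
    return bandwidth_gbs / TRAFFIC_PER_FRAME_GB

baseline = fps_if_bandwidth_bound(2400)
for speed in (2400, 2666, 3200, 3600, 4000):
    fps = fps_if_bandwidth_bound(speed)
    print(f"DDR4-{speed}: ~{fps:5.1f} FPS ({fps / baseline:.2f}x the DDR4-2400 result)")
```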
bobblunderton
Senior Member
Posts: 416
Joined: 2017-02-15
#5807702 Posted on: 07/10/2020 08:59 PM
Game assets are normally compressed during development, and sometimes at run time. Things like compressing textures to .dds formats can be done during development, and compressing models can be done around the time the shaders are built at run time. These things are already done in most games (not all, but most). I do game development, so yes, it pays to know these things.
I wish I could put into words how difficult it is to fit a modern city into 8 GB of VRAM and not have every block repeat like a chase scene in a Hanna-Barbera cartoon. Let's just say it's really difficult. You have to pull out all the stops you can think of to get it looking even half decent.
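(For a rough sense of why block compression, the .dds/BCn route mentioned above, matters so much for an 8 GB budget, a quick bit of arithmetic; the texture size and count are assumptions picked for illustration.)

```python
# Rough VRAM arithmetic for 4K (4096x4096) textures: uncompressed RGBA8 versus
# the block-compressed formats usually stored in .dds files (BC1 = 4 bits/pixel,
# BC7 = 8 bits/pixel). A full mip chain adds roughly one third on top.

PIXELS = 4096 * 4096
MIP_FACTOR = 4 / 3  # a full mip chain is ~1.33x the size of the base level

def texture_mib(bits_per_pixel: int) -> float:
    return PIXELS * bits_per_pixel / 8 * MIP_FACTOR / (1024 ** 2)

for name, bpp in (("RGBA8 uncompressed", 32), ("BC7 compressed", 8), ("BC1 compressed", 4)):
    per_texture = texture_mib(bpp)
    print(f"{name:20s}: {per_texture:6.1f} MiB each, "
          f"{per_texture * 200 / 1024:5.2f} GiB for 200 such textures")
```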
The rest of what you said, yeah, I'm not going to argue with that one. Heck, even my RX 480 was memory-bandwidth-starved with a 256-bit link to GDDR5; I don't know if the 2070 Super is, as I haven't tried, I just left it at stock settings. They can help APUs by working on the latency of getting to system RAM once the cache is full, but really, yes, we DO need a better link there... however, to that end, you might as well just put it all on a card so that the price is reasonable.
DDR5 memory is supposed to be here around 18 months from now, so maybe it'll help, though I wouldn't expect it in OEM PCs until a few months to a year after that. I have absolutely no clue what the release dates are looking like or what will come when; it's much too early to tell.
You'd likely need a much better interface, likely proprietary (to loosen design restrictions), or a very high-pin-count CPU socket (look at all the pins around the GPU core on the back of a video card) to enable a wider link to some very expensive memory that doesn't exist yet. That complexity gets expensive really fast, so you'd end up right back where you started: get a dGPU. Consoles did it by putting a decently powerful GPU on the CPU and using GDDR5 as memory for the entire system, so it can be done. It just wouldn't work with any of the motherboards out there right now.
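(For comparison, a few published peak-bandwidth figures for console-style unified GDDR pools next to a dual-channel DDR4 desktop; the desktop figure is a theoretical peak.)

```python
# Published peak memory-bandwidth figures for unified-memory consoles compared
# with a dual-channel DDR4 desktop, to show what a GDDR-backed "APU" gets to
# work with versus a socketed one sharing ordinary DIMMs.

systems_gbs = {
    "Dual-channel DDR4-3200 desktop": 3200 * 2 * 8 / 1000,  # ~51.2 GB/s theoretical
    "PS4 (256-bit GDDR5)":            176.0,
    "Xbox One X (384-bit GDDR5)":     326.0,
    "PS5 (256-bit GDDR6)":            448.0,
}

for name, bandwidth in systems_gbs.items():
    print(f"{name:32s}: {bandwidth:5.1f} GB/s")
```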
bobblunderton
Senior Member
Posts: 416
Joined: 2017-02-15
Hooray, finally some good new processors to make a few new e-mail computers for around the house.
The new chips* top out at 12 MB of cache, unlike the 3xxx-series Matisse chips. The cache they do have, though, is available with less latency per 8 cores / 16 threads, due to not having to 'hop' past the divider for the L3 cache like on 6-core-or-above Matisse designs, where 16 MB of L3 was allotted per 4-core group.
So if you have software that uses 8 real cores, it will run faster, because it will process quicker and also sync faster with other threads of the same app, due to having less latency and less time wasted by the core while the data it needs (or is done with) hops around the chip. This will also be the case for the 4xxx-series non-APU designs, though apps using more than 8 cores will still hit a latency penalty of some sort due to the core design.
Conversely, as you go higher up the Intel stack, you have basically the same type of interconnect situation (a mesh in place of the single and dual ring-bus designs), so it's not a big deal; it's just something one must deal with when running mega-core-count chips beyond 8 cores.
Mesh designs are advantageous if you don't have slow memory and you actually USE 8 or more threads, as that kind of load would saturate a single ring bus, and eventually a dual ring bus past 8 cores, if they are very busy.
Even in day-to-day computing I could feel a big difference, partly due to the copious amount of L3 cache, going from my old 4790K (which was unpatched) to my 3700X, and now to this 3950X I put in a week ago. Content creation, such as building stuff with World Machine, benefits hugely from having lots of cores.
12 MB of lower-latency cache vs 32 MB of higher-latency cache will do just fine; it's a pretty even trade-off, and it generally works out for the better unless your app is cache-starved and using a heck of a lot of cores.
To AMD's credit: the on-chip Matisse L3 cache, even with the latency complaints going around, is usually consistently faster than the Intel chips I've tested here, so the latency cries are pretty much addressed with Matisse and things are even better on the 4xxx chips, APUs or not. They still have much room for improvement in future generations, though, and I don't expect AMD to sit on its laurels anytime soon.
*XT models still have the same amount of cache as other Matisse models and are just slightly better-binned chips. This is listed for clarification purposes and is separate from the G-series APU chips.
Three cheers for competition in the CPU market before we all get too old to build PCs!