Ryzen 7 4700G and Ryzen 5 4400G 3DMark performance benchmarks spotted

Until we get better RAM we won't see a decent improvement in APU graphics, so I assume the 5400G and 5700G next year will have similar scores to this.
Picolete:

Until we get better RAM we won't see a decent improvement in APU graphics, so I assume the 5400G and 5700G next year will have similar scores to this.
Not necessarily. A [much] bigger cache and better memory compression would make an immense performance difference. But if those things don't change then yeah, you're right.
This is pretty much the only legit consumer use case for HBM. A small 2 or 4 GB HBM module wired to the package would change everything with APUs, and could probably be done without even needing a new socket. The question is, is there even a market for a high performance APU, especially if the price were higher? Maybe in laptops, if the price could be competitive with the current low-end gaming systems with a CPU + GTX1050.
schmidtbag:

Not necessarily. A [much] bigger cache and better memory compression would make an immense performance difference. But if those things don't change then yeah, you're right.
I was kind of expecting to see triple-channel memory at some point, which would be 50% more bandwidth. Still, I think 4266 MT/s RAM in an APU should be good, but CPU performance would be impacted by running async with the fabric. I guess once we see it, people will try it out.
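The 50% figure above is just channel arithmetic. A quick sketch, using illustrative DDR4 numbers (each channel has a 64-bit, i.e. 8-byte, bus):

```python
# Theoretical peak memory bandwidth: channels * transfer rate * bytes per transfer.
# Numbers are illustrative, not a claim about any specific APU.

def peak_bandwidth_gbs(channels: int, mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s for a DDR-style memory setup."""
    return channels * mt_per_s * bytes_per_transfer / 1000

print(peak_bandwidth_gbs(2, 3200))  # dual-channel DDR4-3200: 51.2 GB/s
print(peak_bandwidth_gbs(3, 3200))  # hypothetical triple channel: 76.8 GB/s (+50%)
print(peak_bandwidth_gbs(2, 4266))  # the 4266 MT/s case: ~68.3 GB/s
```

So dual-channel 4266 MT/s gets you a fair way toward the triple-channel-3200 figure without a new socket.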
illrigger:

This is pretty much the only legit consumer use case for HBM. A small 2 or 4 GB HBM module wired to the package would change everything with APUs, and could probably be done without even needing a new socket. The question is, is there even a market for a high performance APU, especially if the price were higher? Maybe in laptops, if the price could be competitive with the current low-end gaming systems with a CPU + GTX1050.
100% agree! AMD are also in a prime position to do this, and quickly too. Using their chiplet design they could use two 4c/8t CCXs, an I/O die, and a fourth die for the HBM... Stick a 2GB, 4GB, or even 8GB HBM chiplet on there. You could then have any CPU core configuration from 2c/4t, 4c/8t, 6c/12t up to 8c/16t (just disable cores as needed). You have the I/O die for the CPU cache and controllers, so it would make sense to use the extra space for an HBM module. Obviously they could still use system RAM for increased VRAM and stream in assets/data when needed, or just use system RAM as VRAM while the iGPU is in an idle state (on the desktop, or doing simple tasks) and then switch over to the HBM when a fullscreen 3D application kicks in. Wishful thinking... or "mental masturbation" as I like to call it :P
More interested in the 3400G, TBH; better IGP performance is more useful to me.
CPC_RedDawn:

100% agree! AMD are also in a prime position to do this, and quickly too. Using their chiplet design they could use two 4c/8t CCXs, an I/O die, and a fourth die for the HBM... Stick a 2GB, 4GB, or even 8GB HBM chiplet on there. You could then have any CPU core configuration from 2c/4t, 4c/8t, 6c/12t up to 8c/16t (just disable cores as needed). You have the I/O die for the CPU cache and controllers, so it would make sense to use the extra space for an HBM module. Obviously they could still use system RAM for increased VRAM and stream in assets/data when needed, or just use system RAM as VRAM while the iGPU is in an idle state (on the desktop, or doing simple tasks) and then switch over to the HBM when a fullscreen 3D application kicks in. Wishful thinking... or "mental masturbation" as I like to call it 😛
As much as I would like to see it just for the heck of it, I don't think there is a market for it. Also, you would be adding another heat generator - although HBM is very power efficient and doesn't generate that much heat, it is still something to consider. On the other hand, an HBM stack on die could be used as another level of cache for the CPU. I could see it in a high-end product like a Threadripper, where fast memory access is important.
Vananovion:

As much as I would like to see it just for the heck of it, I don't think there is a market for it. Also, you would be adding another heat generator - although HBM is very power efficient and doesn't generate that much heat, it is still something to consider. On the other hand, an HBM stack on die could be used as another level of cache for the CPU. I could see it in a high-end product like a Threadripper, where fast memory access is important.
Can't speak for the current HBM generation, but the four stacks of HBM1 on the Fury X ate 13.5 W under load at stock settings. I managed to pull 18.5 W when testing various ways to OC them. Compared to many GDDR6 misadventures, that's super power efficient.
Vananovion:

As much as I would like to see it just for the heck of it, I don't think there is a market for it. Also, you would be adding another heat generator - although HBM is very power efficient and doesn't generate that much heat, it is still something to consider. On the other hand, an HBM stack on die could be used as another level of cache for the CPU. I could see it in a high-end product like a Threadripper, where fast memory access is important.
I totally agree. I don't think it's something we will see even in the next 2 or 3 generations of AMD APU products. But once HBM becomes cheaper to make and even more efficient, I think AMD are in the prime position with their tech to pull this off. If they are going to do something along these lines, they needed to start R&D like a year ago lol.
CPC_RedDawn:

100% agree! AMD are also in a prime position to do this, and quickly too. Using their chiplet design they could use two 4c/8t CCXs, an I/O die, and a fourth die for the HBM... Stick a 2GB, 4GB, or even 8GB HBM chiplet on there. You could then have any CPU core configuration from 2c/4t, 4c/8t, 6c/12t up to 8c/16t (just disable cores as needed). You have the I/O die for the CPU cache and controllers, so it would make sense to use the extra space for an HBM module. Obviously they could still use system RAM for increased VRAM and stream in assets/data when needed, or just use system RAM as VRAM while the iGPU is in an idle state (on the desktop, or doing simple tasks) and then switch over to the HBM when a fullscreen 3D application kicks in. Wishful thinking... or "mental masturbation" as I like to call it 😛
I think this is very much cost dependent too. Rather than sticking HBM, chiplets, and I/O dies together, it might be cheaper and faster to just add 6-8 Ryzen cores, stick a 4GB 5500 XT or a 5600 in the motherboard, and call it a day. On the other hand, I have no info or any kind of source; I am just speculating about why they haven't done it yet 😛
Venix:

I think this is very much cost dependent too. Rather than sticking HBM, chiplets, and I/O dies together, it might be cheaper and faster to just add 6-8 Ryzen cores, stick a 4GB 5500 XT or a 5600 in the motherboard, and call it a day. On the other hand, I have no info or any kind of source; I am just speculating about why they haven't done it yet 😛
They seem to be doing this for the next-gen consoles... 8c/16t Zen 2 (basically a downclocked 3700X) at between 3.5-3.8GHz, and an RDNA2 GPU capable of 10-12 TFLOPS that can run between 1.8-2.2GHz. Coupled with a memory controller that can handle GDDR6 14Gbps speeds, all within a 300W power limit, and all cooled with a conventional vapour chamber heatsink inside a small box. What I mentioned, and even what you mentioned, doesn't seem too far fetched now... 😛
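Those TFLOPS figures follow from simple shader arithmetic. A rough sketch, assuming RDNA's 64 shaders per CU and 2 FLOPs per shader per clock (the CU counts below are illustrative, not confirmed console specs):

```python
# Back-of-envelope FP32 throughput for an RDNA-style GPU:
# TFLOPS = CUs * shaders_per_CU * 2 FLOPs/clock * clock in GHz / 1000.
# CU counts here are hypothetical examples for the 10-12 TFLOPS range.

def tflops(cus: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    """Peak FP32 TFLOPS for a given CU count and clock."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

print(round(tflops(36, 2.2), 1))  # 36 CUs at 2.2 GHz -> ~10.1 TFLOPS
print(round(tflops(40, 2.0), 1))  # 40 CUs at 2.0 GHz -> ~10.2 TFLOPS
```

So a fairly modest CU count clocked near 2 GHz lands in the quoted 10-12 TFLOPS window.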
@CPC_RedDawn Consoles, except for the memory, are monolithic (cores, I/O, GPU)... 300 watts in a 15-inch laptop is not happening though; now, a 6-core with 6-7 TFLOPS at 125-150 watts does not sound far fetched!
Venix:

@CPC_RedDawn Consoles, except for the memory, are monolithic (cores, I/O, GPU)... 300 watts in a 15-inch laptop is not happening though; now, a 6-core with 6-7 TFLOPS at 125-150 watts does not sound far fetched!
Take my money!!!! 😀
@CPC_RedDawn hahah fine, send your money to [EMAIL]sendMeYourMoney@paypal.mars[/EMAIL] 😛 (obviously a joke PayPal)
Venix:

@CPC_RedDawn Consoles, except for the memory, are monolithic (cores, I/O, GPU)... 300 watts in a 15-inch laptop is not happening though; now, a 6-core with 6-7 TFLOPS at 125-150 watts does not sound far fetched!
Cooling 300W in a 15'' laptop is not impossible with the advancements in heatpipes and blower cooler designs made in the last decade, especially if one can use the space where an MXM dGPU would otherwise go. My main concern would be keeping the VRMs operating safely over a long period as dust gets in over the years. But I think it is not worth doing anyway: the same performance target may be reached a year later within 200W, and if technology moves forward, two years later within 130W. If I were to design a laptop, it would have a really good CPU with a rather weak iGPU, like the 4700G, and it would have an industry-standard connection with sufficient bandwidth to an external GPU box. All those laptop manufacturers could have been doing this for over a decade, because there was always demand for it. But that would slow down sales, so they never did it in a meaningful way. When you build a desktop, it could be smaller as it could use exactly the same external GPU box.
Fox2232:

If I were to design a laptop, it would have a really good CPU with a rather weak iGPU, like the 4700G. And it would have an industry-standard connection with sufficient bandwidth to an external GPU box.
I just watched Hardware Canucks' video on the new ASUS TUF A15 laptop with an R9 4900H and a 2060. I was thinking: if AMD got into the external GPU box market, released their own boxes, and bundled them with a discrete GPU like the 5500, 5600, or 5700/XT, it would make perfect sense for them. Get rid of that 2060 and just use the iGPU on the 4900H (Vega 8). This would drop the price of the laptop by quite a lot, making the initial purchase more enticing for people. Drop the 144Hz screen too, for maybe a 75Hz or 120Hz one, making it easier to drive for the iGPU and the lower-end discrete GPUs like the 5500 and 5600. Though I am not sure if this can be done over USB-C. The laptop has USB-C with DP-over-USB-C, but I am not sure whether that offers sufficient bandwidth, and whether it goes both ways rather than just being for external displays.
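On the bandwidth question: DP Alt Mode over USB-C only carries a display signal out, so the data path an eGPU box needs is PCIe tunnelling (Thunderbolt 3 / USB4), which typically exposes a PCIe 3.0 x4 link. Rough arithmetic on what that works out to (illustrative, not a claim about this specific laptop):

```python
# Usable PCIe 3.0 bandwidth: 8 GT/s per lane with 128b/130b encoding,
# divided by 8 to convert gigatransfers (1 bit each) to gigabytes.

def pcie3_gbs(lanes: int) -> float:
    """Approximate usable PCIe 3.0 bandwidth in GB/s, per direction."""
    return lanes * 8 * (128 / 130) / 8

print(round(pcie3_gbs(4), 2))   # x4 link, what Thunderbolt 3 commonly exposes: ~3.94 GB/s
print(round(pcie3_gbs(16), 2))  # full desktop x16 slot for comparison: ~15.75 GB/s
```

A quarter of a desktop slot's bandwidth is enough for the lower-end cards mentioned above to be usable, which is why eGPU boxes exist at all.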
@Fox2232 No, it is not impossible; if you check the high-performance gaming laptops, they are big boys and very loud, battery life is also not long in gaming mode :p, and they cost a lot. It depends on the individual how portable they want their laptop. Myself, I was never really sold on the external GPU concept, since it is hard to carry an external case + GPU on trips, so its role kind of stays at your home for when you return... So in that case I find using a good, very portable laptop for trips and having a desktop at home a solution that makes more sense.