YouTube Livestream Benchmarks Show AMD Ryzen 2700X to be 10% Faster

Waiting on HH to give the real numbers.
For comparison, my 1700 clocked at 3.85 GHz on all cores gets 1747 CB. Cache and memory latency decreases on Ryzen as clock frequency (CPU, IF) increases, so that is to be expected and looks like a consequence of the increased frequency with rev 2 rather than any improvement in the design, but we will see.
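A quick illustration of that point: a cache access that costs a fixed number of core cycles gets cheaper in wall-clock terms as the clock rises. The cycle count below is a placeholder chosen for the example, not a measured Ryzen figure.

```python
# Convert a latency expressed in clock cycles to nanoseconds; a fixed cycle
# count shrinks in wall-clock terms as the core/fabric clock rises.
def latency_ns(cycles: int, clock_ghz: float) -> float:
    return cycles / clock_ghz

CACHE_HIT_CYCLES = 40  # hypothetical cache hit latency in cycles, for illustration only

for clock_ghz in (3.6, 3.85, 4.2):
    print(f"{clock_ghz:.2f} GHz -> {latency_ns(CACHE_HIT_CYCLES, clock_ghz):.1f} ns")
```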
sideeffect:

For comparison, my 1700 clocked at 3.85 GHz on all cores gets 1747 CB. Cache and memory latency decreases on Ryzen as clock frequency (CPU, IF) increases, so that is to be expected and looks like a consequence of the increased frequency with rev 2 rather than any improvement in the design, but we will see.
Well, all AMD ever really claimed Zen+ would be is a refined Zen, so a speed bump and a bump in efficiency is all that should be expected. The leaked latency improvements seem to be a little more than what the 200 MHz clock increase alone would give, though.
Yes, I am not complaining at all; this revision looks like it will be great, and the CPU frequency jump is the most important thing. However, a lot of the comparisons to rev 1 seem to be from older benchmarks with loose secondary timings, older BIOS versions, unknown Infinity Fabric speeds, etc. The score of 1620 CB for the 1800X, for example, is not reflective of current performance: if I set my 1700 to 3.7 GHz on all cores, which is the same as the 1800X, I get 1673 CB, and I only have 3000 MHz DDR4.
Considering the bump in frequency, that's actually a little disappointing. But that's also just one benchmark. Ryzen tends to fare pretty well in Cinebench anyway; it's the games and other latency-sensitive tasks that were holding Ryzen back the most.
schmidtbag:

Considering the bump in frequency, that's actually a little disappointing. But that's also just one benchmark. Ryzen tends to fare pretty well in Cinebench anyway; it's the games and other latency-sensitive tasks that were holding Ryzen back the most.
Doesn't it just match the bump in frequency? 1620 -> 1891 = ~17% increase, 3.6 GHz to 4.2 GHz = ~17% increase.
Denial:

Doesn't it just match the bump in frequency? 1620 -> 1891 = ~17% increase, 3.6 GHz to 4.2 GHz = ~17% increase.
Yes, that's exactly my point, and why I find it disappointing. I was hoping there'd be more of an IPC improvement. But like I said, there could be one in other benchmarks.
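As a quick sanity check on that scaling argument, here is a minimal sketch using the leaked figures quoted above: if the Cinebench score rises by roughly the same factor as the clock, the implied per-clock (IPC) gain is essentially zero.

```python
# Leaked Cinebench scores and the corresponding all-core clocks discussed above.
score_1800x, score_2700x = 1620, 1891
clock_1800x, clock_2700x = 3.6, 4.2  # GHz

score_ratio = score_2700x / score_1800x
clock_ratio = clock_2700x / clock_1800x
implied_ipc_gain = (score_ratio / clock_ratio - 1) * 100

print(f"score: +{(score_ratio - 1) * 100:.1f}%, clock: +{(clock_ratio - 1) * 100:.1f}%")
print(f"implied per-clock gain: {implied_ipc_gain:+.1f}%")  # ~0%, i.e. no IPC change visible here
```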
schmidtbag:

Yes, that's exactly my point, and why I find it disappointing. I was hoping there'd be more of an IPC improvement. But like I said, there could be one in other benchmarks.
No, expecting an IPC increase with this “revision” is a bit of a tall order, no? I mean, Ryzen is only a year old. The fact they got such an increase in frequency this soon is a great accomplishment.
Loophole35:

No, expecting an IPC increase with this “revision” is a bit of a tall order, no? I mean, Ryzen is only a year old. The fact they got such an increase in frequency this soon is a great accomplishment.
No, it isn't a tall order. It's pretty standard, actually: just about every processor (including from ARM, PPC, MIPS, or GPUs) gets refined between generations, especially after a first generation. Skylake was pretty much the trend-breaker (where it wasn't really any different from Haswell/Broadwell other than the DDR4 support), but to be fair, it was also based on a very polished architecture. Ryzen is a great architecture, but it's also the first of its generation, and it has weaknesses that could be addressed beyond its clock limitations. Even a 3% clock-for-clock improvement would be satisfactory. I agree that this clock boost is a great accomplishment and a very welcome one, but I'm not convinced that they had nothing to tweak. Like I said before, there's a good chance that they did make some small adjustments here and there that would show up in other benchmarks. Ryzen is known to perform very well in Cinebench, so whatever those adjustments are may simply be irrelevant to it.
1.296 V for 4.2 GHz on all 8 cores; now that is interesting to me.
Let's also keep in mind this is at best an X370 motherboard, not an X470, so XFR2 is not working. With XFR2 that should go a little higher, and who knows, along with the memory tweaks that could make for a little better gain. While they said the Ryzen 2 series would WORK with the current chipsets, they didn't say it would run optimally. Of course, I am just guessing it's not an X470; no proof to back that up, except that it should be hitting 4.35 instead of 4.2 from what I have read.
Loophole35:

The fact they got such an increase in frequency this soon is a great accomplishment.
I don't agree with this statement at all. AMD simply built the desktop processor on a less power-limited process. It's the process that should have been used in the first place for this desktop part, which it wasn't because the data center was the target and efficiency was the goal, not top performance. A more accurate statement would be that getting Zen MHz out of a mobile-specific process was an amazing accomplishment. Getting Zen+ MHz out of a power-plus process is to be expected, if not more. Zen+ is the way it was meant to be made, nothing more, nothing less.
TitanArchon:

XFR2 is not working.
I've had my 1600 at an all-core clock of 3.9 GHz since day one. I've never used XFR, regardless of whether my motherboard is compatible or not. What does XFR 2.0 have to offer that anyone who is OCing would care about? Is anyone else tired of calling setting up a CPU to run exactly how it's capable of running "OCing"? Really, this should just be the norm and not considered OCing in the least. Last-gen Ryzen was quite ridiculous in this regard: a rebranded 1800X sold as a 1700 and people talking about how they OC their CPU... seriously... I still can't believe people bought the 1800X when they knew it has no headroom for real OCing. In regards to AMD, CPUs shouldn't have acronyms to pretend a CPU is excellent at doing exactly what its hamstrung default factory settings allow. I vote to rename OCing to degimping, at least when talking about AMD CPUs where no effort or special hardware is required to get an acceptable max frequency.
Pinscher:

I've had my 1600 at an all-core clock of 3.9 GHz since day one. I've never used XFR, regardless of whether my motherboard is compatible or not. What does XFR 2.0 have to offer that anyone who is OCing would care about? Is anyone else tired of calling setting up a CPU to run exactly how it's capable of running "OCing"? Really, this should just be the norm and not considered OCing in the least. In regards to AMD, CPUs shouldn't have acronyms to pretend a CPU is excellent at doing exactly what its hamstrung default factory settings allow. I vote to rename OCing to degimping.
You do know that XFR doesn't [always] increase all cores to the boosted speed, right? It only boosts some cores, and under ideal conditions; typically for single-threaded tasks. OCing allows all cores to reach the top speed. In many cases, XFR allows speeds that an OC can't reach. Intel's Turbo Boost 3.0 on the other hand... that's pretty stupid, because all cores are capable of simultaneously turboing to some degree, which totally defeats the purpose of having a base clock. The first and I think 2nd generation Turbo Boost actually made sense and were great ideas, where the CPU would boost a core here and there to maximize the performance of single-threaded tasks, without exceeding the TDP limits. This was especially useful in laptops.
Need to keep in mind that Ryzen fares better with faster RAM. The latency at 3200 C14 being at 66 ns is no improvement at all. I was able to get 3200 on my 1700X down to 65.2 ns, and that was with all cores at 3.95 GHz. The timings need to be identical for any comparison; C14 alone is not enough. That was also achieved early on, last August or so. I have noticed that my rig does not POST and/or boot with the same RAM latency every time; that is more BIOS related, so hopefully they have better BIOS versions available. Hopefully The Stilt will make Ryzen Timing Checker 1.3 public; over at OCN he shows he was finally able to get all the RAM bus settings shown. 10% is not enough to get me to upgrade unless 3600 can be achieved fairly easily.
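For anyone comparing kits by their headline timings, here is a minimal sketch of the usual first-word latency arithmetic; note that benchmarked memory latencies like the ~66 ns above also include memory-controller and Infinity Fabric overhead on top of this.

```python
# First-word CAS latency in nanoseconds: CL cycles divided by the memory clock
# (half the DDR data rate). This covers only the DRAM portion; measured figures
# such as the ~66 ns quoted above add controller and fabric overhead on top.
def cas_latency_ns(data_rate_mts: int, cl: int) -> float:
    memory_clock_mhz = data_rate_mts / 2
    return cl / memory_clock_mhz * 1000

for data_rate, cl in ((3000, 14), (3200, 14), (3600, 16)):
    print(f"DDR4-{data_rate} CL{cl}: {cas_latency_ns(data_rate, cl):.2f} ns")
```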
Pinscher:

I've had my 1600 at an all-core clock of 3.9 GHz since day one. I've never used XFR, regardless of whether my motherboard is compatible or not. What does XFR 2.0 have to offer that anyone who is OCing would care about? Is anyone else tired of calling setting up a CPU to run exactly how it's capable of running "OCing"? Really, this should just be the norm and not considered OCing in the least. Last-gen Ryzen was quite ridiculous in this regard: a rebranded 1800X sold as a 1700 and people talking about how they OC their CPU... seriously... I still can't believe people bought the 1800X when they knew it has no headroom for real OCing. In regards to AMD, CPUs shouldn't have acronyms to pretend a CPU is excellent at doing exactly what its hamstrung default factory settings allow. I vote to rename OCing to degimping, at least when talking about AMD CPUs where no effort or special hardware is required to get an acceptable max frequency.
Guessing this was aimed at me for the XFR. Here is the deal: now that the other member spelled out XFR, in the normal world people aren't going to OC their CPUs; even if they can, they are not. Enter what XFR and turbo do. 1. Power saving. Unlike us enthusiasts, a lot of people care about a CPU being able to clock up and down to conserve power. AMD can market the highest possible frequencies and the lowest possible power specs to make the CPU more appealing to all customers. Intel does this as well: look at the i9 they just announced. 4.8 sounds awesome, right?! Look at the fine print... only 4.8 on a single core. 2. XFR2, as opposed to XFR, certainly has its advantages, mainly by making sure the workload stays on the higher-frequency core. This was a problem with XFR, but the new algorithm and the more mature AMD developers have been able to make this work better and even scale with the load: if it uses 2 cores, those two clock really high; 3 cores, a little lower frequency but all 3 boost; 4 cores... you get the point. Taking the mindset that everyone is like you is just not logical. Some people just don't care to OC; I know a few. AMD is doing fine and will continue to do so with how fast they are pumping out new and performant products. People just have to have time to adopt and get used to the new stuff. It has always been this way and always will be.
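To make that load-dependent boost behaviour concrete, here is a purely illustrative sketch; the frequency table is invented for the example and is not AMD's actual XFR2 / Precision Boost data.

```python
# Hypothetical mapping from number of loaded cores to an opportunistic boost
# clock, stepping down as more cores become active. Values are made up for
# illustration and do not reflect real XFR2 behaviour.
HYPOTHETICAL_BOOST_MHZ = {1: 4350, 2: 4300, 3: 4200, 4: 4150, 6: 4050, 8: 4000}

def boost_clock_mhz(active_cores: int) -> int:
    # Pick the highest clock whose table entry covers at least this many cores.
    eligible = [mhz for cores, mhz in HYPOTHETICAL_BOOST_MHZ.items() if cores >= active_cores]
    return max(eligible) if eligible else min(HYPOTHETICAL_BOOST_MHZ.values())

for n in (1, 2, 4, 8):
    print(f"{n} core(s) loaded -> ~{boost_clock_mhz(n)} MHz")
```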
From what I gather, Pinnacle Ridge shares the improved L2 cache latency that Raven Ridge has, along with improved L3 cache latency, so it will get slightly better IPC, but not much; other than that, it's pretty much the same chip at higher speeds. That should be perfect, since Intel won't be launching anything that can clock higher than Coffee Lake for quite some time. The gap is closing, down to a ~600-700 MHz advantage rather than 900-1000 MHz.
schmidtbag:

Intel's Turbo Boost 3.0 on the other hand... that's pretty stupid, because all cores are capable of simultaneously turboing to some degree, which totally defeats the purpose of having a base clock. The first and I think 2nd generation Turbo Boost actually made sense and were great ideas, where the CPU would boost a core here and there to maximize the performance of single-threaded tasks, without exceeding the TDP limits. This was especially useful in laptops.
That sounds like 2.0 is really quite a useful technology that would warrant mention, rather than just "we turbo up your CPU to run faster while it's possible."
TitanArchon:

1. Power saving. Unlike us enthusiasts, a lot of people care about a CPU being able to clock up and down to conserve power. AMD can market the highest possible frequencies and the lowest possible power specs to make the CPU more appealing to all customers. Intel does this as well: look at the i9 they just announced. 4.8 sounds awesome, right?! Look at the fine print... only 4.8 on a single core. 2. XFR2, as opposed to XFR, certainly has its advantages, mainly by making sure the workload stays on the higher-frequency core. This was a problem with XFR, but the new algorithm and the more mature AMD developers have been able to make this work better and even scale with the load: if it uses 2 cores, those two clock really high; 3 cores, a little lower frequency but all 3 boost; 4 cores... you get the point. Taking the mindset that everyone is like you is just not logical. Some people just don't care to OC; I know a few. AMD is doing fine and will continue to do so with how fast they are pumping out new and performant products. People just have to have time to adopt and get used to the new stuff. It has always been this way and always will be.
You are making some great points. Really, my post could go on and on to respond and have a conversation, but normally that kills a thread, so I'll follow your format. Power savings: the only people who are concerned about a CPU's power consumption are those who are going to OC, and they are not concerned about the consumed power as much as the power envelope they have to work with to achieve their OC. No one else cares. "I hope my new Ryzen saves me 25 watts of power," said no one ever. Anyone only ever wants their work to be done faster, and if they need a lower-power device, they pick one with the battery life that requirement demands. XFR2: if a core can be clocked high, wouldn't simply setting your CPU to run at that clock be more useful? Why put an algorithm between you and the performance you want, regardless of what you need at any given point? Thanks for the lesson regarding XFR going beyond the all-core speed of a "set up" CPU; I figured once you set your all-core frequency, XFR was redundant since you had capped the CPU out already. I guess in the end the tech still has some purpose. OCing: whether people are like me or not isn't the point. Any person can simply look at an OC vs. stock benchmark and realize that CPUs are purposely gimped by manufacturers. The fact that we refer to "setting up a CPU" as "OCing" is the reason more people don't go through the setup steps themselves. They think, "oh, I don't know anything about OCing," even if they know about the BIOS and tweaking settings. The industry, or more so the community, has taken a very normal practice of setting up a CPU and put an elitist term on it, scaring laymen away from taking responsibility for their computers, and that's a sad situation.
Pinscher:

That sounds like 2.0 is really quite a useful technology that would warrant mention, rather than just "we turbo up your CPU to run faster while it's possible."
You mean 3.0? But no, it's not a useful technology, it's a stupid gimmick that just permits Intel to ship cheaper heatsinks. You're not getting free/bonus performance; you're maybe getting performance you should have already had out-of-the-box, performance you already paid for. Their CPUs are perfectly capable of reliably sustaining their all-core boost clocks, but the heat generated when doing so would require them to improve the box cooler. They guarantee the performance of the base clocks, not the turbo clocks, so as long as the stock heatsink can sustain base clocks, whatever extra performance you lose is not their problem. When all cores are capable of clocking higher but don't due to thermal issues, it's really no different from thermal throttling. If they wanted what was best for the customer, they wouldn't have "all-core turbo speeds" (and instead would use that speed as the base clock) and they'd ship a heatsink that wasn't made out of a pack of beer cans. This becomes unethical, in the sense that reviewers will keep the CPU in ideal conditions, whether that be cool and clean air or an aftermarket heatsink. So even when they keep the CPU at "factory settings," it's going to run better than it will for the average Joe, especially after Joe has been using the computer for over a year and the stock heatsink is caked with dust. Like I said, I'm greatly in favor of Turbo Boost, just not when all cores are boosted.