New CPU-Z Upgrade Lowers Ryzen Performance

So what's that, an average of 30% or so less performance, core vs core? Seems a bit heavy-handed.

I don't belong to the tin-foil hat brigade, but this sounds quite weird. Why would the Ryzen architecture process something efficiently if it were meaningless? I'm quite sure there are things the Intel Core architecture is also good at; in fact, many current game engines prove that, making the Core architecture still the best for gamers today. This is the same as saying that since a Porsche engine allows faster acceleration than a Ford engine, the acceleration factor must be disregarded if those two cars are ever compared. Sorry, CPU-Z developer, but I won't buy this explanation, which sounds totally random.

[quote]I don't belong to the tin-foil hat brigade, but this sounds quite weird. Why would the Ryzen architecture process something efficiently if it were meaningless? I'm quite sure there are things the Intel Core architecture is also good at; in fact, many current game engines prove that, making the Core architecture still the best for gamers today. This is the same as saying that since a Porsche engine allows faster acceleration than a Ford engine, the acceleration factor must be disregarded if those two cars are ever compared. Sorry, CPU-Z developer, but I won't buy this explanation, which sounds totally random.[/quote]
There's no point having a benchmark that shows off what one processor architecture can do in a one-in-a-trillion situation against others that can't.

I just checked a recent validation from an 1800X @ 3.8 GHz; I guess the 6700K is running at stock. It still looks fine to me, nothing to worry about: http://i.imgur.com/Vv5XSwr.png

The benchmark seems very different from the previous version; the scores are way lower than the ones I had in the past. Right now my CPU is running at a 3.5 GHz base clock with a 4.6 GHz turbo, and I got 207 ST and 1348 MT. When Ryzen was released my CPU was running at 3.6/4.0 GHz and I had this result: 1050 ST and 6680 MT (in multithreading it uses only the base clock when turbo is enabled). This is BS. Now I want to see how things changed for the Intel CPUs.

Overall, AMD CPUs' scores are lowered, not only Ryzen's. Someone already posted this: [spoiler]http://kephost.com/images/2017/05/02/cpuz_intel_powered.png[/spoiler]
All scores dropped and have to be retested; my 2500K lost more than 1000 points in single thread. What's more important is knowing whether this is a fair and real way of measuring performance across different architectures.
[quote]Sorry, CPU-Z developer, but I won't buy this explanation, which sounds totally random.[/quote]
[quote]There's no point having a benchmark that shows off what one processor architecture can do in a one-in-a-trillion situation against others that can't.[/quote]
Having unbiased, real-world benchmark examples is important. No one uses their computer to calculate pi or something like that. PS: Looking at those numbers again, something is wrong: AMD dropped more than Intel... wth?
[quote]I just checked a recent validation from an 1800X @ 3.8 GHz; I guess the 6700K is running at stock. It still looks fine to me, nothing to worry about: [spoiler]http://i.imgur.com/Vv5XSwr.png[/spoiler][/quote]
Those are the old values. And for God's sake, use spoilers for big images.
[quote]The benchmark seems very different from the previous version; the scores are way lower than the ones I had in the past. Right now my CPU is running at a 3.5 GHz base clock with a 4.6 GHz turbo, and I got 207 ST and 1348 MT. When Ryzen was released my CPU was running at 3.6/4.0 GHz and I had this result: 1050 ST and 6680 MT (in multithreading it uses only the base clock when turbo is enabled). This is BS. Now I want to see how things changed for the Intel CPUs.[/quote]
They argued the lower numbers are for comparison's sake: as more CPUs with many cores come out, the high numbers would become difficult to read. It's fine by me, but everything needs to be benchmarked again...

"The new benchmark uses a new algorithm, and its scores cannot be compared with the previous version." I don't see anything weird here.

[quote]I just checked a recent validation from an 1800X @ 3.8 GHz; I guess the 6700K is running at stock. It still looks fine to me, nothing to worry about: http://i.imgur.com/Vv5XSwr.png[/quote]
Yup, looks fine to me. If that 1800X were at 4 GHz, would it still be at the top?

[quote]Having unbiased, real-world benchmark examples is important. No one uses their computer to calculate pi or something like that. PS: Looking at those numbers again, something is wrong: AMD dropped more than Intel... wth?[/quote]
What don't you get? AMD results were inflated; of course they dropped more.

In short: they created the benchmark back when they didn't think any processor could perform better than Intel, so when Ryzen showed up they had to change the benchmark so it would favor Intel... BTW, any program compiled with MS Visual C++ is optimized for Intel...
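
For what it's worth, "optimized for Intel" is a simplification: MSVC's x64 compiler exposes a documented /favor switch for per-vendor tuning, and which setting any given build uses is unknown. A minimal sketch of the idea, with a made-up file name and a toy integer workload (none of this is CPU-Z's actual code):
[code]
/* favor_demo.c -- toy workload showing that the *same* C source can be
 * tuned for different vendors purely at compile time.  /favor is a
 * documented MSVC x64 option; whether CPU-Z's build uses it is unknown.
 *
 *   cl /O2 /favor:blend   favor_demo.c   (default: no vendor preference)
 *   cl /O2 /favor:AMD64   favor_demo.c   (tune for AMD x64 processors)
 *   cl /O2 /favor:INTEL64 favor_demo.c   (tune for Intel x64 processors)
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t h = 1234567ULL;
    for (uint32_t i = 0; i < 100000000u; i++) {
        /* 64-bit LCG step: integer multiply + add, whose code generation
         * and scheduling can differ per /favor setting */
        h = h * 6364136223846793005ULL + 1442695040888963407ULL;
    }
    printf("%llu\n", (unsigned long long)h);   /* keep the result live */
    return 0;
}
[/code]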

Hmm, maybe most CPU benchmarks need to be updated then; maybe they'll mimic what we see in games on Ryzen.

[quote]There's no point having a benchmark that shows off what one processor architecture can do in a one-in-a-trillion situation against others that can't.[/quote]
I hope the CPU-Z developer has bought a few lottery tickets if he has unique one-in-a-trillion luck. A more plausible explanation is that there's real value in processing it faster, and thus the architecture does it. The CPU-Z developer simply didn't like it for his own reasons, so he used his natural privilege as the tool's developer to change it to something else, to serve his own ends. It's his right, so it's okay, but he would be more respectable if he didn't invent fancy excuses for doing it.

This is why I don't trust benchmarks and rely on personal real-world experience. Kind of curious that this comes after AMD's stock drop. Is piling on AMD en vogue now? Anyway, I'm eagerly awaiting my 1700 and Biostar ITX from Newegg. I have a feeling it's going to leave my 5820K X99 ITX in the dust. We'll see. If it does, maybe I should email the results to the makers of CPU-Z. Right now, benchmarks have no credibility with me.

[quote]I hope the CPU-Z developer has bought a few lottery tickets if he has unique one-in-a-trillion luck. A more plausible explanation is that there's real value in processing it faster, and thus the architecture does it. The CPU-Z developer simply didn't like it for his own reasons, so he used his natural privilege as the tool's developer to change it to something else, to serve his own ends. It's his right, so it's okay, but he would be more respectable if he didn't invent fancy excuses for doing it.[/quote]
This type of behavior, IMO, renders CPU-Z completely lacking in credibility.

LOL. Remember when Nvidia called Oxide? It was like: "We're not good at async compute, so async compute is totally irrelevant; please disable it in your benchmark."

[quote]I hope the CPU-Z developer has bought a few lottery tickets if he has unique one-in-a-trillion luck. A more plausible explanation is that there's real value in processing it faster, and thus the architecture does it. The CPU-Z developer simply didn't like it for his own reasons, so he used his natural privilege as the tool's developer to change it to something else, to serve his own ends. It's his right, so it's okay, but he would be more respectable if he didn't invent fancy excuses for doing it.[/quote]
[quote]core for core and clock for clock - almost 30% higher than Intel Skylake[/quote]
I still disagree. Even Lisa Su said they were aware it's about ~7% behind Skylake on average in IPC. If that were the case, AMD would have been all over this special case of processing a specific algorithm, which is quite possibly useless in the real world, or at least not proven to be useful.

[quote]Why are the scores much lower than the previous version, and can they be compared? At the time the first benchmark was released in 2015, only a few parts included 8 cores (like the 5960X). In the meantime, Ryzen was introduced, and 6- and 8-core processors will therefore become more and more prevalent; more models with 10, 12 and 16 cores are soon to be released. More cores mean higher multi-threaded scores, and a lower scale makes the comparisons easier. The new benchmark uses a new algorithm, and its scores cannot be compared with the previous version.[/quote]
Does this mean that the benching works the same and it's just an issue of the scale ("points") of the output?
[quote]Why does Ryzen performance decrease in comparison to the Intel processors with the new benchmark? When the first version of the benchmark was released in 2015, it was tested on all existing architectures to check the relevancy of the scores. Almost two years later, Ryzen was introduced, and scored - core for core and clock for clock - almost 30% higher than Intel Skylake. After a deep investigation, we found out that the code of the benchmark fell into a special case on the Ryzen microarchitecture because of an unexpected sequence of integer instructions. These operations added a noticeable but similar delay on all microarchitectures that existed at the time the previous benchmark was developed. When Ryzen was released, we found out that its ALUs executed this unexpected sequence in a much more efficient way, leading to results that mismatched the average performance of that new architecture. We reviewed many software and synthetic benchmarks without being able to find a single case where such a performance boost occurs. We are now convinced that this special case is very unlikely to happen in real-world applications. Our new algorithm, described below, does not exhibit this behaviour.[/quote]
So to me this reads like: "in our special benchmark, Ryzen scored better because we used a bench that works exceptionally well on Ryzen but not on other architectures, so we tweaked it to be more like a real application, and less easy for a single architecture relative to all others." Whether this means he's anti-AMD biased or just doesn't want a benchmark that favours Ryzen is up for debate and could be seen both ways... A rough sketch of what such an instruction-sequence special case can look like is below.
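
The developer never published the offending sequence, so this is purely hypothetical: two loops with comparable operation counts can time very differently depending on how a core's ALUs overlap independent work, and an architecture that happens to handle the emitted pattern unusually well produces exactly the kind of outlier the FAQ describes.
[code]
/* chains.c -- hypothetical illustration, NOT CPU-Z's actual code.
 * Loop A is one serial dependency chain: each op waits on the previous
 * result, so its time is set by ALU latency and forwarding behaviour.
 * Loop B performs twice as many operations, but in two independent
 * chains that a wide core can execute in parallel -- on some designs
 * it finishes in roughly the same time as Loop A.
 * Build: cc -O2 chains.c -o chains
 */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define ITERS 200000000ULL

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    volatile uint64_t sink;          /* keeps results live under -O2 */
    uint64_t x = 1, y = 2;

    double t0 = now();
    for (uint64_t i = 0; i < ITERS; i++)
        x = (x << 1) ^ (x >> 3);     /* Loop A: one serial chain */
    double t1 = now();
    sink = x;

    double t2 = now();
    for (uint64_t i = 0; i < ITERS; i++) {
        x = (x << 1) ^ (x >> 3);     /* Loop B: two chains that do */
        y = (y << 1) ^ (y >> 3);     /* not depend on each other   */
    }
    double t3 = now();
    sink = x ^ y;

    printf("serial chain      : %.3f s\n", t1 - t0);
    printf("independent chains: %.3f s\n", t3 - t2);
    return 0;
}
[/code]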

[quote]LOL. Remember when Nvidia called Oxide? It was like: "We're not good at async compute, so async compute is totally irrelevant; please disable it in your benchmark."[/quote]
No, I don't. Because they said they (Nvidia) wouldn't get anything out of async, it would make them even slower, which is why they asked to disable it for Nvidia GPUs only. Talking about AMD-biased benchmarks again, are we? 😉 I don't really understand the fuss about this. Ryzen performs very well and offers great performance for the money. Who cares about a single benchmark like this that's hardly relevant for most people, not even when buying an Intel or AMD platform...

Well, this raises some other important questions:
1. Did the dev release the "magical", "lottery-winner" instruction combination that "every" processor (according to him) computes slower than Ryzen? Such a claim needs peer validation.
2. Does it really (as someone stated) affect other AMD chips? I guess this is just a rumor, but there's no evidence to prove it true or false yet.
3. Why is this new bench compiled with such an old MS compiler? Why not use, or wait for, something Ryzen-aware (GCC has been updated to a certain degree; see the sketch below) before making this huge change?
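
On question 3: a Ryzen-aware compile is already possible, since GCC gained a Zen target (znver1) around the GCC 6 series; whether it changes anything for a given workload is exactly what would need testing. A minimal sketch of tuning one and the same source per microarchitecture through flags alone (the file name and loop are made up for illustration):
[code]
/* tune_demo.c -- same C source, different scheduling/ISA per -march.
 * -march=znver1 needs a GCC new enough to know Zen (GCC 6.x or later).
 *
 *   gcc -O2 -march=x86-64  tune_demo.c -o bench_generic   # baseline ISA
 *   gcc -O2 -march=znver1  tune_demo.c -o bench_ryzen     # tuned for Zen
 *   gcc -O2 -march=skylake tune_demo.c -o bench_skylake   # tuned for Skylake
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t h = 0x9E3779B97F4A7C15ULL;        /* arbitrary seed */
    for (uint32_t i = 0; i < 100000000u; i++) {
        h ^= h << 13;                          /* xorshift64 step: an   */
        h ^= h >> 7;                           /* integer pattern the   */
        h ^= h << 17;                          /* backend schedules per */
    }                                          /* -march target         */
    printf("%llx\n", (unsigned long long)h);   /* keep the result live */
    return 0;
}
[/code]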