Updated: HWiNFO Beta Introduces Power Reporting Deviation Sensor (Cheats)

Ryzen 2600. Not too shabby for a -0.100 V undervolt, but I guess I have to restore to defaults and run the test again. EDIT: Ambient temp is 30°C, btw. Yes, I know it's hot here.
my reply from the other thread: https://forums.guru3d.com/threads/explaining-the-amd-ryzen-power-reporting-deviation-metric-in-hwinfo.432667/#post-5797376
fantaskarsef:

Thank you, very interesting read! Too bad you can't see what the motherboard manufacturers do until after you buy the board, but at least with mild "cheating" on those values you could get some headroom on the auto-OC features of the power management. Honestly, if I knew which board had which value behind it, and since I'm aiming for custom H2O watercooling anyway, I'd probably opt for something like 75% or 80% if this proves to work as described, since manual OC (iirc!) doesn't really pay off with Ryzen CPUs.
Here it is with default voltage settings. It appears there isn't any "cheating" going on with the MSI B450 Mortar MAX, at least when using a 2600.
It may be a questionable technique, but on the other hand, how much is the lifespan of the CPU actually reduced? If a CPU has a lifespan of, let's say, ten years in always-on mode, how much does this shave off? A month, a year, two, five? I don't think many people keep a CPU in their PC that long (10+ years, and not always on), so the reduction in lifespan might be irrelevant...
barbacot:

It may be a questionable technique but on the other hand how much is the lifespan of the CPU reduced? [...]
True. Simple question: how many CPUs actually fail and burn out due to OC? I haven't had one in 20 years, tbh. Haven't heard of it happening either.
Fox2232:

Over the years, there were people here reporting that their OC was no longer stable after many years and they had to reduce it. I think there were even cases where people had to downclock in the end. Zen is too young to actually have those cases around; there, people either burn the CPU in seconds, or it lives. I am definitely guilty of running my CPU way out of spec, with one CCD @ 4.35 GHz and the other at 4.45 GHz while running 1.36 V. So I may be one of the first people to find out the hard way. But in situations where I want to run a workload on all cores, not just a few, I run 4 GHz at 1.125 V or 1.175 V (not sure which, as I made that profile and tested it a lot for stability), and when I go transcoding/folding/... I just enable it and everything is fine and cool.
Well, you are right: a downclock after years of usage, let's say 5+, I have heard of too. But I have to say, even among overclockers it's not that uncommon to simply reduce the OC and go back towards stock; it doesn't really kill a CPU, or reduce its lifetime by 90%. I've heard of and seen bad, aging PSUs kill just as many overclocks, and in fact kill hardware for real.
Degradation can surface in as little as two or three years in extreme cases. Its severity depends on the chip's leakage and on the power delivery itself; at higher voltage/current draw the degradation scales up, as you would expect. It only becomes noticeable in more serious cases, when processors are overvolted by >15-20%, and folks don't tend to run their OC on the razor's edge of stability, but rather with a small voltage buffer, so the effect is further obscured. The question isn't whether or not we should worry about chip failure (we shouldn't; just nudge your CPU voltage after a few years if/when needed to maintain the desired frequency). The question is: are you okay with mobo manufacturers purposely configuring the power delivery to work differently than it is stated to work in the documentation and the BIOS? Do you want your mobo to do exactly what you tell it to do, or are you okay with the mobo deciding to do something else and drawing more power to the processor, outside of spec, in a way that you don't directly control... all without telling you?
Hmm. I have got to check this out. In the past, when I have delidded and liquid-metal pasted the CPU, I have seen signs of scorching on the chip surface. Tough little things. 🙂
__hollywood|meo:

the question is, are you okay with mobo manufacturers purposely configuring the power delivery to work differently than it is stated to work in the documentation and the BIOS? [...]
That's exactly the point: would we want something happening without us knowing? Then again, I wouldn't know exactly what the internal power-management engines in CPUs or GPUs do... one of the issues I have with GPU overclocking, which is no longer done by manual control but by "black boxes" called boost mechanisms.
My B450M TUF Gaming-Pro and 3800X combo show ~97% power reporting deviation while folding, so I think this board is OK.
I'm running a 3700x, bone stock, on an ASRock X370 Gaming K4 with the latest BIOS. Running P95 or CB-R20, I'm getting a deviation reading of 58 to 60%. HWInfo shows the processor at about 10 watts at idle, and 65 watts under load. That's a 55 watt increase from idle to full load, and yet, the system power draw, according to my UPS, is going up by about 100 watts from idle to full load. Just one more thing to worry about...
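The numbers in that post can be sanity-checked with a bit of arithmetic (a minimal sketch; the helper name is mine, and it assumes the deviation metric is reported power divided by actual power, so actual ≈ reported / deviation):

```python
# Back-of-the-envelope check of the post above. Assumes the deviation
# metric is (reported power / actual power) * 100, so the real draw
# is reported / (deviation / 100). Function name is illustrative.

def estimate_actual_power(reported_w: float, deviation_pct: float) -> float:
    """Estimate real package power from the reported value and the
    Power Reporting Deviation percentage (meaningful under full load)."""
    return reported_w * 100.0 / deviation_pct

# 65 W reported under load at ~59% deviation:
actual_load = estimate_actual_power(65.0, 59.0)
print(round(actual_load))  # ~110 W
```

A ~110 W real package draw lines up far better with the ~100 W jump the UPS sees at the wall than the reported 55 W delta does, which is consistent with the telemetry being skewed.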
I'll need to look into this with stock settings, but with my 3900X on an Asus X570 Strix-E I'm seeing 80-97% deviation, averaging around 92%, and out of curiosity I ran a Cinebench, which had it at 87% deviation. Now, I'm not sure if this is because I'm running actuallyhardcoreoverclocker's recommended PBO settings (also 3600 MHz memory / 1800 MHz Infinity Fabric) or if things like power plans are actually capable of affecting it. (Running the latest BIOS available, 1409, which is still AGESA 1.0.0.4 and not 1.0.0.5; maybe that will help.)
Asus Crosshair VI Extreme X370, BIOS 7704 (latest), AMD R7 2700X, everything set to Auto in BIOS apart from RAM at 3200 and RAM voltage at 1.35 V. Under Cinebench full load I get min 108.2% - max 110.9% power reporting deviation.
Fox2232:

Over years, there were people here reporting that their OC was no longer stable after many years and they had to reduce it. [...]
I was one of the people who had that. Now, my i7 920 was never great: it took 1.55 V vcore to get into Windows at 4 GHz, and at that point I still could not finish a 3DMark run. But I eventually settled on a nice 3.66 GHz, which I could maintain at 1.24 V for about 3 years without issue (it was a 2.66 GHz CPU, so still a kinda big OC, even though most of them could do 4.0 without much hassle... in fact, everybody I knew who had one could do it, but not mine). Eventually it started to bluescreen and I had to start pumping up the voltage; at around 1.36 I decided to start dropping the clocks, but at that point it was already too late, as I needed to run OC voltage at stock speeds to maintain stability. All in all it lasted me about 5 years before getting swapped out for a much more affordable i5 4670K. With Ryzen, though, I've done basically no overclocking, as it doesn't benefit much from it in day-to-day use/gaming; I just got a higher-end board, liquid cooled it, and now I'm hanging on for dear life while the voltages do scary things, but AMD and ASUS assure me it's fine.
Asus Crosshair VI Hero + 3900X: minimum 69.9%, max 141.8% -> ~70% during full utilization. What exactly does this tell me, then? ;)
So, should we thank Gigabyte for this? I have one of their boards for my 3200G, and if left on Auto, the 3200G is getting only 1.56-1.58 V. I am even more interested now in what would be reported by a Tomahawk, but I still don't think it is worth the time for me.
Fox2232:

Over years, there were people here reporting that their OC was no longer stable after many years and they had to reduce it. [...]
I ran an AMD 1700 CPU at 3.80 GHz on 1.325 volts for just a couple of years before it decided it would no longer run at that speed, regardless of the voltage. And that was with a Noctua cooler that kept temperatures below 70°C under even the worst benchmarking conditions. My ASRock X370 K4 with a 3700x is showing a deviation of 60%! Temperatures in the eighties are common under load. So, I just put the CPU in ECO mode, in the BIOS. Performance dropped about 5% in the worst cases, but temperatures dropped by over 10°C, and power draw dropped by almost 20%. Until this issue is clarified and resolved via a BIOS update, that setting stays.
Edit: Viewing more info; scratch my old post... for now. But here are the results I had posted, from my Asus TUF X570-Plus Wi-Fi + 3900X (latest BIOS / AGESA 1.0.0.4): Cinebench R15: 83-80%; CPU-Z bench: 75-73% (holds mostly at 73%). But that's with a ton of manual adjustments, because the stock Asus settings were into dangerous voltage ranges. I significantly lowered how much power my CPU draws vs. all-auto stock, and I don't know exactly how that affected the deviation results (they probably would have been worse otherwise).
Hmm, this is indeed interesting. Also, from my vantage point, having watched videos about OCing Ryzen chips, it takes an arm and a leg to do it. So wouldn't this information make OC even more difficult, considering you need close-to-accurate voltage values?
Anyone concerned about this, there are a few things you can do.

First, you can change settings in HWInfo to make it give you more accurate results. I'll use my numbers as an example. Running Prime95 with small FFTs, I'm getting a deviation reading of 59%. So I right-clicked on "CPU Package Power", then "Customize Values", and set a multiplier of 1.7. I determined that value by dividing 100 by 59: 100/59 ≈ 1.7. This forces HWInfo to multiply the reported power usage value by 1.7 when displaying it. Then I did the same thing for the "Power Reporting Deviation" setting. Now both values report much closer to what they should be: what was being reported as 65 watts is now reported as 110 watts (65 is about 59% of 110), and the deviation number stays close to 100% under load.

The next thing you can do is apply the same logic to the PPT setting in your BIOS. In my case, power is being under-reported, at 59% of what it actually is. The AMD stock PPT for my 3700x is 88 watts, and 59% of 88 is about 52. So I set the PPT in my BIOS to 52 watts, leaving the TDC and EDC at Auto. When the motherboard thinks it's limiting the processor to 52 watts, it's actually limiting it to 88 watts, which is the standard PPT for my 3700x, according to AMD.

And yes, before you ask, performance does take a hit. Full-load benchmarks are down about five percent, but maximum temperatures are down more than 10°C. Power now maxes out at about 90 watts - close enough to 88 for me, and within my CPU's specifications. Before, I was seeing power hit 120 watts and more under heavy loads, with temperatures in the mid-eighties.

Just do the math based on your own deviation reading and the PPT for your processor. Set to stock, Ryzen Master will tell you what the PPT setting is. Hope this helps someone...
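The arithmetic in that workaround boils down to two one-liners (a minimal sketch; the function names are mine, not any HWInfo API, and the 59% / 88 W figures are just that poster's example values for a 3700X):

```python
# Sketch of the workaround described above: derive the HWInfo display
# multiplier and the compensated BIOS PPT from a measured deviation.
# Function names are illustrative, not part of any HWInfo API.

def display_multiplier(deviation_pct: float) -> float:
    """Multiplier for HWInfo's 'Customize Values' so the displayed
    CPU Package Power matches the estimated real draw."""
    return 100.0 / deviation_pct

def compensated_ppt(stock_ppt_w: float, deviation_pct: float) -> float:
    """PPT to enter in the BIOS so the board's *actual* limit lands
    on the stock PPT despite the under-reported telemetry."""
    return stock_ppt_w * deviation_pct / 100.0

print(round(display_multiplier(59.0), 2))  # 1.69 (the ~1.7 above)
print(round(compensated_ppt(88.0, 59.0)))  # 52 (watts to set in BIOS)
```

Note that both corrections depend on the deviation reading being taken under a steady full load, since the metric is only meaningful when the CPU is fully loaded.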