Ryzen Threadripper 2000 Is Sampling (According to AMD Slide)

https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Hmm. Seeing as there's no sign of the 2800(X) for AM4, I would've thought they'd use the 2800X for the Threadripper 8-core model and shift all the other numbers down. That way, the numbers would line up better: the 2920X could have 12 cores and the 2960X could have 16 cores. I assume they're still not going to make 10- and 14-core variants? Anyway, I suspect these Threadrippers will be the most interesting Ryzen parts. First-gen Ryzen's OC potential was disappointing, and 2nd gen's is underwhelming, but for Threadrippers you wouldn't really want to OC much beyond 4.2GHz anyway before running into cooling or power issues.
https://forums.guru3d.com/data/avatars/m/271/271560.jpg
I have played with the engineering sample (under heavy scrutiny) of the 2900X. Chills went down my spine... and when I heard about advanced cooling support for TR4, I got giddy. It was on an open-frame workbench with a 420mm radiator from a major manufacturer.
https://forums.guru3d.com/data/avatars/m/250/250418.jpg
AMD saved us all from the monopoly last year, but that isn't enough; they need to keep giving people reasons to upgrade. I just hope consumers buy their products so they have the money to keep R&D pumping out updates. I need a GPU more than a CPU, tbh, and DDR4 isn't going down in price soon, so... I'll keep my fingers crossed for better prices.
https://forums.guru3d.com/data/avatars/m/268/268848.jpg
I am sure the 2950X could boost up to 4.5GHz on turbo, and it should give at least a 10% performance uplift over the 1950X with better thermals/power consumption.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Silva:

... they need to keep giving people reasons to upgrade. ... DDR4 isn't going down in price soon, so... I'll keep my fingers crossed for better prices.
As you've effectively shown yourself, it doesn't matter whether they give people a reason to upgrade while RAM (and GPU) prices are still too high. SSD prices could be better too, but at least they're not unreasonable. Regardless, having a reason to upgrade is subjective. There are people still rocking Sandy Bridge systems with no real incentive to upgrade, aside from maybe better power efficiency. The fact of the matter is, software just isn't becoming more demanding on CPUs, and single-threaded tasks aren't going away any time soon (if ever).
data/avatar/default/avatar04.webp
schmidtbag:

Hmm. Seeing as there's no sign of the 2800(X) for AM4, I would've thought they'd use the 2800X for the Threadripper 8-core model and shift all the other numbers down. That way, the numbers would line up better: the 2920X could have 12 cores and the 2960X could have 16 cores. I assume they're still not going to make 10- and 14-core variants? Anyway, I suspect these Threadrippers will be the most interesting Ryzen parts. First-gen Ryzen's OC potential was disappointing, and 2nd gen's is underwhelming, but for Threadrippers you wouldn't really want to OC much beyond 4.2GHz anyway before running into cooling or power issues.
Calling the OC underwhelming or disappointing is a bit harsh: a Ryzen 1700 going from 3000 to 3900MHz is about the same percentage as an 8700K going from 3700 to 4900MHz; only the old FX or Sandy Bridge chips can overclock more than the Ryzen 1700. On air cooling, I feel like Ryzen is easier to handle than many of the 8700Ks and the big Intel chips.
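For what it's worth, the percentages roughly check out; a quick back-of-the-envelope calculation with the clocks quoted above:

```python
# Back-of-the-envelope check of the overclock headroom quoted above.
ryzen_1700 = (3900 - 3000) / 3000 * 100  # 3.0 GHz base -> 3.9 GHz OC
i7_8700k = (4900 - 3700) / 3700 * 100    # 3.7 GHz base -> 4.9 GHz OC
print(f"Ryzen 1700: +{ryzen_1700:.1f}%")  # +30.0%
print(f"i7-8700K:   +{i7_8700k:.1f}%")    # +32.4%
```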
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
TLD LARS:

Calling the OC underwhelming or disappointing is a bit harsh: a Ryzen 1700 going from 3000 to 3900MHz is about the same percentage as an 8700K going from 3700 to 4900MHz; only the old FX or Sandy Bridge chips can overclock more than the Ryzen 1700.
Not really a justifiable comparison, seeing as the base clock of the 8700K is as high as the 1700's highest boost clock. The 8700K's all-core turbo is higher than what almost any 1st-gen Ryzen could do on liquid cooling. That being said, the percentage of OC vs stock clocks isn't the only factor to consider, just one of many. Keep in mind that Coffee Lake and Ryzen have very similar IPC; this is what makes Ryzen's lower clocks a detriment to the series.

The other thing to keep in mind is that silicon quality has very little to do with how far you can go. The average 1700 tops out at 3.9GHz, but so does pretty much every other 1st-gen Ryzen. The 1800X sold so poorly because, despite being part of the "better quality bin", it didn't make a difference; you could just get a 1700 and make it reach the same speeds for a much lower price.

As for CPUs that can clock better, there's more than just Sandy Bridge and FX. Pretty much every x86 architecture since 2011 could clock higher than 4.1GHz (with an unlocked multiplier). If you consider chips outside of x86, various POWER6 through POWER9 chips could reach 4-5GHz.
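(To put the IPC point above in rough formula terms: as a first-order approximation,

$$\text{single-thread performance} \approx \text{IPC} \times f_{\text{clock}}$$

so with IPC roughly equal between Coffee Lake and 1st-gen Ryzen, the ~1GHz clock gap maps almost directly onto a performance gap.)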
On air cooling, I feel like Ryzen is easier to handle than many of the 8700Ks and the big Intel chips.
I would agree with this. Keep in mind, I have a Ryzen build myself, I've built another PC for someone using Ryzen, and I intend to do it again. Just because it doesn't have impressively high clocks doesn't make it a bad product.
https://forums.guru3d.com/data/avatars/m/270/270233.jpg
schmidtbag:

Hmm. Seeing as there's no sign of the 2800(X) for AM4, I would've thought they'd use the 2800X for the Threadripper 8-core model and shift all the other numbers down. That way, the numbers would line up better: the 2920X could have 12 cores and the 2960X could have 16 cores. I assume they're still not going to make 10- and 14-core variants? Anyway, I suspect these Threadrippers will be the most interesting Ryzen parts. First-gen Ryzen's OC potential was disappointing, and 2nd gen's is underwhelming, but for Threadrippers you wouldn't really want to OC much beyond 4.2GHz anyway before running into cooling or power issues.
Yes, clock speed isn't really important for these types of chips. It's not like anyone's going to shell out $1K for a TR because it can clock to 4.5 GHz on a single core. The multi-core performance is what really matters, and the power draw/heat will be the biggest concern... and that's where I'm a little ambivalent, since the 2700X increased the TDP to 105 watts. Will this be the case with the 2950X as well (from 180 watts to, say, 200)?
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
D3M1G0D:

and that's where I'm a little ambivalent, since the 2700X increased the TDP to 105 watts. Will this be the case with the 2950X as well (from 180 watts to, say, 200)?
I wouldn't worry about it. It seems Ryzen's wattage really starts to grow exponentially once you exceed 3.8GHz, and the 2700X seems to dabble beyond that. If they kept the clocks more similar to the 1800X, I'm sure it'd actually be lower than the original 95W TDP.
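To be fair, "exponentially" is loose, but the underlying relation is real: dynamic CPU power scales roughly as

$$P_{\text{dyn}} \approx C \cdot V^2 \cdot f$$

and since the voltage needed for stability itself climbs steeply once you push past the efficiency sweet spot, power effectively grows closer to $f^3$ in that region. That's why a few hundred MHz past ~3.8GHz costs Ryzen so much wattage.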
https://forums.guru3d.com/data/avatars/m/236/236670.jpg
schmidtbag:

Regardless, having a reason to upgrade is subjective. There are people still rocking Sandy Bridge systems with no real incentive to upgrade, aside from maybe better power efficiency. The fact of the matter is, software just isn't becoming more demanding on CPUs, and single-threaded tasks aren't going away any time soon (if ever).
Totally agree... from a "gamer's standpoint", a 1080 Ti and a 1440p monitor will go a lot further than an entire system upgrade http://www.guru3d.com/index.php?ct=articles&action=file&id=40509
https://forums.guru3d.com/data/avatars/m/248/248994.jpg
schmidtbag:

The fact of the matter is, software just isn't becoming more demanding on CPUs, and single-threaded tasks aren't going away any time soon (if ever).
Wut? Sure, your word processor might not be much more demanding than 10 years ago, but lots and lots of software is more demanding, games included. Then there are of course lots of programs that can use any resources you throw at them, like encoding some bloody 4K video or rendering stuff with high poly counts.
https://forums.guru3d.com/data/avatars/m/236/236670.jpg
Kaarme:

Wut? Sure, your word processor might not be much more demanding than 10 years ago, but lots and lots of software is more demanding, games included. Then there are of course lots of programs that can use any resources you throw at them, like encoding some bloody 4K video or rendering stuff with high poly counts.
He's talking about the "everyday average PC user". Of course a Threadripper 1950X can encode/render a 4K video faster than a 4790K, but if you're a "gamer" with a GTX 1080 @ 1440p, it does nothing for the gaming experience. In fact, it could slow it down.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Kaarme:

Wut? Sure, your word processor might not be much more demanding than 10 years ago, but lots and lots of software is more demanding, games included. Then there are of course lots of programs that can use any resources you throw at them, like encoding some bloody 4K video or rendering stuff with high poly counts.
Games, encoding, and rendering are [usually] all things that benefit more from GPUs than from CPUs. It's not that you can't use more powerful CPUs, but in most cases the effort is better spent elsewhere. That being said, despite PCs having had 8-threaded CPUs for roughly a decade and consoles having had 6-core CPUs going all the way back to the PS3 (I know, it had 8 cores, but 2 of them were not usable for games), we're still seeing new games that only use 2-4 threads.

And the fact of the matter is, it usually comes down to this: lots of software can't be efficiently parallelized. When it comes to games, the complexity of adding more threads outweighs the potential performance gains. I say potential because, in many cases, adding threads can actually hurt performance, since clock cycles are wasted synchronizing data.

As I've stated in other threads, CPUs are better at multi-tasking, not parallelization. When you see servers with dozens or even hundreds of threads, in a lot of cases they aren't running any individual application that takes advantage of them; instead, they run hundreds of independent single-threaded processes. You can also look at how modern web browsers work, where each tab only uses 1 thread (maybe 2 if you count rendering), but modern CPUs handle this gracefully, since each thread is independent of the others. So, I really don't think CPU-bound tasks are going to change a whole lot in the foreseeable future.
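As a toy illustration of that synchronization cost, here's a minimal Python sketch (the GIL exaggerates the effect in Python, but lock contention costs cycles in any language):

```python
import threading
import time

N = 1_000_000

def bump(counter, lock, n):
    # Every increment has to take the shared lock, so the threads
    # spend most of their cycles coordinating rather than counting.
    for _ in range(n):
        with lock:
            counter[0] += 1

# Serial baseline: same total work, no synchronization needed.
start = time.perf_counter()
total = 0
for _ in range(N):
    total += 1
print(f"serial:   {time.perf_counter() - start:.3f}s")

# "Parallel" version: the work split across 4 threads sharing one counter.
counter, lock = [0], threading.Lock()
threads = [threading.Thread(target=bump, args=(counter, lock, N // 4))
           for _ in range(4)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"threaded: {time.perf_counter() - start:.3f}s  (usually slower)")
```

Four threads doing a quarter of the work each should be faster on paper; in practice the shared state makes it slower than just doing the work on one core.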
https://forums.guru3d.com/data/avatars/m/270/270233.jpg
schmidtbag:

That being said, despite PCs having had 8-threaded CPUs for roughly a decade and consoles having had 6-core CPUs going all the way back to the PS3 (I know, it had 8 cores, but 2 of them were not usable for games), we're still seeing new games that only use 2-4 threads. And the fact of the matter is, it usually comes down to this: lots of software can't be efficiently parallelized. When it comes to games, the complexity of adding more threads outweighs the potential performance gains. I say potential because, in many cases, adding threads can actually hurt performance, since clock cycles are wasted synchronizing data.
I'm not sure about that. Although 8-threaded CPUs have been around a while, most gamers still have dual-core CPUs (if the Steam Survey is anything to go by, as unreliable as it is). Until recently, an 8-threaded CPU was considered high-end territory. Even Intel themselves seem to be looking towards a threaded future (with the death of Moore's Law and the difficulty in squeezing out more single-threaded performance). This is not to say that future games will necessarily be highly threaded, but I think there will be a concerted push in that direction (at least more than there used to be). Of course the GPU is still the focal point of games as far as FPS goes, but additional CPU threads can still be put to good use (e.g., seamlessly loading new levels or scenes in the background to eliminate loading screens; see the sketch below).

That's getting a bit beyond this thread, though. Even if more games become multi-threaded, I seriously doubt that gamers would benefit from a 16-core CPU like the 2950X anytime soon. I think it's well understood that such chips are for prosumer purposes (or for computing, like myself). Game devs certainly aren't going to optimize their games for the tiny fraction of a percent of consumers who buy 10+ core chips.
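A minimal sketch of that background-loading idea (a toy Python model; the asset names are made up), where a loader thread streams in the next level while the main loop keeps running:

```python
import threading
import queue
import time

def load_assets(names, out):
    # Stand-in for slow disk I/O / decompression that would otherwise
    # stall the main loop behind a loading screen.
    for name in names:
        time.sleep(0.5)  # pretend this is the slow part
        out.put(name)

ready = queue.Queue()
loader = threading.Thread(
    target=load_assets,
    args=(["level2_geometry", "level2_textures"], ready),  # hypothetical assets
    daemon=True,
)
loader.start()

# The main "game loop" keeps rendering while the next level streams in.
for frame in range(8):
    while not ready.empty():
        print(f"frame {frame}: {ready.get()} loaded in the background")
    time.sleep(0.25)  # one simulated frame
```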
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
D3M1G0D:

I'm not sure about that. Although 8-threaded CPUs have been around a while, most gamers still have dual-core CPUs (if the Steam Survey is anything to go by, as unreliable as it is). Until recently, an 8-threaded CPU was considered high-end territory. Even Intel themselves seem to be looking towards a threaded future (with the death of Moore's Law and the difficulty in squeezing out more single-threaded performance).
I would take the Steam survey with a grain of salt. It says dual cores (and single cores, for that matter) are on the rise, and by a considerable margin, which doesn't sound right. Regardless, yes, 8-threaded CPUs were pretty recently high-end hardware, but that doesn't explain why console ports for all this time have still been so thread limited. Just because an application utilizes multiple threads, that doesn't mean it requires that many to function. Take Ashes of the Singularity for example - that game is plenty playable on 4 cores, even though it can take advantage of a CPU 3x bigger. So, I really don't see the excuse as to why devs haven't been increasing thread counts in PC games, aside from what I already mentioned: not all tasks can so easily be split into multiple pieces, nor should they all be.
This is not to say that future games will necessarily be highly threaded, but I think there will be a concerted push in that direction (at least more than there used to be). Of course the GPU is still the focal point of games as far as FPS goes, but additional CPU threads can still be put to good use (e.g., seamlessly loading new levels or scenes in the background to eliminate loading screens).
I don't disagree, and I think you've got a good idea there. But even then, that idea would only take advantage of 1 or 2 extra threads, and that task would have a lot of downtime. I'm not criticizing the idea, but exemplifying how little you can do to multi-thread a game in an efficient and reliable way on a CPU. If the task is too important, lower-end systems won't handle it well. If the game immediately depends on the task's output, adding more threads will hurt framerate due to clock/sync issues. And if the task can so effortlessly and dynamically be split into as many threads as are available, it's probably better off being punted to a GPU.
https://forums.guru3d.com/data/avatars/m/196/196426.jpg
These are not, and have never been, gaming CPUs. They are workstation / home-server / small-business server CPUs, and for those workloads they are awesome! (You can game on them, yes, but that's not why you buy them in the first place!)

I for one want to replace my main media server (which is using a pretty powerful Ivy Bridge quad core) with a new TR, as the 4-core CPU gets destroyed by Plex background encodes when more than two people watch something at the same time. And I want to give it more work as well if the power is available, like video encoding (which I currently do on my main gaming PC), so I don't block my gaming and other activities during the encode. (I used to do that at night while sleeping, but those 4Ks take a damn long time!)

I'd also like to start running some VMs on the new server, to give two of my colleagues who only have (slow) laptops the ability to do some high-performance workloads as well, which would otherwise overheat their laptops like mad and take much longer (not talking about mining or anything like that... it's just work related; media company). The computer is also a development webserver (for my work), and to be honest the quad-core has run its course already. It struggles quite hard with everything.

I could of course switch to an 8700K for 6 cores, but WHY??? When I can have 16 cores available and the possibility to upgrade to 128 GB of RAM? I welcome our new AMD overlords and their godly 16-core-@-home CPUs.
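That Plex scenario, by the way, is exactly the embarrassingly parallel case where core count wins: each transcode is an independent process, so they scale almost linearly with cores. A toy Python model of it (the job names are made up; a real server would spawn an ffmpeg-style encoder per stream):

```python
import multiprocessing as mp
import time

def transcode(job):
    # Stand-in for one independent encode; in reality each of these
    # would be its own encoder process chewing on its own cores.
    time.sleep(1.0)
    return f"{job} done"

if __name__ == "__main__":
    jobs = [f"stream_{i}" for i in range(4)]  # four viewers at once
    with mp.Pool(processes=4) as pool:       # one worker per stream
        for result in pool.imap_unordered(transcode, jobs):
            print(result)
```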
https://forums.guru3d.com/data/avatars/m/270/270169.jpg
schmidtbag:

"Quote from schmidtbag" Hmm. Seeing as there's no signs of the 2800(X) for AM4, I would've thought they'd use the 2800X for the Threadripper 8-core model, and shift all the other numbers down. That way, the numbers line up better: the 2920X could have 12 cores and the 2960X could have 16 cores. I assume they're still not going to make 10 and 14 core variants? Anyway - I suspect these Threadrippers will be the most interesting for Ryzen. First gen Ryzen's OC potential was disappointing. 2nd gen's is underwhelming, but for Threadrippers, you wouldn't really want to OC much beyond 4.2GHz anyway until you start running into cooling or power issues. "End of quote"
I honestly don't understand how ANYONE could find the entirety of the Zen+/Ryzen 2nd Gen refresh "disappointing"... For a simple "process shrink & tweak" style refresh, Ivy Bridge was disappointing; IMO, Zen+ is anything but. Sure, the clocks are a bit lower than many anticipated, but they managed to massively improve Zen's biggest architectural (vs process-related) weak spot, with major latency improvements across the board. That pays massive dividends for OG Zen's biggest performance weak spot, CPU-bound gaming, and it will have MASSIVE effects on the MCM-based Threadripper, where latency is even more of an issue. All of that led to a general +3% improvement in IPC.

And that's all before even talking about the absolute freaking GAME-CHANGER that is Precision Boost 2, which lets everyone get almost all of their CPU's power without needing to do anything (you can now separate CPUs into before-PB2 and after-PB2; if Intel doesn't end up adopting a similarly dynamic core-by-core clock system in response to replace their current TB2+3, I'll eat my shorts). And of course, if you do happen to have the power delivery and cooling to push things further, there's Precision Boost Overdrive (which exposes all the major PB2 parameters to the user) to make it happen, legitimately making traditional multiplier-based overclocking practically obsolete. Spending about a day dialing in my PBO settings on my GB AX370-Gaming 5 for my 2700X (coming from a 1700; I was able to upgrade on the super cheap) only convinced me of that even further.
https://forums.guru3d.com/data/avatars/m/270/270169.jpg
schmidtbag:

I wouldn't worry about it. It seems Ryzen's wattage really starts to grow exponentially once you exceed 3.8GHz, and the 2700X seems to dabble beyond that. If they kept the clocks more similar to the 1800X, I'm sure it'd actually be lower than the original 95W TDP.
And you'd be right, and by WAAAAY more than you were probably expecting. 12nm Zen+'s efficiency sweet spot is so much deeper than Zen's it's kind of insane. It's just that the pushed-to-its-limits 2700X (at stock, of course) obviously doesn't really showcase this fact.

OP: we do not send traffic to that site after being accused one too many times. Thanks.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Cooe:

I honestly don't understand how ANYONE could find the entirety of the Zen+/Ryzen 2nd Gen refresh "disappointing"...
EDIT: I didn't imply the entirety of the 2nd gen was disappointing; I only thought the overclocking was. As for your two other paragraphs, I agree with all of them, but I don't see how they're relevant to anything I said. I was merely expressing my disappointment with the overclockability, nothing more. I didn't even say Zen+ has bad overclockability, because it doesn't. But compared to Intel (remember, they have similar IPC), it is unimpressive.
https://forums.guru3d.com/data/avatars/m/270/270169.jpg
schmidtbag:

"Quote from schmidtbag" And yet, I could argue the inverse. It's an opinion - you don't have to like it or agree. It's fine to be a fan of a product (or even something more personal, such as yourself) and acknowledge the shortcomings. Humility is important. Nothing is perfect, and that's ok. To not find anything disappointing in Zen+ suggests it was a perfect refresh, and that just simply isn't true. To clarify, I did not imply the entirety of the 2nd gen was disappointing. It was great and something I would recommend most people get, but I wouldn't recommend it to everyone. As for your 2 other paragraphs, I agree with all of them, but I don't see how they're relevant to anything I said. I'm merely expressed my disappointment with the overclockability, nothing more. I didn't even say Zen+ has bad overclockability, because it doesn't. But compared to Intel (remember, they have a similar IPC), it is unimpressive. "End of quote"
Ahh, my bad, I didn't pick up on you specifying that you were talking about overclocking specifically and not Zen+ as a whole; in that case I sincerely apologize, *tips hat*. And in that case, I'd completely agree; it was a little disappointing. Sadly, that's something entirely out of AMD's hands and entirely in GlobalFoundries'. For the things AMD actually had control over, I think they absolutely knocked Zen+ right outta the park. And I'm well aware that everyone is entitled to their own opinions; that's why I said "I don't understand", not "everyone who disagrees with me is wrong because my opinion is the correct one; end of story".