Tech preview: Threadripper 1900X - 1920X & 1950X


Wow, how disappointed are Ryzen 1700/1800 builders right now? An 8-core/16-thread Threadripper got snuck in. I'm sure they kept that quiet to keep selling Ryzen. I had a sneaking suspicion they might do this, and luckily I waited to pull the trigger on Ryzen. Now just wait and see how the 1900X gets along. Vega seems to be a clunker. The 1080 has been around too long now for AMD to upset that market, and now there are the ridiculous power requirements - a 1000 W PSU needed just to run Vega? A Threadripper build with a Vega card is going to cause so many heat problems inside a case. As someone else stated, you'll need a separate power company account to run the computer! With an overclocked Threadripper and an overclocked Vega card, you'll be able to pop popcorn on top of your rig. That's OK, I guess, if you watch Netflix on your computer. It will also cause degradation of all the electronics in your rig. THREADRIPPER YES! VEGA NAH, NOT FOR ME.
What's different from a few years ago, other than that we had a generation or so of less power-hungry mid-range cards?
I am not in the least disappointed, really. I got my CPU three months ago and it has been running great. A Threadripper build would no doubt have cost more, and would have had higher memory bandwidth, but otherwise it should be roughly the same, so I'm all fine.
This, plus the AM4 socket will be around for a while, so you can upgrade to a new CPU down the road without switching to a new mobo.
I'm buying Intel. Someone needs to support the competition too 🙁
So much for the almighty HBM2: it has the same bandwidth as a Ti, except, you know, like most of us my Ti is running its memory at an effective 12 Gbps, so that's well over 500 GB/s of bandwidth. If HBM1 is any indication, it won't overclock very well and will end up being worse than GDDR5X. Unless HBM3 is leaps and bounds ahead, I'd say GDDR6 is the future.
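As a quick sanity check on the bandwidth figures above, here is a rough back-of-the-envelope calculation (a sketch in Python; the 352-bit bus width of the 1080 Ti and the Vega HBM2 figures are assumptions added for illustration, not stated in the post):

```python
# Peak theoretical memory bandwidth: bus width (bits) / 8 * effective data rate (Gbps) = GB/s.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# GTX 1080 Ti GDDR5X overclocked to an effective 12 Gbps (the "12 GHz" above)
print(bandwidth_gb_s(352, 12.0))   # 528.0 GB/s - "well over 500"
# RX Vega 64 with two HBM2 stacks: 2048-bit interface at ~1.89 Gbps per pin
print(bandwidth_gb_s(2048, 1.89))  # ~484 GB/s
```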
HBM isn't the problem; I'm betting the GPU itself is the bottleneck. GCN isn't aging so well - Vega has more serious problems than memory bandwidth and overclockability. Don't forget, Nvidia is also interested in HBM, so whatever negative speculation you have about it clearly isn't true. It isn't often that Nvidia follows in the footsteps of a competitor so obviously.
Wow, how disappointed are Ryzen 1700/1800 builders right now? An 8-core/16-thread Threadripper got snuck in. I'm sure they kept that quiet to keep selling Ryzen. I had a sneaking suspicion they might do this, and luckily I waited to pull the trigger on Ryzen. Now just wait and see how the 1900X gets along.
Maybe 1800X users (who didn't get the sale price) are disappointed, but definitely not 1700 users. Ryzen 7 is for a completely different market. You get fewer memory channels, but from what I recall, memory frequency has a bigger performance impact. You get fewer PCIe lanes, but most people only want one GPU, and any other expansion cards can easily be handled by the chipset. Threadripper gets you more M.2 slots, but most people are better off with just one (for performance) plus a separate SATA drive (for mass storage). Ryzen 7 also has the advantage of more heatsink availability and ITX form factors; the 1900X is currently limited to full ATX with almost no coolers to choose from. So, all that being said, the only people who would've regretted their purchase are 1800X owners who wanted a little bit extra but settled for less.
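To put the memory-channel point into rough numbers (a sketch; DDR4-3200 is just an illustrative speed, and real-world gains depend heavily on the workload):

```python
# Peak theoretical DRAM bandwidth: channels * 8 bytes per transfer * transfer rate (MT/s).
def dram_bandwidth_gb_s(channels: int, transfer_rate_mts: int) -> float:
    return channels * 8 * transfer_rate_mts / 1000  # GB/s

print(dram_bandwidth_gb_s(2, 3200))  # Ryzen 7 on AM4, dual channel:      51.2 GB/s
print(dram_bandwidth_gb_s(4, 3200))  # Threadripper on TR4, quad channel: 102.4 GB/s
```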
Maybe 1800X users (who didn't get the sale price) are disappointed, but definitely not 1700 users. Ryzen 7 is for a completely different market. You get fewer memory channels, but from what I recall, memory frequency has a bigger performance impact. You get fewer PCIe lanes, but most people only want one GPU, and any other expansion cards can easily be handled by the chipset. Threadripper gets you more M.2 slots, but most people are better off with just one (for performance) plus a separate SATA drive (for mass storage). Ryzen 7 also has the advantage of more heatsink availability and ITX form factors; the 1900X is currently limited to full ATX with almost no coolers to choose from. So, all that being said, the only people who would've regretted their purchase are 1800X owners who wanted a little bit extra but settled for less.
As long as the technology keeps advancing as fast as possible, I don't really care about this. Too many years of the CPU market being stuck with tiny yearly performance bumps... 0% exciting!
Maybe I'm wrong, but I've been seeing something strange here - could this be a gaming forum? Because I see people crying because Ryzen is 5 FPS slower than a 7700, crying because they don't want 16 cores. Come on guys, think about the hardware, about the future. Do you want the same tech over and over again?
Maybe I'm wrong, but I've been seeing something strange here - could this be a gaming forum? Because I see people crying because Ryzen is 5 FPS slower than a 7700, crying because they don't want 16 cores. Come on guys, think about the hardware, about the future. Do you want the same tech over and over again?
The vast majority of users on here are gamers first.
So wait a minute, I'm seeing conflicting information here. Is the 1080-equivalent card going to be $499 or $599?
The vast majority of users on here are gamers first.
Sort of - bragging rights over numbers are the top priority; a good gaming experience is distinctly the #2 priority for most people here. I've seen many forums (here included) where people actively and willingly choose higher frame rates over reducing microstutter. People will also willingly pay an extra $150 for an additional 5-15 FPS. Usually nobody even glances at average minimum frame rates.
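For context on the "minimum frame rate" metrics discussed here and below, this is roughly how average FPS and 1% / 0.1% lows are derived from a frame-time log (a sketch; review sites differ in the exact method):

```python
# Average FPS vs. 1% / 0.1% "low" FPS from a list of per-frame render times in milliseconds.
# A high average combined with poor lows is what shows up in-game as microstutter.
def fps_metrics(frame_times_ms):
    n = len(frame_times_ms)
    avg_fps = 1000 * n / sum(frame_times_ms)
    worst_first = sorted(frame_times_ms, reverse=True)

    def low_fps(percent):
        count = max(1, int(n * percent / 100))  # the slowest X% of frames
        worst = worst_first[:count]
        return 1000 * len(worst) / sum(worst)

    return avg_fps, low_fps(1), low_fps(0.1)

# 990 smooth ~10 ms frames plus 10 hitches at 50 ms: the average looks fine, the lows do not.
frames = [10.0] * 990 + [50.0] * 10
print(fps_metrics(frames))  # ~96 FPS average, but 20 FPS 1% and 0.1% lows
```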
So wait a minute, I'm seeing conflicting information here. Is the 1080-equivalent card going to be $499 or $599?
The $599 price is for a "Radeon Pack", which, as I understand it, includes the faster air-cooled RX Vega plus a $100 voucher toward an 1800X + X370 mobo. The GPU alone is $499.
Sort of - bragging rights over numbers are the top priority; a good gaming experience is distinctly the #2 priority for most people here. I've seen many forums (here included) where people actively and willingly choose higher frame rates over reducing microstutter. People will also willingly pay an extra $150 for an additional 5-15 FPS. Usually nobody even glances at average minimum frame rates.
It's funny you say that, as most sites that show 1% and 0.1% numbers showed that Ryzen was still at a disadvantage compared to the 7700. It's just users (most of whom have never used a 7700) claiming smoother gameplay than the 7700. I look at the facts presented to me in reputable reviews and base my decision off of that. I don't have the expendable cash to buy one of each system and test them back to back.
The $599 price is for a "Radeon Pack", which, as I understand it, includes the faster air-cooled RX Vega plus a $100 voucher toward an 1800X + X370 mobo. The GPU alone is $499.
Ah, okay. That sounds like an excuse to put most of your cards out in packs...
It's funny you say that, as most sites that show 1% and 0.1% numbers showed that Ryzen was still at a disadvantage compared to the 7700. It's just users (most of whom have never used a 7700) claiming smoother gameplay than the 7700. I look at the facts presented to me in reputable reviews and base my decision off of that. I don't have the expendable cash to buy one of each system and test them back to back.
I said nothing about which product is better or in what way; I'm just saying people tend to ignore what gives the best experience and prefer numbers over their own perception. And no, I'm not the type who says "the human eye can only see 30 FPS", but people will pay more or push their hardware to unnecessary limits for just a number - a number they won't perceive over a competitor or the next notch down. If someone wants to spend the extra money and deal with the extra heat without any noticeable difference, go ahead - not my problem. But I personally prefer practicality over a petty set of numbers. EDIT: Obviously, if a more expensive or faster product makes a difference you can actually perceive, and going without it would be a problem, then it's worth getting. In other words, people should buy what suits their needs, not their ego. That being said, even if you want to go the Intel route, an overclocked i5 can and will handle most games (including games that use 5+ threads) at 60 FPS just fine, but people will still buy the i7 anyway, because of numbers.
Ah, okay. That sounds like an excuse to put most of your cards out in packs...
The cards will also be available alone, except for the water-cooled version.
The cards will also be available alone, except for the water-cooled version.
Yeah, but my question is: in what quantity? Are 20% of the available cards going to be put into bundles, or 80%? If there aren't enough standalone cards available for the consumers who want to buy them, then some consumers will have to choose between nothing and a bundle. I just wonder if that's what they're trying to engineer.
I'm sure you knew, but just in case: that's a photo of the character Patrick Bateman from the film American Psycho. It is weird, though!
Yes, the actor is Christian Bale...
You could actually double the bandwidth of HBM by using more stacks, like NVIDIA does on its GV100 GPUs - but of course more stacks cost more money (and GV100 costs a fortune, though it's a "pro" datacenter GPU). In any case, I tend to agree: HBM is largely overrated for consumer use. To get an actual bandwidth advantage you would need four stacks, which costs a lot of money, and at two stacks GDDR5X and the future GDDR6 will match or even beat it - at a lower price point at that.
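A rough comparison of the configurations mentioned above (a sketch; the per-pin data rates are assumptions chosen for illustration):

```python
# Peak bandwidth = interface width (bits) / 8 * per-pin data rate (Gbps), in GB/s.
def bw(width_bits: int, gbps_per_pin: float) -> float:
    return width_bits / 8 * gbps_per_pin

print(bw(2 * 1024, 1.89))  # 2 HBM2 stacks (Vega-style):   ~484 GB/s
print(bw(4 * 1024, 1.75))  # 4 HBM2 stacks (GV100-style):  ~896 GB/s
print(bw(352, 11.0))       # 352-bit GDDR5X at 11 Gbps:     484 GB/s
print(bw(384, 14.0))       # 384-bit GDDR6 at 14 Gbps:      672 GB/s
```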
The big advantage of HBM2 compared to GDDR5 is power consumption - a big nod to HBM2, from what I've read so far. Additionally, it will be interesting to see whether the Vega GPU engine can utilize all of that bandwidth as well. I think at this point the power savings alone are reason enough to use it. From HH's article: "The graphics engineers from AMD claimed that HBM2 will offer you 5x the power efficiency compared to any other graphics memory including GDDR5, and yes that is huge." We shall see... I don't quite know what to think of all of these bundles right out of the gate, however. AMD is being extremely aggressive this year - I'm wondering if Vega will be powerful enough to do 4K in most cases at a decent frame rate. If so, then the bundles ought to lead to massive sales, especially for the $399 non-bundle variant. Looking forward to HH's hands-on review!
Sort of - bragging rights over numbers are the top priority; a good gaming experience is distinctly the #2 priority for most people here. I've seen many forums (here included) where people actively and willingly choose higher frame rates over reducing microstutter. People will also willingly pay an extra $150 for an additional 5-15 FPS. Usually nobody even glances at average minimum frame rates.
I agree completely. I sort of hate what 3dfx did in starting the frame-rate craze many years back... but back when average GLIDE frame rates of 20-30 fps were up against competitors' cards on less mature APIs sometimes maxing out as low as ~5 fps, it made a lot more sense than it does now, since it marked the difference between "playable" frame rates and slide shows. Intel's discrete GPUs of those days (the infamous i7xx 3D cards that Intel later dropped because they could not compete with 3dfx/nVidia - I owned two of them) and the ATI Rage Fury 128 (I believe that was the brand name; I had a very difficult time getting >10 fps out of that card and so returned it) struggled, the Matrox Millennium was *the* card to own for 2D but sucked at D3D gaming in terms of frame rates and image quality, and 3dfx whipped everyone for several years in terms of playable frame rates. Etc. Today, in a blind test most people couldn't see the difference between 120 fps and 80 fps (imagine a test where both displays were running at 80 fps but the end user was asked to point out the display running at 120 fps... ;) lots of people would find it when it wasn't even there!), yet we have benchmark bar charts illustrating differences of ~1 fps between GPUs and CPUs, which is certainly not worth talking about at all - 1 fps differences simply are not real and would never be perceived by end users. Lately, I'm enjoying AMD's new "Enhanced Sync" option for the RX 480 and up, introduced last week in the 17.7.2 WHQL Crimson drivers. The idea is that you run the game with vsync off but set the game's Crimson profile to Enhanced Sync, and it should run very close to full-bore vsync-off rates with little to no tearing. It's also supposed to help in games whose engines limit the frame rate to 30 fps or less. It's become my new default and seems to work as advertised - smooth gaming with no stutter and almost no tearing - much better than plain vsync off, imo.
Only the Vega 56 seems reasonable, and only if it outperforms the GTX 1070. I still don't understand why the Vega 64 draws so much more power for just 10~15% more performance - nah. AMD, add more ROPs please; tessellation isn't everything...
I agree completely. I sort of hate what 3dfx did in starting the frame-rate craze many years back... but back when average GLIDE frame rates of 20-30 fps were up against competitors' cards on less mature APIs sometimes maxing out as low as ~5 fps, it made a lot more sense than it does now, since it marked the difference between "playable" frame rates and slide shows. Intel's discrete GPUs of those days (the infamous i7xx 3D cards that Intel later dropped because they could not compete with 3dfx/nVidia - I owned two of them) and the ATI Rage Fury 128 (I believe that was the brand name; I had a very difficult time getting >10 fps out of that card and so returned it) struggled, the Matrox Millennium was *the* card to own for 2D but sucked at D3D gaming in terms of frame rates and image quality, and 3dfx whipped everyone for several years in terms of playable frame rates. Etc. Today, in a blind test most people couldn't see the difference between 120 fps and 80 fps (imagine a test where both displays were running at 80 fps but the end user was asked to point out the display running at 120 fps... ;) lots of people would find it when it wasn't even there!), yet we have benchmark bar charts illustrating differences of ~1 fps between GPUs and CPUs, which is certainly not worth talking about at all - 1 fps differences simply are not real and would never be perceived by end users. Lately, I'm enjoying AMD's new "Enhanced Sync" option for the RX 480 and up, introduced last week in the 17.7.2 WHQL Crimson drivers. The idea is that you run the game with vsync off but set the game's Crimson profile to Enhanced Sync, and it should run very close to full-bore vsync-off rates with little to no tearing. It's also supposed to help in games whose engines limit the frame rate to 30 fps or less. It's become my new default and seems to work as advertised - smooth gaming with no stutter and almost no tearing - much better than plain vsync off, imo.
120 and 80? I could probably tell in a game like CS/Quake, but generally no, not much of a difference - but when a new game comes out and the difference is the same 40%, only between 60 and 40, it becomes immediately apparent. And honestly, from experience: I went from a 1080 to a 1080 Ti on a QHD monitor and the difference was pretty significant. There are too many games where max settings sit on the cusp of 60 fps, and the 1080 Ti makes it there while the ~30% slower 1080 just doesn't cut it without sacrificing settings.