AMD Holds Press Conference 31st of May – Likely announces Vega and/or HEDT
zer0_c0ol
the ES samples are out already 😀
Humanoid_1
and they have quad-channel memory. They say dual-channel memory was partly responsible for limiting Ryzen's gaming performance in cases where the GPUs were facing light loads.
Will be very interesting to see how these perform on the new S3 (internal code name) socketed platform 🙂
Noisiv
I dunno if we should be smiling. OEMs are less than excited about Ryzen and AMD is still fighting for their lives.
And they still haven't learned how to do a proper product launch.
How many times does it have to be said: having great hardware is just half of the success equation.
I am looking at Vega/Ryzen combo and I just know that getting both would be a double trouble.
Take Linux. GCN tanks compared to NV, and then Ryzen tanks compared to Intel. And on top of that, FreeSync doesn't work. That's double/triple tanking 😀
And all that within an open source software stack, which makes AMD's famous finger-pointing and blaming of others impossible.
Lane
https://www.netmarketshare.com/operating-system-market-share.aspx?qprid=10&qpcustomd=0
But the problem is not really Linux; it's old OpenGL code, which is completely outclassed by Nvidia's code paths and routines. I have never understood how an open-source community could be so bad at keeping its code "open" and vendor-neutral. (Never mind that the president of Khronos is also a senior Nvidia executive.)
Old OpenGL was a joke on its own; let's hope everyone moves quickly to Vulkan (at least for gaming; I use plenty of OpenGL-based pro 3D software).
One example: it was AMD who financed and rewrote Blender's OpenGL viewport to accelerate the realtime render.
Well... Linux represents 2.09% of the OS market share... and I find it hard to imagine that 100% of them are playing games on Steam. (You'll tell me the same goes for Windows or OSX, but honestly, 1.2% or 2% won't make any difference...)
Lane
Is there something in the phrase you quoted that you don't understand?
PrMinisterGR
LOL. Not only is it not tanking, but if you have any sort of recent kernel (4.10+), you actually get an extra 20%+ on Geekbench SMT tests, while Intel's performance is the same between the two. Ryzen is probably the best CPU you can get for Linux right now, especially if you go source-based and compile for it. It destroys the 6900K and a lot of much more expensive, even dual-socket Xeons.
Freesync works. G-Sync doesn't work when a compositor is present, which means you have to kill your DE's compositor to make it work. That means you don't get window borders drawn.
For Freesync you need AMDGPU-PRO, which is partly open source. And it works.
Not only are you completely clueless, you're one of the most rabid fanboys I've seen in a while. The only reason I respond to your posts lately is to make sure people don't fall for any of the things you say.
[youtube]iYWzMvlj2RQ[/youtube]
You seriously have no idea what you're talking about. AMD cards currently have by far the best open source drivers on Linux. If you say the open source driver doesn't matter, you'll just prove you literally have no clue what you're saying.
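[Editor's note: the "go source-based and compile for it" point above can be sketched roughly as follows. This is a minimal, hypothetical build-flag example assuming GCC 6.1 or newer (the first release with a Zen target); the file name `app.c` is a placeholder, not anything from the post.]

```shell
# Hypothetical example: target the Zen microarchitecture explicitly.
# GCC 6.1+ accepts -march=znver1 for first-generation Ryzen.
gcc -O2 -march=znver1 -mtune=znver1 -o app app.c

# On a source-based distro such as Gentoo, the same idea goes in
# /etc/portage/make.conf so every package is built with it:
#   CFLAGS="-O2 -march=znver1 -pipe"
#   CXXFLAGS="${CFLAGS}"
```

On a machine that already is a Ryzen, `-march=native` achieves the same thing without naming the target.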
MurdockNVIDIA
These GTX 20 series specs already make Vega look old
NVIDIA GTX 20 Series Hardware SPECS and release date:
NOTE: GTX 20 series GPUs will NOT make use of HBM2.
NVIDIA TITAN Xv
- GV102 "Volta" (Fully enabled GV102)
- 16.5 Billion Transistors
- 42 SMs
- 5,376 CUDA Cores
- 336 Texture Units
- 96 ROPs
- 384-bit
- 24 GB GDDR6
- 768 GB/s Memory Bandwidth
- TSMC 12nm FFN
Release Date : Q3 2018
NVIDIA GTX 2080 Ti
- GV102 "Volta" (Cut down GV102)
- 16.5 Billion Transistors
- 40 SMs
- 5,120 CUDA Cores
- 320 Texture Units
- 88 ROPs
- 352-bit
- 22GB GDDR6
- 704 GB/s Memory Bandwidth
- TSMC 12nm FFN
Release Date : Q4 2018 Around The Holidays
NVIDIA GTX 2080
- GV104 "Volta" (Fully enabled GV104)
- 11 Billion Transistors
- 28 SMs
- 3,584 CUDA Cores
- 224 Texture Units
- 64 ROPs
- 256-bit
- 16 GB GDDR6
- 512 GB/s Memory Bandwidth
- TSMC 12nm FFN
Release Date : Q1 2018
NVIDIA GTX 2070
- GV104 "Volta" (Cut Down GV104)
- 11 Billion Transistors
- 21 SMs
- 2,688 CUDA Cores
- 168 Texture Units
- 64 ROPs
- 256-bit
- 16 GB GDDR6
- 410 GB/s Memory Bandwidth
Release Date : Q1 2018
NVIDIA GTX 2060 Ti
- GV106 "Volta" (Fully enabled GV106)
- 6.6 Billion Transistors
- 14 SMs
- 1,792 CUDA Cores
- 112 Texture Units
- 48 ROPs
- 192-bit
- 12 GB GDDR6
- 300 GB/s Memory Bandwidth
- TSMC 12nm FFN
Release Date : Q3 2018
Aura89