Both Mantle and DX12 can combine video memory
A M D BugBear
reix2x
As I see it, the issue is in the game development community; it isn't unified at all. Each game studio makes its games as it wants, even optimizing a game for whoever pays more money (AMD or Nvidia), or "unoptimizing" it for PC in favor of consoles.
Year by year the hardware vendors make better and better GPUs, but the new games we see are not well optimized at all (sure, there are exceptions).
(.)(.)
Unless it's easy to implement, doesn't require an engine built specifically for it, and can be applied to any game, it won't be massively adopted. Just like DX11 features.
A M D BugBear
EspHack
^ Maybe 3x 4K monitors in Eyefinity.
DeskStar
A M D BugBear
brunopita
Hope this works for 3D render engines using GPUs; it's going to be amazing.
-Tj-
Ryu5uzaku
ScoobyDooby
-Tj-
^
Bigger rock cliffs, or rocks and their textures (something with rocks in general), have strange latency issues, at least that's what I saw. It didn't drop that much here though, from ~70-80 fps to the mid 55s for a split second, and only if I panned the camera really slowly (ultra textures for "6 GB").
KissSh0t
SamW
I guess it depends on how many shaders you have, but is cutting the frame buffer in half going to save you that much in memory?
Unless you actually have full memory sharing, you are still going to have to cache textures and geometry in both cards separately. The memory saved by being able to load some data into one card but not the other seems inconsequential. I don't see a case (other than completely different scenes being rendered by each card) where a texture is sent to one GPU but, for some reason, the other GPU doesn't need it and can cache something else instead. If they are rendering the same scene, both GPUs will inevitably need that texture at some point, so each GPU will need a copy and two copies will still be required.
Also, split-frame rendering seems like it could break a lot of fullscreen shaders, unless you overdraw enough across the split boundaries to get the data the shader needs.
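To put rough numbers on that point, here is a minimal back-of-the-envelope sketch in Python; the render-target count and the 3 GB texture budget are illustrative assumptions, not figures from the article or the thread.

```python
# Rough estimate of how much VRAM splitting the frame buffer could save,
# versus the texture/geometry data that still has to be duplicated on both GPUs.
# All budgets below are illustrative assumptions, not measured figures.

BYTES_PER_PIXEL = 4            # RGBA8 render target
WIDTH, HEIGHT = 3840, 2160     # 4K output
RENDER_TARGETS = 6             # assumed G-buffer + depth + post-process targets

frame_buffer_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL * RENDER_TARGETS / 1024**2
texture_budget_mb = 3000       # assumed textures/geometry resident per GPU

# With split-frame rendering each GPU only needs roughly half the frame buffer,
# but still needs the full texture set to shade its half of the scene.
saved_mb = frame_buffer_mb / 2
print(f"Frame buffer total:          ~{frame_buffer_mb:.0f} MB")
print(f"Saved per GPU by splitting:  ~{saved_mb:.0f} MB")
print(f"Still duplicated (textures): ~{texture_budget_mb} MB")
```

Under those assumptions the whole frame buffer is only on the order of 200 MB, so halving it saves roughly 100 MB per GPU, which is indeed small next to a multi-gigabyte texture set that both cards still have to hold.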
Fox2232
So many people don't get it at all and just ride the hype train.
It has to be derailed with reality!
What is the memory bandwidth of your card: 150 GB/s? 220 GB/s? 320 GB/s?
And how much data can that second card deliver via PCIe every second while the first card is accessing data from the second at the same time?
PCIe 3.0 x16 is roughly 16 GB/s; if we assume no communication other than between the two GPUs, each GPU can access data from the other's VRAM at about 8 GB/s.
If it needs to fetch just 512 MB from there to render each frame, you get 16 fps at best.
So, dual-GPU cards: how much faster is their interconnect on the PCB? Is it some miraculous 64 GB/s link? In that case, 32 GB/s per direction with 512 MB of accessed data per frame would allow 64 fps at best.
And we are talking about a fairly small chunk of data (512 MB), considering that people seem to be hyping happily using 2x3 GB to 2x6 GB of VRAM.
This is good news, but for hardware designed in the future with this in mind.
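As a sanity check on the arithmetic above, here is a minimal sketch in Python; the 512 MB per frame and the link speeds are the commenter's assumed figures, not measurements.

```python
# Sanity check of the bandwidth arithmetic above: if a GPU must pull a fixed
# amount of data from the other card's VRAM every frame, the link bandwidth
# caps the frame rate no matter how fast the local VRAM is.
# The figures below are the assumptions used in the comment, not measurements.

def max_fps(link_gb_per_s: float, remote_mb_per_frame: float) -> float:
    """Upper bound on fps when every frame needs remote_mb_per_frame over the link."""
    return (link_gb_per_s * 1024) / remote_mb_per_frame

# Two GPUs sharing a PCIe 3.0 x16 link (~16 GB/s), ~8 GB/s effective per GPU.
print(max_fps(8, 512))    # -> 16.0 fps at best
# Hypothetical 64 GB/s on-board interconnect, 32 GB/s per direction.
print(max_fps(32, 512))   # -> 64.0 fps at best
```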
TheDeeGee
I wonder why this took so long.
DeskStar
I get what you are trying to say, but from my knowledge of GPUs and their VRAM (I could still be wrong here), doesn't the RAM control the flow of information (textures, cache and other stuff) coming from the motherboard's RAM/CPU communication?
Theoretically, with a fast enough CPU and motherboard RAM, you should be able to talk to the graphics cards all day long with no issues, no matter how many of these items are in the equation.
I get that there are always going to be different speeds between how things communicate, and therefore the potential for a bottleneck. But even as it sits today, our SSDs are still the bottleneck for throughput to the rest of the system, which is why I'll never go with anything less than a RAID 0 config for performance. If things are done right and the system RAM is used properly to hold and send data to and from the CPU/GPU, we end up with no performance issues at all. I would just think it would all work similarly, albeit using more RAM properly.
Just my thoughts on how I perceive the system talking to itself.
I do not think I'll upgrade my system config until something like Nvidia's NVLink or something similar becomes mainstream, where the CPU and GPU can talk directly with nothing interfering, like the bus speed limitations or other hardware limitations we have today.
Ryu5uzaku
A M D BugBear
So when do you all think this will go into full effect?
I think, after they mentioned it, probably as early as this summer. Do you all think this will go into full effect once the Radeon 3xx series is released? I think it's very possible.
Elder III
My Eyefinity rig hopes this is true. I would love to get full use of my "8 GB", even if it's only in future games.