Both Mantle and DX12 can combine video memory

https://forums.guru3d.com/data/avatars/m/118/118854.jpg
This has to be the best news about graphics since SLI/CF existed.
AGREED. Maybe the best ever since the introduction of the first GPU to use a unified shader architecture, the GeForce 8800 GTX!! The only card I remember that went to almost double the performance of its predecessor, the GeForce 7.
https://forums.guru3d.com/data/avatars/m/220/220755.jpg
As I see it, the issue is in the game development community: it isn't unified at all. Each game studio makes its games as it wants, even optimizing a game for whoever pays more money (AMD or Nvidia), or "unoptimizing" it for PC in favor of consoles. Year by year the hardware vendors make better and better GPUs, but the new games we see are not well optimized at all. (Sure, there are exceptions.)
https://forums.guru3d.com/data/avatars/m/230/230424.jpg
Unless it's easy to implement, doesn't require an engine built specifically for it, and can be applied to any game, it won't be massively adopted. Just like DX11 features.
https://forums.guru3d.com/data/avatars/m/118/118854.jpg
Still way better for 290x 8GB users, as they'll have full 16GB!!!11one 🙂
Games won't move, period, at that amount of VRAM, whether at 8 or at a crazy 16 GB; they wouldn't even budge.
https://forums.guru3d.com/data/avatars/m/220/220188.jpg
^ Maybe 3x 4K monitors in Eyefinity.
https://forums.guru3d.com/data/avatars/m/232/232349.jpg
As soon as games hit 6-8 GB of VRAM, your quad Titans, even if they scaled at 100% full speed, all four of them, will be on their knees. 8 GB of VRAM? Forget it. Maybe 6 GB with tuned settings, but at a full 8 GB, even four overclocked GTX 980s at 100% scaling will be on their knees. 5-6 GB sounds good; beyond that, our cards aren't fast enough yet to do something like that and maintain very high fps at the same time.

LOL, and LG is talking about 8K resolution this year? PLEASE!! Games won't move, period, at that resolution. OLD games, perhaps; today's games? FORGET IT. If you own a 4K display, try maxing out with DSR; that should give you a hint how fast it will move, lol. 5 GB sounds perfect. Shoot, playing Far Cry 4 at slightly over 2560x1600 (DSR), plus maxed-out settings, plus Nvidia Inspector 2x MSAA + 2x SGSSAA, the sucker is moving at 17+ to 20+ fps; seems to be moving alright, looks great, lol. I can't imagine running Far Cry 4 at 8K with everything maxed out at 60 fps. Holy good god moly, that's NEVER going to happen for a LONG, LONG TIME.

As for Mantle + DX12, all I've got to say is this: it's about time. How many years later and they still hadn't found a way to stack the VRAM? I knew it was just a matter of time, good god, about time. I wonder when this will be coming out in full force? I don't expect it anytime soon. Just a matter of time before Nvidia comes up with theirs, lol.
Not sure where a lot of your information comes from, if anything other than opinions... And you still haven't touched on anything I've stated, only contradicted yourself in some ways. The Titans are fully capable of performing well with their 6 GB, and it would be pretty damn impressive to have 24 GB of VRAM truly accessible. Seeing the information panel in a game like Max Payne telling you your computer has 24 GB of VRAM is impressive as it is, but I couldn't imagine it being true... :puke2:!! I always thought it was weird how Max Payne does that.

I'm completely satisfied :banana: with how well my quad-SLI setup runs with my Titans OC'd through the roof (I'm not sure what you're running, or what you're using, to make such bold judgments). If the scaling doesn't do well, I simply switch off a card, slap on a bridge and go with triple. I'm just waiting to see if I want a 4K G-Sync monitor and whether they'll offer one with a potentially higher refresh rate, and I'm always looking to see if the Asus IPS version is up to snuff for gaming.

With the newer APIs sounding like they will allow amazing things with optimization and better throughput, everything could be just on the horizon. 4K 60 Hz has only become relevant recently, so I'm sure it hasn't been a true concern to code for it at all. With future driver optimizations, the scaling and performance will always improve over time. I just hope they do not let us old dogs fall by the wayside with future updates... If there's information behind your statements, then please provide it.
https://forums.guru3d.com/data/avatars/m/118/118854.jpg
Not sure where a lot of your information comes from, if anything other than opinions... [snip] ... If there's information on your statements then please provide.
Just saying: at those high amounts of VRAM, today's cards will not push frames as fast as people hope. I can understand 5-6 GB of VRAM, but 8 GB or higher? That requires a tremendous amount of GPU power, and keeping the frame rate at 60+ fps at all times? We won't be seeing that for quite some time. Do you use Nvidia Inspector to further enhance your AA on top of the game as well? I usually do that too, lol; drops the fps even further. Although I love your Titans, mainly because of the VRAM 🙂. Could use some extra here.
data/avatar/default/avatar19.webp
Hope this works for 3D render engines using GPUs; it's going to be amazing.
https://forums.guru3d.com/data/avatars/m/242/242471.jpg
Unless it's easy to implement, doesn't require an engine built specifically for it, and can be applied to any game, it won't be massively adopted. Just like DX11 features.
I would say the latter.
There is a catch, though: this is not done automatically. The new APIs allow memory stacking, but game developers will need to specifically optimize their games for it.
And we all know how much game devs like to fiddle with APIs just to make some extra GPU features possible... yeah, not much.
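That catch can be sketched in miniature. Below is a toy model (all names and numbers are made up for illustration; this is not any real API) of the difference between the old approach, where every resource is mirrored on both GPUs, and explicit per-GPU placement, where only genuinely shared data is duplicated:

```python
# Toy model: why "stacking" VRAM needs explicit developer work.
# Mirrored (AFR-style): every GPU holds a full copy of every resource.
# Explicit placement: shared resources are duplicated, the rest is split.
# Resource names and sizes below are illustrative only.

def effective_vram(resources, mirrored=True):
    """Per-GPU memory used (GB) for a list of (name, size_gb, shared) tuples."""
    if mirrored:
        used = sum(size for _, size, _ in resources)
        return {"gpu0": used, "gpu1": used}
    gpus = {"gpu0": 0.0, "gpu1": 0.0}
    flip = 0
    for _, size, shared in resources:
        if shared:
            # Both GPUs need their own copy of shared data.
            gpus["gpu0"] += size
            gpus["gpu1"] += size
        else:
            # Developer assigns this resource to one GPU only.
            gpus["gpu0" if flip == 0 else "gpu1"] += size
            flip ^= 1
    return gpus

scene = [
    ("textures",     2.5, True),   # sampled by both GPUs every frame
    ("geometry",     0.5, True),
    ("shadow_maps",  0.5, False),  # can live on a single GPU
    ("post_buffers", 0.5, False),
]

print(effective_vram(scene, mirrored=True))   # {'gpu0': 4.0, 'gpu1': 4.0}
print(effective_vram(scene, mirrored=False))  # {'gpu0': 3.5, 'gpu1': 3.5}
```

The gain is only as large as the share of data that can be placed on one GPU; everything both GPUs touch every frame still ends up duplicated, which is exactly why this takes deliberate per-game work.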
https://forums.guru3d.com/data/avatars/m/164/164033.jpg
No, just no. We will still have the full 8 GB of memory. But what this will do is great for not hitting that 3.5 GB wall a lot of people are seeing. I don't see it at all: I have hit 3.9 GB and not had issues. Same over at PCPer, who tested BF4 at 6K resolution and only found the 970 to perform around 12% slower than the 980, and they too did not see any hiccups. This will essentially raise that wall, not just for 970 users but for everyone. This HAS to be made compulsory for devs if a game uses a lot of memory!!
Tbh, considering the 512 MB segment is still faster than the PCIe bus, I would be amazed if it slowed anything down much. Maybe some people just have a defective 512 MB segment.
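For context, the rough numbers behind that claim. These are widely reported figures for the 970's partitioned memory and the PCIe bus; treat them as approximate, not official specs:

```python
# Back-of-the-envelope bandwidth comparison for the GTX 970 memory layout.
# All figures are commonly reported approximations, not official specs.
fast_segment_gbps = 196.0   # ~196 GB/s for the 3.5 GB partition
slow_segment_gbps = 28.0    # ~28 GB/s reported for the 0.5 GB partition
pcie3_x16_gbps    = 15.75   # theoretical PCIe 3.0 x16, one direction

# Even the "slow" segment is still comfortably faster than going
# out over the PCIe bus to system memory.
print(slow_segment_gbps / pcie3_x16_gbps)  # roughly 1.8x the PCIe link
```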
https://forums.guru3d.com/data/avatars/m/90/90726.jpg
Tbh, considering the 512 MB segment is still faster than the PCIe bus, I would be amazed if it slowed anything down much. Maybe some people just have a defective 512 MB segment.
Could also be a problem with certain engines, or even with the game itself, Shadow of Mordor. I had wild framerates at different places within the game: a solid 100 fps in some areas, then I'd look at nothing in particular to the left or right and it shot down to 40 fps... ??? And my memory usage was below 3.5 GB during these times. It happened with settings both high and low.
https://forums.guru3d.com/data/avatars/m/242/242471.jpg
^ Big rock cliffs, or rock textures, or something with rocks in general has strange latency issues; at least that's what I saw. But it didn't drop that much here, from ~70-80 fps to the mid 55s for a split second, and only if I panned the camera really slowly (ultra textures for "6 GB").
https://forums.guru3d.com/data/avatars/m/238/238382.jpg
Be better for 970 users, as they'll have 7GB :P
It took me a few moments to get it... xD I really hope game developers push out games with DX12 support much quicker than we have seen with previous iterations. And I hope Mantle gains more ground.
data/avatar/default/avatar20.webp
I guess it depends on how many shaders you have, but is cutting the frame buffer in half going to save you that much memory? Unless you actually have full memory sharing, you will still have to cache textures and geometry on both cards separately. The memory saved by being able to load some data onto one card but not the other seems inconsequential. I don't see a case (other than completely different scenes being rendered by each card) where a texture is sent to one GPU but the other GPU somehow doesn't need it and can cache something else instead. Both GPUs will inevitably need to render that texture at some point (if they are rendering the same scene), so each GPU will need a copy and two copies will still be required. Also, split-frame rendering seems like it could break a lot of fullscreen shaders, unless you overdraw enough across the split boundary to get the data the shader needs.
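The split-boundary worry in that last sentence can be put in numbers: a screen-space effect that samples up to r pixels away forces each GPU to overdraw a band of r extra rows past its half of the frame. A quick sketch (resolutions and kernel radii are illustrative):

```python
# Overdraw cost across a horizontal SFR split boundary.
# Each GPU renders half the rows plus a band of `radius` extra rows
# so that a screen-space filter has the neighboring pixels it samples.
def sfr_overdraw_fraction(width, height, radius):
    half_rows = height // 2
    return radius / half_rows  # extra work relative to one GPU's half

# A 16-pixel blur at 4K costs each GPU under 1.5% extra work...
print(sfr_overdraw_fraction(3840, 2160, 16))   # ~0.0148
# ...but a 200-pixel screen-space kernel is ~18.5% extra.
print(sfr_overdraw_fraction(3840, 2160, 200))  # ~0.185
```

So small kernels are cheap to patch over, but wide screen-space effects erode much of SFR's theoretical 2x split.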
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
So many people don't get it at all and race to board the hype train. It has to be derailed with reality!

What is the memory bandwidth of your card: 150 GB/s? 220 GB/s? 320 GB/s? And how much data can the second card send over PCIe every second while the first accesses data from it at the same time? PCIe 3.0 x16 ≈ 16 GB/s; if we omit all communication other than the two GPUs talking to each other, one GPU can access data in the other's VRAM at about 8 GB/s. If it needs to fetch just 512 MB from there to render each frame, you get 16 fps at best.

So, dual-GPU cards: how much faster is their on-PCB interconnect? Is it some miraculous 64 GB/s link? In that case, 32 GB/s per direction and 512 MB of accessed data would allow for 64 fps at best. And we are talking about quite small data chunks (512 MB), while many seem to be hyped about happily using 2x3 GB to 2x6 GB of VRAM.

This is good news, but for hardware designed in the future with this in mind.
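The arithmetic above, spelled out (link speeds and per-frame transfer sizes are the post's own illustrative figures):

```python
# Frame-rate ceiling imposed by cross-GPU memory traffic: if one GPU
# must pull `cross_gb` GB from the other GPU's VRAM every frame over
# a link of `link_gbps` GB/s, the transfer alone caps the frame rate.
def fps_ceiling(link_gbps, cross_gb):
    return link_gbps / cross_gb

# PCIe 3.0 x16 ~16 GB/s total, shared both ways -> ~8 GB/s per direction.
print(fps_ceiling(8.0, 0.5))    # 512 MB/frame over PCIe: 16 fps max
# A hypothetical 64 GB/s on-PCB link, 32 GB/s per direction:
print(fps_ceiling(32.0, 0.5))   # 64 fps max
```

Note this is an upper bound from the transfer alone, before the GPU does any actual rendering; real frame rates would be lower still.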
https://forums.guru3d.com/data/avatars/m/227/227994.jpg
I wonder why this took so long.
https://forums.guru3d.com/data/avatars/m/232/232349.jpg
I get what you are trying to say, but from my knowledge of GPUs and their VRAM (I could still be wrong here): doesn't the RAM control the flow of information (textures, cache and other data) coming from the motherboard's RAM/CPU? Theoretically, with a fast enough CPU and system RAM, you should be able to talk to the graphics cards all day long with no issues, with more of any of these items in the equation. I get that there will always be different speeds between how things communicate, and therefore the potential for a bottleneck. But even as it sits today, our SSDs are still the bottleneck for throughput to the rest of the system, which is why I'll never go with anything less than a RAID 0 config for performance. If things are done right and the system RAM is used properly to hold and send data to and from the CPU/GPU, we end up with no performance issues at all. I would think it would all be similar, albeit using more RAM properly. Just my thoughts on how I perceive the system talking to itself.

I don't think I'll upgrade my system config until something like Nvidia's NVLink or something similar becomes mainstream, so the CPU and GPU can talk directly with nothing interfering, like the bus-speed limits or other hardware limitations we have today.
https://forums.guru3d.com/data/avatars/m/164/164033.jpg
Could also be a problem with certain engines, or even with the game itself, Shadow of Mordor. I had wild framerates at different places within the game: a solid 100 fps in some areas, then I'd look at nothing in particular to the left or right and it shot down to 40 fps... ??? And my memory usage was below 3.5 GB during these times. It happened with settings both high and low.
A possible culprit as well. No way it is the 512 MB section, lol, taking in all the facts about it. They gave wrong specs, nothing else.
https://forums.guru3d.com/data/avatars/m/118/118854.jpg
So when do you all think this will go into full effect? Given when they announced it, probably as early as this summer? Do you think it will go into full effect once the Radeon 3xx series is released? I think that's very possible.
https://forums.guru3d.com/data/avatars/m/224/224796.jpg
My Eyefinity rig hopes this is true. I would love to get full use of my "8 GB", even if it's only in future games.