Windows 10 update enabled DirectX 12

No, you will not. All the performance gains and some of the new features will be seen on current GPUs (Fermi and later Nvidia GPUs, and any AMD GCN GPUs); the game just needs to use DX12. You only need a DX12 GPU for some of the new graphics features that will be introduced. As with DX11 and tessellation, DX12 will have some new features that require new hardware. DX12 will be Win10-only, but Win10 will be a free upgrade for all Win7 and 8.1 users anyway. These are the facts, straight from MS. Ignore anyone who says otherwise.
Last thing I saw from MS was that DX12 would be available to 8.1 users but not for Win 7.
Hey guys, is it possible to install it on a USB 3.0 stick?
Hey guys, is it possible to install it on a USB 3.0 stick?
Yes... even USB 1.0, 2.0 or 2.1. Just google for how 😀 But since you are asking, I would not advise it. It is absolutely _NOT_ bug-free and MS will push quite a few updates. If you are brave, try it, but be aware that anything you have might get lost. I dropped the test because the December build made it unusable for me (and there was no way to stop it from updating to it). Way too many bugs for day-to-day usage. On the plus side, though, it felt a bit snappier and faster than Win7 or 8.1 - nothing you can really put a finger on.
The article is a quite misleading attempt to describe the end effect of Direct3D 12 in layman's terms.

First, the version number of the API/runtime has no relevance to the hardware features of the GPU anymore, because Direct3D 10.1 introduced the concept of feature levels, and Direct3D 11 extended feature levels to D3D9-class hardware. All GPUs produced since 2003 - cards like the GeForce 6000 or Radeon X600, laughable by today's performance standards - are supported by the main Direct3D 11 API and runtime. You can program these older cards in a subset of the Direct3D 11 API and scale the performance from low end to high end without reverting to Direct3D 9.

What is relevant is the GPU's feature level, which encapsulates things like the number of hardware registers, the shader language model and processing stages, etc. Each upper level is a strict superset of any lower level and often includes features which were once optional on lower levels. There are currently 7 such feature levels: 9_1, 9_2 and 9_3 for various 9.0a/b/c cards (10level9), and levels 10_0, 10_1, 11_0 and 11_1 for more modern cards. Again, the API is still the same - Direct3D 11, a superset of Direct3D 10. There is no level 11_2, but confusingly the Direct3D 11.2 API/runtime introduces CAPs (capability bits) again for some important optional features which are not made part of any level. (This is how the marketing folks at Nvidia always get around not really supporting the 10_1 and 11_1 feature levels - they claim full support for the Direct3D 10.1, 11.1 and 11.2 APIs instead, which is actually true for any card as old as 2003, so it's a clever way of saying nothing informative.)

Direct3D 12 is expected to be very similar in this respect - though the 12 API and runtime require level 11_0 hardware, starting with the Radeon HD 7000 (GCN 1.x) and GeForce 400 (Fermi/Kepler/Maxwell). How the proposed new hardware features will be presented is unknown yet - probably as a feature level 11_3 (since there will be a Direct3D 11.3 runtime, more on that later), but it could be another set of optional CAPs.

So all this talk about "you certainly need D3D12-level hardware to fully exploit blah blah blah" is misinformed, and of course marketing people wouldn't mind you buying a new graphics card. The truth is, current Direct3D 11 cards should work very well with Direct3D 12 (including the XBox One, which has a Radeon HD 77x0-level GPU inside). There are certainly no 12_0-level cards on the horizon - no SM6.0 with additional shader stages, no realtime raytracing, nothing like that - and the planned additional hardware features in Direct3D 12 hardly warrant a new feature level 12_0. So today the only reason for an upgrade would be improvements in graphics performance, not some minor new features.

Now, the Direct3D 12 API is quite revolutionary in itself, since Microsoft basically takes the latest 11_0 GPUs and makes them the lowest common denominator, throwing away most of the hardware abstraction layers present since the early days of Direct3D. So now there are no generalized surface formats, no type checking, no automatic resource/memory management in the API, etc. The driver and the runtime are as close to the actual shader ALUs, the scheduler and the onboard memory as is practically possible, with far fewer intermediate layers. This should free a lot of additional CPU time to use on your actual game logic, if you can handle the low-level details.

Which means two things. First, there is a completely new driver model, WDDM 2.0, which seems to only feature a user-mode driver. The kernel-mode component is either eliminated completely, or generalized and moved into the OS kernel. The driver and the API are "immutable" - i.e. basically stripped of any read-modify-write operations, such as texture format conversion, which can stall inter-process synchronization - and that allows true multicore rendering where each of multiple CPU cores can process multiple command streams, unlike Direct3D 11.

Second, the existing Direct3D 10/11 runtime is still there for compatibility purposes and basically becomes a higher-level API for those who do not want or need the low-level access provided by Direct3D 12 and prefer the still-convenient automatic resource management, however non-optimal from a performance point of view. Which means that any new hardware features will need to be implemented at the Direct3D 11 level as well, since many developers will still be working with it for the foreseeable future. Hence Windows 10 will have a Direct3D 11.3 API/runtime, and my expectation is there will either be a level 11_3 or more optional CAPs on top of level 11_1, and not a separate level 12_0, at least initially.

That again makes all this talk about non-existing "DX12 hardware" (i.e. "level 12_0") seem like total bul****. Hope it makes sense.
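P.S. To make the feature-level point concrete, here's a minimal sketch (standard Direct3D 11 headers, nothing exotic) of how an application asks the runtime for the highest feature level the installed GPU supports. The same single D3D11CreateDevice call covers everything from a 2003-era 9_1 card to an 11_1 part:

```cpp
// Minimal feature-level probe. Build with: cl /W4 probe.cpp d3d11.lib
#include <d3d11.h>
#include <cstdio>

int main()
{
    // Ordered highest to lowest; the runtime returns the first level
    // the adapter can satisfy.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,  D3D_FEATURE_LEVEL_9_2,
        D3D_FEATURE_LEVEL_9_1,
    };

    ID3D11Device*        device   = nullptr;
    ID3D11DeviceContext* context  = nullptr;
    D3D_FEATURE_LEVEL    achieved = D3D_FEATURE_LEVEL_9_1;

    // One API, many hardware generations: 'achieved' tells you which
    // subset of Direct3D 11 you may actually use on this card.
    // (On a pre-11.1 runtime, listing 11_1 fails with E_INVALIDARG;
    // a real app would retry without it.)
    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        requested, sizeof(requested) / sizeof(requested[0]),
        D3D11_SDK_VERSION, &device, &achieved, &context);

    if (SUCCEEDED(hr)) {
        std::printf("Feature level: 0x%04x\n", (unsigned)achieved);
        context->Release();
        device->Release();
        return 0;
    }
    return 1;
}
```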
Bring on the beta drivers, AMD/Nvidia. I am downloading now. The testing will commence, ETA 16 ::Macrosoft:: minutes.
Bring on the beta drivers, AMD/Nvidia. I am downloading now. The testing will commence, ETA 16 ::Macrosoft:: minutes.
Windows 8 drivers run fine in 10. As for DX12, there are no apps that support the API yet, so I don't think there is much of a point.
The article is a quite misleading attempt to describe the end effect of Direct3D 12 in layman's terms. ....
Interesting, so AMD GCN cards and up could in theory be just as DX12 capable as a GTX980?
I've in the past been very much against DX12 being Win10-only. I'm heavily in favor of mass adoption of DX12; I would love a PC gaming future where we get the most out of our hardware, just like devs do with consoles. It still feels frustrating that the huge user base of Win7 won't be getting DX12, which will slow devs' use of the API, but the fact that the new OS will be a free upgrade for these people is at least a good middle ground for what I want. I'll take it. Not quite a full 360, more like a 180.
I've in the past been very much against DX12 being Win10-only. I'm heavily in favor of mass adoption of DX12; I would love a PC gaming future where we get the most out of our hardware, just like devs do with consoles. It still feels frustrating that the huge user base of Win7 won't be getting DX12, which will slow devs' use of the API, but the fact that the new OS will be a free upgrade for these people is at least a good middle ground for what I want. I'll take it. Not quite a full 360, more like a 180.
A 360 would have you going in exactly the same direction as you were before. A 180 exactly the opposite. So, maybe you're somewhere in between 90 and 180 degrees. 🙂
In the end does Maxwell have full hardware DX12 support?
In the end does Maxwell have full hardware DX12 support?
Did you read the entire thread from the start, or did you just read the last two posts? Your answer is in the thread.
Interesting, so AMD GCN cards and up could in theory be just as DX12 capable as a GTX980?
Of course they are - it's the same architecture as the XBox One, which is getting Direct3D 12 as well. The language used makes me really doubt that the GTX980 supports any of these new proposed "hardware" features, and they are only minor tweaks on top of level 11_1 anyway:

- conservative rasterization
- volume tiled resources
- rasterizer ordered views
- typed UAV load

See also:

Nvidia’s ace in the hole against AMD: Maxwell is the first GPU with full DirectX 12 support
http://www.extremetech.com/computing/190581-nvidias-ace-in-the-hole-against-amd-maxwell-is-the-first-gpu-with-full-directx-12-support

Most DirectX 12 features won’t require a new graphics card
http://www.extremetech.com/gaming/198204-most-directx-12-features-wont-require-a-new-graphics-card
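If you want to see for yourself once Windows 10 ships, here is a minimal sketch - assuming the Direct3D 11.3 headers (d3d11_3.h) from the Windows 10 SDK, with field names as in those headers, and a 'device' created as in the earlier sketch - of how these four items surface as optional caps and tiers, not as a new feature level:

```cpp
#include <d3d11_3.h>  // Direct3D 11.3 headers (Windows 10 SDK); pulls in d3d11.h

// Queries the D3D11.3 optional caps for the four features listed above.
// Returns true only if the driver exposes all of them.
bool SupportsNewD3D113Features(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D11_OPTIONS2 opts = {};
    if (FAILED(device->CheckFeatureSupport(
            D3D11_FEATURE_D3D11_OPTIONS2, &opts, sizeof(opts))))
        return false;  // pre-11.3 runtime: the caps simply aren't there

    // Each feature maps to a cap bit or a tier, independent of the
    // feature level the device was created with.
    bool conservativeRaster = opts.ConservativeRasterizationTier
                              >= D3D11_CONSERVATIVE_RASTERIZATION_TIER_1;
    bool volumeTiled  = opts.TiledResourcesTier
                        >= D3D11_TILED_RESOURCES_TIER_3;  // 3D tiled resources
    bool rovs         = opts.ROVsSupported != FALSE;
    bool typedUavLoad = opts.TypedUAVLoadAdditionalFormats != FALSE;

    return conservativeRaster && volumeTiled && rovs && typedUavLoad;
}
```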
Of course they are - it's the same architecture as the XBox One, which is getting Direct3D 12 as well. The language used makes me really doubt that the GTX980 supports any of these new proposed "hardware" features, and they are only minor tweaks on top of level 11_1 anyway:

- conservative rasterization
- volume tiled resources
- rasterizer ordered views
- typed UAV load
Maxwell (9xx) series supports those features on a hardware level through DX11.3: http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/4

Some of the features are supposed to be available on a software level, like you suggested - tiled resources, for example, was demonstrated on a 770. I'm curious to know whether all of 12's features will be available in 11.3 (except the low overhead), or whether they plan on adding more hardware-oriented feature levels beyond that (and whether those can still be supported on DX11 cards). Only time will tell.
Bravo DmitryKo, I never knew all of that! =] That's very interesting to know about DX12 - even funnier that it all just seems like a marketing ploy from Nvidia and AMD now to push new DX-level cards.
Maxwell (9xx) series supports those features on a hardware level through DX11.3
Other reviews, like the ExtremeTech bit above, do not really make such bold statements, so I'd leave some room for errors of judgement.
Some of the features are supposed to be available on a software level, like you suggested - tiled resources, for example, was demonstrated on a 770
This is not possible - it requires virtual memory tables and TLBs in each texturing unit. If hardware support is not there, it can't be implemented in the WDDM driver.
I'm curious to know whether all of 12's features will be available in 11.3 (except the low overhead), or whether they plan on adding more hardware-oriented feature levels (and whether those can still be supported on DX11 cards)
Since Direct3D 11 and Direct3D 12 will go in parallel for some time, I'd actually expect any new hardware features to remain available in Direct3D 11. Microsoft can introduce feature level 12_0 in D3D11 or level 11_3 in D3D12 - the numbers don't really matter as current 11_x cards are fine with either version of the API.
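From the Direct3D 12 side, the same idea would look roughly like this - a sketch assuming the d3d12.h header as it later shipped in the Windows 10 SDK - where you create the device against a minimum feature level, and any 11_0 card qualifies:

```cpp
#include <d3d12.h>
#pragma comment(lib, "d3d12.lib")

// 'adapter' may be null to use the default adapter. Any feature level
// 11_0 card - GCN, or Fermi and later - is accepted; the API version
// (12) says nothing about the hardware generation.
bool CreateMinimalD3D12Device(IUnknown* adapter, ID3D12Device** outDevice)
{
    return SUCCEEDED(D3D12CreateDevice(
        adapter, D3D_FEATURE_LEVEL_11_0,
        __uuidof(ID3D12Device),
        reinterpret_cast<void**>(outDevice)));
}
```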
even funnier that it all just seems like a marketing ploy from Nvidia and AMD
It's not about a marketing ploy from the GPU makers; it's about how the media never gets the little details right.
Can't pretty much any feature be done through software, although at a great performance cost?
I've in the past been very much against DX12 being Win10-only. I'm heavily in favor of mass adoption of DX12; I would love a PC gaming future where we get the most out of our hardware, just like devs do with consoles. It still feels frustrating that the huge user base of Win7 won't be getting DX12, which will slow devs' use of the API, but the fact that the new OS will be a free upgrade for these people is at least a good middle ground for what I want. I'll take it. Not quite a full 360, more like a 180.
I also hope the Xbox One will drive DX12 adoption, although I guess the same could have been said about DX11.2 (since the Xbox One supposedly had a specialized DX11.2)
Can't pretty much any feature be done through software, although at a great performance cost?
Nope. At the lowest level, you have a few thousand SIMD (ALU) units with their own instruction set and register file, and a few dozen memory lookup units (TMUs). The user-mode driver compiles shader bytecode to the machine code used by that particular architecture. If a particular ALU instruction doesn't support a required combination of registers, or a required number of operands, or a needed data format which is critical for some shader-language feature, there is nothing you can do about it. The same goes for virtual memory - you have to have virtual page tables and memory caches in the TMU. Theoretically you can do data conversion on the fly and emulate missing instructions with inline macros, but it would be a dozen times slower. Think about emulating AVX instructions on older x86 CPUs, or the numerous x87 emulators of the i386 era. It's far more effective to let the application choose an optimal code path based on the actual capabilities of the hardware.
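To put the CPU analogy in code (purely illustrative, not from the driver): the same 8-wide float addition as one AVX instruction, versus the scalar loop an emulator has to fall back to:

```cpp
#include <immintrin.h>

// Native: one hardware instruction does eight float adds at once.
void add8_avx(const float* a, const float* b, float* out)
{
    __m256 va = _mm256_loadu_ps(a);
    __m256 vb = _mm256_loadu_ps(b);
    _mm256_storeu_ps(out, _mm256_add_ps(va, vb));
}

// "Emulated": the same semantics on hardware without the instruction -
// eight separate loads, adds and stores, one element at a time.
void add8_emulated(const float* a, const float* b, float* out)
{
    for (int i = 0; i < 8; ++i)
        out[i] = a[i] + b[i];
}
```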
So, basically, the performance hit would be similar to using x87 for PhysX vs SSE4?
So, basically, the performance hit would be similar to using x87 for PhysX vs SSE4?
I think it would be worse. The difference is, applications and APIs are coded in high-level languages and the programmer knows everything about the algorithm and the desired end result, so a lot of optimisations can be made either manually or by the compiler. At the driver level, you have no choice but to emulate the missing bytecode instruction exactly the way it was specified, using the assembly code of the machine architecture - which means, for example, that a simple swizzle operation takes a single register store on architectures that implement it natively, but requires several register load/store operations to emulate, consuming considerably more instruction slots and registers. It's even worse, or outright impossible, for more complex instructions or features that require dedicated hardware or lots of memory accesses, like virtual page tables, texture filtering, etc. The other approach is to detect and replace affected shader code with an optimized version - which both AMD and Nvidia were doing in the early years. But that's basically cheating, and they don't have the resources to analyze and rewrite every shader on the planet that requires a certain missing feature anyway.
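Same thing for the swizzle example, again as an illustrative CPU-side sketch: one native shuffle instruction versus the per-component loads and stores an emulation expands into:

```cpp
#include <xmmintrin.h>

// Native: v.wzyx as a single SSE shuffle instruction.
__m128 swizzle_native(__m128 v)
{
    return _mm_shuffle_ps(v, v, _MM_SHUFFLE(0, 1, 2, 3));  // reverse lanes
}

// Emulated: spill to memory, move each component, reload - several
// loads/stores and extra registers where the native path needs one op.
__m128 swizzle_emulated(__m128 v)
{
    float tmp[4], out[4];
    _mm_storeu_ps(tmp, v);
    out[0] = tmp[3];
    out[1] = tmp[2];
    out[2] = tmp[1];
    out[3] = tmp[0];
    return _mm_loadu_ps(out);
}
```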
Hey guys, is it possible to install it on a USB 3.0 stick?
What exactly? If you're talking about Windows, no way - you'd be crippled simply by the low I/O. I've tried it on a USB 3.0 stick with Linux, and it was still far from enjoyable. And that stick reached 200 Mbps 😉