Download: Quake II RTX Download version 1.2

https://forums.guru3d.com/data/avatars/m/242/242573.jpg
wavetrex:

How's that high horse feeling? Smug enough? What about the rest of us who can't afford a $1200 videocard?
You do what everyone else does when they don't have something... deal with it. And/or go buy an RTX 2080 Super for $649.
https://forums.guru3d.com/data/avatars/m/274/274006.jpg
erho:

You're mistaken. Q2RTX does utilize RT cores when run on RTX cards. It can run on all Pascal and Turing 16-series cards as well, but it will then use shader cores instead of RT cores for the ray tracing tasks, which is very inefficient, resulting in much lower framerates. So no, not like Neon Noir.
Thanks. Do you know if there is a utility available that can graph RT core utilization?
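To make the RT-core vs. shader-core distinction above concrete: Quake II RTX (as of 1.2) does its path tracing through the Vulkan VK_NV_ray_tracing extension, and recent NVIDIA drivers expose that same extension on Pascal and GTX 16-series cards while executing the work on the regular shader cores. The following is a minimal standalone sketch, not code from Q2RTX itself, that lists Vulkan devices and reports whether each one exposes the extension; note that the extension being present does not by itself tell you whether dedicated RT cores are backing it.

/* Minimal standalone sketch (not Q2RTX code): list Vulkan devices and
 * report whether each exposes VK_NV_ray_tracing, the extension Quake II
 * RTX 1.2 uses for its path tracer. Presence of the extension does not
 * prove dedicated RT cores: on Pascal / GTX 16-series the driver exposes
 * it too and falls back to the shader cores.
 * Build (assuming the Vulkan SDK is installed; the file name is arbitrary):
 *   cc rtquery.c -o rtquery -lvulkan
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <vulkan/vulkan.h>

/* Returns true if the given physical device advertises the named extension. */
static bool device_has_extension(VkPhysicalDevice dev, const char *name)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(dev, NULL, &count, NULL);

    VkExtensionProperties *props = malloc(count * sizeof(*props));
    vkEnumerateDeviceExtensionProperties(dev, NULL, &count, props);

    bool found = false;
    for (uint32_t i = 0; i < count; i++) {
        if (strcmp(props[i].extensionName, name) == 0) {
            found = true;
            break;
        }
    }
    free(props);
    return found;
}

int main(void)
{
    /* Create a bare Vulkan instance; no window or surface is needed
     * just to enumerate devices and their extensions. */
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .apiVersion = VK_API_VERSION_1_1,
    };
    VkInstanceCreateInfo ici = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };
    VkInstance inst;
    if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) {
        fprintf(stderr, "no Vulkan instance available\n");
        return 1;
    }

    uint32_t n = 0;
    vkEnumeratePhysicalDevices(inst, &n, NULL);
    VkPhysicalDevice *gpus = malloc(n * sizeof(*gpus));
    vkEnumeratePhysicalDevices(inst, &n, gpus);

    for (uint32_t i = 0; i < n; i++) {
        VkPhysicalDeviceProperties p;
        vkGetPhysicalDeviceProperties(gpus[i], &p);
        printf("%s: VK_NV_ray_tracing %s\n", p.deviceName,
               device_has_extension(gpus[i], VK_NV_RAY_TRACING_EXTENSION_NAME)
                   ? "exposed" : "not exposed");
    }

    free(gpus);
    vkDestroyInstance(inst, NULL);
    return 0;
}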
https://forums.guru3d.com/data/avatars/m/142/142454.jpg
wavetrex:

Yes, but performance is abysmal. Can't remember when they added it in the drivers, but it wasn't very long ago. It uses the shader cores to do the RT calculations, which obviously isn't anywhere near as efficient as dedicated hardware... but it can be done. See this article for example: https://www.guru3d.com/news-story/crytek-releases-neon-noir-ray-tracing-benchmark.html
Ray tracing is not limited to RTX cards... they simply do it faster, that is all (but still not fast enough... yet).
OK, thanks for the info. With all details on max, it runs at 50-60 fps on my 2080 @ 2560x1080, so yeah, I can imagine it's not too great on a non-RTX card! Can't remember if my 3DFX card used to give me better than 60 fps or not...
https://forums.guru3d.com/data/avatars/m/196/196426.jpg
erho:

That's a bit of an exaggeration. The game is quite playable on a 2080 Ti @ 1440p at 100% resolution. Also playable on 2070s and 2080s with resolution scaling, or just at 1080p.
And that is exactly my issue with RTX, which is why I skipped this first RTX gen. You need an extremely expensive card to run ray-tracing enabled titles at a decent framerate at the monitor's native resolution (QHD or higher being common these days). I haven't bought a 3440x1440 monitor only to play at 1080p with black bars... And I'm not going to pay as much for just a videocard as my entire PC cost (including the videocard!) only to admire some reflections. "OOOOH shiny! That would be $1200+ please." Hell no! When I say the 3000 series will be needed for a good experience, I mean (almost) EVERYONE could get a good experience: the large majority of gamers buy mid-range cards (like xx60 to xx70), which are already too expensive for a lot of people. Not just the 1% who can afford that outrageously expensive 2080 Ti.
data/avatar/default/avatar17.webp
Well, in this case, the next generation of cards is not yet what you want either. Neither the 60 nor the 70 series of the next RTX generation will play 1440p titles with full ray tracing at a decent fps at a decent price (sub-$500). You can bet on it.
https://forums.guru3d.com/data/avatars/m/268/268248.jpg
Whether Nvidia's tensor cores will become standard and widely accepted remains to be seen... but one thing is sure: for the 3000 series the tensor core counts will go up. Right now they are bottlenecking the shader cores, which is evident when you run a game with RTX on: the cards consume less power than when they run with RTX off... if anything, that tells me there are CUDA cores lying back and having mojitos while they wait for the tensor cores to catch up.
https://forums.guru3d.com/data/avatars/m/196/196426.jpg
What many people fail to realize is that when going from "X" nm to "X/2" nm lithography, the number of transistors that can be put on a certain chip size is not double... it's QUADRUPLE!

Current nVidia GPUs are made on 14nm (I know it's called 12nm, but that's just marketing; it's still 14nm-class), and the next gen will move to 7nm EUV (which means it's real 7nm, not the fake 7nm which AMD currently uses!). (14 / 7)^2 = +300% potential increase in transistor count on the same chip size! That is INSANE. On the new process they could easily quintuple the number of RT/Tensor cores, or more! Even if no increase in clock speed happens, the simple math of 4 times more transistors could result in a gigantic leap in performance!

But obviously, it won't be THAT massive a leap, because NV will choose to reduce the chip size to produce them cheaper, much cheaper, and only increase the performance... well, by at most 100%. So half the surface area, double the performance. That would still be AMAZING: an RTX 3070 could be way faster than a 2080 Ti, while costing much less to make...

---

Yeah, AMD will use the same 7nm EUV in their next GPUs (RDNA 2), so big potential performance increases there too.
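To spell out the arithmetic behind the post above (an idealized figure: it assumes logic, SRAM and I/O all scale equally, which in practice they do not, and it ignores power density and yield):

\[
\left(\frac{14\ \mathrm{nm}}{7\ \mathrm{nm}}\right)^{2} = 4
\quad\Longrightarrow\quad
4\times \text{ the transistors in the same die area, i.e. a } +300\%\ \text{increase,}
\]

\[
\tfrac{1}{2}\ \text{die area} \times 4\times \text{ density} \;=\; 2\times \text{ the old transistor budget,}
\]

which is the "half the surface area, double the performance" scenario described above.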
data/avatar/default/avatar06.webp
When I saw an article pop up about remastering a classic game for free I was curious how this community would somehow turn that into a negative since it has the word "nvidia" in it. You guys didn't disappoint....
https://forums.guru3d.com/data/avatars/m/196/196426.jpg
NaturalViolence:

When I saw an article pop up about remastering a classic game for free
It's not free; you need to own the actual game (on Steam, for example). Yes, it's very cheap, but not free. And it's not really "remastered": it still has the same level design, same textures (for the most part), same characters and AI. Just the graphics engine has been redone (for the Nth time...).
https://forums.guru3d.com/data/avatars/m/268/268248.jpg
@wavetrex Actually, 16/14nm is 20nm FinFET. There was a traditional 20nm node, but it was not worth it at all; for marketing reasons, the 20nm FinFET was renamed to 16/14 because it was so much better. The current 7nm is more in line with the density Intel has on 10nm. About the 3xxx series, I very much doubt they will go for such huge chips on such a new process; the reason Nvidia was able to make those huge chips is the maturity of the node. Honestly, I think you are way too optimistic with your expectations, but until the reviews we have no clue; we don't even have a fake leak yet. We will see in a few months!
https://forums.guru3d.com/data/avatars/m/196/196426.jpg
It can't be worse, that's for sure. Even if it's just 30% faster than the 2000 "Super" series (at the same price points), it would still be a decent overall upgrade... since 2000 over 1000 wasn't much of an upgrade at all when it comes to "Performance per Dollar". If it's not, nvidia can go f*** themselves, I'm switching to AMD; I've had enough of getting ripped off by them... 480 (jesus!), 680 (quite good), 980 (decent, good power efficiency), 1080 (fast but expensive)... skipped 2080 (WAY too expensive).
https://forums.guru3d.com/data/avatars/m/263/263205.jpg
wavetrex:

And that is exactly my issue with RTX, which is why I skipped this first RTX gen. You need an extremely expensive card to run ray-tracing enabled titles at a decent framerate at the monitor's native resolution (QHD or higher being common these days). I haven't bought a 3440x1440 monitor only to play at 1080p with black bars... And I'm not going to pay as much for just a videocard as my entire PC cost (including the videocard!) only to admire some reflections. "OOOOH shiny! That would be $1200+ please." Hell no! When I say the 3000 series will be needed for a good experience, I mean (almost) EVERYONE could get a good experience: the large majority of gamers buy mid-range cards (like xx60 to xx70), which are already too expensive for a lot of people. Not just the 1% who can afford that outrageously expensive 2080 Ti.
Forget ray tracing. 3440x1440 is a pretty demanding resolution, even without ray tracing. You stepped firmly into enthusiast territory and left the mainstream at that point. You can't expect a nearly 4 year old, non-halo card to keep up without some serious compromises in settings or performance. It's pay to play when you pass a certain point. Good thing is, there are tons of options for everyone to enjoy the hobby.
https://forums.guru3d.com/data/avatars/m/208/208308.jpg
Just played a few levels on a GTX 1080. It definitely works, but needs resolution scaling option set to 25% minimum to sustain decent fps, meaning everything is very blurry... But well, it works. Guess it will need next gen RTX 3000 series for a good experience.
Well, GTX 1080s were not meant to run ray tracing to begin with. And on an actual RTX card the game runs very decently: my RTX 2080 runs it at 50-60 fps @1440p and 35-40 fps @2160p, all maxed. Don't try to justify the fact that you don't want to buy a new card that is actually better than yours with made-up excuses; you'll just end up sounding cheap and envious.
https://forums.guru3d.com/data/avatars/m/142/142454.jpg
wavetrex:

What many people fail to realize is that when going from "X" nm to "X/2" nm lithography, the number of transistors that can be put on a certain chip size is not double... it's QUADRUPLE!
I think CPUs are built in 3D so actually it's theoretically 8x. In reality, you don't get anywhere near that though.
https://forums.guru3d.com/data/avatars/m/246/246564.jpg
wavetrex:

If it's not, nvidia can go f*** themselves, I'm switching to AMD; I've had enough of getting ripped off by them... 480 (jesus!), 680 (quite good), 980 (decent, good power efficiency), 1080 (fast but expensive)... skipped 2080 (WAY too expensive).
I'm in the same boat. The 2XXX was the first series I skipped since the GTX280. Maybe I'm getting old, but I just couldn't justify this upgrade, even though I had the money. As for Quake II, I spent an entire year of university playing multiplayer instead of studying. I'm never touching that game again. RTX or no.
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
mackintosh:

I'm in the same boat. The 2XXX was the first series I skipped since the GTX280. Maybe I'm getting old, but I just couldn't justify this upgrade, even though I had the money.
Then frankly you upgrade too much and need to take this opportunity to learn the value of a dollar. 😱
data/avatar/default/avatar09.webp
wavetrex:

Just played a few levels on a GTX 1080. It definitely works, but needs resolution scaling option set to 25% minimum to sustain decent fps, meaning everything is very blurry... But well, it works. Guess it will need next gen RTX 3000 series for a good experience.
How? The 1080 lacks the Tensor cores required.
https://forums.guru3d.com/data/avatars/m/218/218363.jpg
@ManuelG Thanks a lot! The combination of the new drivers and the new Quake 2 update fixed all my stuttering problems. Previously I had to set an FPS limit in MSI Afterburner for it to be smooth, but if the framerate dropped, a lot of stuttering was introduced. I tried all kinds of combinations with/without V-sync in-game, in-driver, G-sync fullscreen or windowed and so on, and the only thing that worked was to set that FPS limit in MSI Afterburner. Setting it in-game didn't work either. Also, the game looks better than ever now.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
angelgraves13:

I'd expect nothing less than 50% improvement in RT performance and 40% improvement in rasterized performance. Remember that Turing didn't change nodes, so the fact they could squeeze any performance out was amazing, but that was mostly due to them doubling cache sizes and simultaneous INT and FLOAT.
Turing is 60% larger than Pascal (775mm2 vs 481mm2). In terms of potential performance, there really isn't that much difference between a die shrink and nvidia simply expanding the total die size. The limiting factor with 7nm EUV is going to be power.
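For reference, the arithmetic behind the "60% larger" figure, using the die sizes quoted in the post:

\[
\frac{775\ \mathrm{mm}^2}{481\ \mathrm{mm}^2} \approx 1.61
\quad\Longrightarrow\quad
\text{the Turing die is roughly } 60\%\ \text{larger than the Pascal die.}
\]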
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
angelgraves13:

Remember that Turing didn't change nodes
Yes it did. Pascal's high end is 16nm; the 1030 is the only part on 14nm. Turing is 12nm.