NVIDIA’s Morgan McGuire: “First triple-A game to require a ray tracing GPU will be released in 2023”

Astyanax:

They aren't going to cut parts out of their top-end products just for dinosaurs who want to stay on the old stuff. They didn't do it with tessellation or SM3.0.
a 1680ti would be cool though.
Denial:

? It definitely does. I don't know if it's a good example compared to DXR - I think DXR is a more complicated feature to measure the "impact" of, but both require additional hardware.
Right you are. Regardless, it didn't significantly add to the chip complexity and cost, unlike DXR.
airbud7:

a 1680ti would be cool though.
I'd buy a GTX 2080 Ti at $800 in a heartbeat. But the $1400 being charged for the RTX 2080 Ti, due to useless features I will never use...
Dragam1337:

Tessellation didn't require additional hardware. SM3.0 helped improve performance. DXR requires additional hardware and significantly hampers performance. People want better performance, not worse, hence why barely anyone uses DXR even if they have an RTX GPU, and hence why GTX cards without RT cores etc. would sell better.
Then you would have performance on par with the 1070 Ti, 1080, and 1080 Ti, which would make no sense and would instead just flood the market more.
Dragam1337:

Right you are. Regardless, it didn't significantly add to the chip complexity and cost, unlike DXR.
I don't think we know how much complexity DXR actually requires. RTX cards have Tensor cores, which aren't required for RT. Strip the Tensor cores off and how much bigger is an RT-enabled GTX chip? Is it possible to make that core smaller and more efficient, or to implement DXR in a better way? For example, AMD's implementation is looking like part of the fetch in its variant of the RT core is going to be done in the texture units - which should make the hardware footprint of its "RT core" smaller. It's too early to tell.
vbetts:

Then you would have performance on par with the 1070 Ti, 1080, and 1080 Ti, which would make no sense and would instead just flood the market more.
I mean you'd get the performance of a 2080 Ti/2080 etc. - I think people falsely assume you'd get more performance, but you won't, because you are TDP-limited. You would theoretically get cheaper prices though - if Nvidia passed the savings on to consumers.
vbetts:

Then you would have performance on par with the 1070 Ti, 1080, and 1080 Ti, which would make no sense and would instead just flood the market more.
No? I would prefer 2080 ti performance (preferably much higher performance) at more reasonable prices, which would be possible without RT cores etc.
Denial:

I don't think we know how much complexity DXR actually requires. RTX cards have Tensor cores, which aren't required for RT. Strip the Tensor cores off and how much bigger is an RT-enabled GTX chip? Is it possible to make that core smaller and more efficient, or to implement DXR in a better way? For example, AMD's implementation is looking like part of the fetch in its variant of the RT core is going to be done in the texture units - which should make the hardware footprint of its "RT core" smaller. It's too early to tell.
Fair enough - I will personally just never use DXR if I can help it, and would much rather see the die space go to additional rasterization performance and/or cheaper prices.
Dragam1337:

No? I would prefer 2080 ti performance (preferably much higher performance) at more reasonable prices, which would be possible without RT cores etc.
Then people would buy this and not their shiny 2080 (or 2080S now, I guess), again flooding the market with more products that just aren't needed. They've already done so with the funky Turing GTX line.
Dragam1337:

Tessellation didn't require additional hardware. SM3.0 helped improve performance.
Tessellation required an entire additional setup pipeline, because Nvidia implemented their PolyMorph Engine with a fully parallel input. Oh, Denial beat me to it.
vbetts:

Then you would have performance on par with the 1070 Ti, 1080, and 1080 Ti, which would make no sense and would instead just flood the market more.
Yea, but a GTX 1660 Ti is bumping heads with the 1070 Ti with much less power draw...
vbetts:

Then people would buy this and not their shiny 2080 (or 2080S now, I guess), again flooding the market with more products that just aren't needed. They've already done so with the funky Turing GTX line.
Imo they should only have made one ray tracing GPU - the highest possible tier. DXR performance is so bad that it doesn't make sense on anything other than a 2080 Ti. But yeah, their lineup is way too cluttered, likely due to failing sales of the RTX lineup.
airbud7:

Sorry if OT, but would a GTX 2080 Ti with "RT disabled" use less power than an RTX 2080 Ti?
airbud7:

Sorry if OT, but would a GTX 2080 Ti with "RT disabled" use less power than an RTX 2080 Ti?
Nope - they are power-gated when not in use.
airbud7:

Yea, but a GTX 1660 Ti is bumping heads with the 1070 Ti with much less power draw...
People that buy into mid-range cards like the 1660 series, though, most likely aren't worried about power draw.
Denial:

I don't think we know how much complexity DXR actually requires. RTX cards have Tensor cores, which aren't required for RT. Strip the Tensor cores off and how much bigger is an RT-enabled GTX chip? Is it possible to make that core smaller and more efficient, or to implement DXR in a better way? For example, AMD's implementation is looking like part of the fetch in its variant of the RT core is going to be done in the texture units - which should make the hardware footprint of its "RT core" smaller. It's too early to tell.
It's not like Nvidia would especially want to have the Tensor cores in the RTX cards. I believe it's just that the GPUs can't meet the expected FPS goals using regular AA while RT is enabled. It's a duct-tape-and-bubble-gum fix to the problem. I wouldn't be terribly surprised if the next gen of Nvidia gaming cards lacks the Tensor cores and only sports RT cores, dealing with the AA the old-fashioned way. Nvidia has most of the time been pretty good at getting more muscle out of their GPUs.
loser7:

It would be required because it's much easier for devs to set up the lighting for a game with ray tracing, and when hardware is good enough that they can use it across the board, they will stop using older methods for lighting effects altogether to make things easier.
2023 seems very, very optimistic though, unless Nvidia funds the game. Why would a developer purposely cut himself off from half the market without Nvidia paying them to do so? The next-gen PS and Xbox won't be capable of ray tracing. Maybe a PC-only game, but even there, in 2023 there will still be a lot of people not owning a ray-tracing-capable GPU. The RTX 2060 won't be strong enough for proper full-blown ray tracing in 2023, and the 2070 will be more than borderline by that time.
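To make loser7's point above concrete: in a ray-traced pipeline, effects like hard shadows come from the same basic operation used for visibility (trace a ray and see what it hits), whereas a raster pipeline needs a dedicated technique per effect - shadow maps, screen-space reflections, baked probes - each with its own setup and tuning. The toy CPU sketch below is purely illustrative; it is not DXR or any engine's code, the scene and helper names (hitSphere and friends) are made up for this example, and it shades points on a ground plane directly rather than tracing camera rays, just to keep it short.

// Toy CPU "shadow ray" demo: one occlusion ray per shaded point is all a hard
// shadow needs - no shadow-map pass, resolution/bias tuning, or baked data.
#include <cmath>
#include <cstdio>

struct Vec { double x, y, z; };
static Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec norm(Vec v) { double l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

// One sphere hovering over the ground plane y = 0, lit by a single point light.
static const Vec sphereC = {0.0, 1.0, 3.0};
static const double sphereR = 1.0;
static const Vec light = {3.0, 4.0, 1.0};

// Distance along a (normalized) ray to the sphere, or -1 if it misses.
static double hitSphere(Vec origin, Vec dir) {
    Vec oc = sub(origin, sphereC);
    double b = dot(oc, dir), c = dot(oc, oc) - sphereR * sphereR;
    double disc = b * b - c;
    if (disc < 0) return -1;
    double t = -b - std::sqrt(disc);
    return t > 1e-4 ? t : -1;
}

int main() {
    // Shade a coarse grid of points on the ground plane: cast one shadow ray
    // per point toward the light; if the sphere blocks it, the point is dark.
    for (int row = 0; row < 20; ++row) {
        for (int col = 0; col < 40; ++col) {
            Vec p = {(col - 20) * 0.2, 0.0, row * 0.3 + 1.0};
            Vec toLight = norm(sub(light, p));
            double distToLight = std::sqrt(dot(sub(light, p), sub(light, p)));
            double t = hitSphere(p, toLight);
            bool shadowed = (t > 0 && t < distToLight);
            double diffuse = shadowed ? 0.0 : toLight.y;  // ground normal is +Y
            std::putchar(diffuse > 0.6 ? '#' : diffuse > 0.0 ? '+' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}

Adding soft shadows, reflections, or bounce lighting in this model is just more rays aimed in different directions, which is roughly why developers expect content setup to get simpler once they can rely on ray tracing hardware everywhere.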
vbetts:

People that buy into mid-range cards like the 1660 series, though, most likely aren't worried about power draw.
yea^...funny yet cool how technology advances so fast....really good times for every computer geek.....so many choices I tell ya!
MonstroMart:

2023 seems very, very optimistic though, unless Nvidia funds the game. Why would a developer purposely cut himself off from half the market without Nvidia paying them to do so? The next-gen PS and Xbox won't be capable of ray tracing. Maybe a PC-only game, but even there, in 2023 there will still be a lot of people not owning a ray-tracing-capable GPU. The RTX 2060 won't be strong enough for proper full-blown ray tracing in 2023, and the 2070 will be more than borderline by that time.
The next-gen PS and Xbox are both meant to have ray tracing. If they develop the game for those, it would actually be more work to put in non-ray-traced lighting render paths for old PCs. Seeing as AMD and Nvidia will have been offering ray tracing GPUs for PCs for years by then, they might decide it's just not worth it.
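A conceptual sketch of the maintenance argument above (hypothetical function names, not any engine's real API): once a title is built around ray-traced lighting, keeping pre-RT PCs supported means carrying a second, parallel lighting path in which every effect is its own technique.

#include <cstdio>

struct Scene {};  // stand-in for geometry, lights and materials

// On RT-capable hardware, shadows, reflections and GI all reduce to the same
// operation: tracing rays against one scene representation.
static void renderRayTraced(const Scene&) {
    std::puts("primary rays -> shadow/reflection/GI rays -> shade");
}

// The legacy fallback is a whole second code path: one technique per effect,
// each with its own passes, assets and tuning that must be kept working.
static void renderRasterFallback(const Scene&) {
    std::puts("shadow-map passes (per light, cascades, bias tuning)");
    std::puts("screen-space reflections + cubemap fallback for misses");
    std::puts("baked probes/lightmaps for GI (offline bake step)");
    std::puts("composite everything in the main raster pass");
}

int main() {
    Scene scene;
    bool hasRayTracingGpu = false;  // imagine a device capability query here
    if (hasRayTracingGpu) renderRayTraced(scene);
    else renderRasterFallback(scene);
    return 0;
}

Whether studios keep paying for that second path, or simply require RT hardware as the article's 2023 prediction assumes, is the trade-off being argued over here.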
Dribble:

The next-gen PS and Xbox are both meant to have ray tracing. If they develop the game for those, it would actually be more work to put in non-ray-traced lighting render paths for old PCs. Seeing as AMD and Nvidia will have been offering ray tracing GPUs for PCs for years by then, they might decide it's just not worth it.
I find it kinda odd that the next-gen consoles would focus on RT rather than high refresh rates.
airbud7:

a 1680ti would be cool though.
Yet another revision of the 1080Ti? o_O
buhehe:

I find it kinda odd that the next-gen consoles would focus on RT rather than high refresh rates.
Why are these two things mutually exclusive? You can have high refresh rates and RT hardware.