Upcoming NVIDIA GeForce RTX 4060 Ti: PCI-Express 4.0 x8 Interface & Enhanced Specifications

"awaited GeForce RTX 4060 Ti will be equipped with a PCI-Express 4.0 x8 interface", and is that a good thing? Serious question.
edilsonj:

"awaited GeForce RTX 4060 Ti will be equipped with a PCI-Express 4.0 x8 interface", and is that a good thing? Serious question.
I wouldn't say good, but it's fine - should keep up with DRAM bandwidth, which is all that really matters.
Just asking: if I plan to use this GPU on a PCIe 3.0 motherboard, won't this be a problem?
reix2x:

Just asking: if I plan to use this GPU on a PCIe 3.0 motherboard, won't this be a problem?
So long as you don't hugely overflow your VRAM you should be fine.
schmidtbag:

So long as you don't hugely overflow your VRAM you should be fine.
It would not be hard to reach the 8GB limit in quite a few games already. I don't think it can be good even on Gen 4.
schmidtbag:

I wouldn't say good, but it's fine - should keep up with DRAM bandwidth, which is all that really matters.
Are you even defending this at all? This is the 6500 XT but WORSE!
Agonist:

Are you even defending this at all? This is the 6500 XT but WORSE!
....how? The 6500 XT had x4 PCI-Express lanes, which worked fine on PCI-Express 4.0 but not great on 3.0. This is double the lanes of the 6500 XT, and even on PCIe 3.0 it has the same bandwidth as PCIe 4.0 x4. This shouldn't be a problem unless you try to run it on PCIe 2.0; not everything needs a fully populated slot just to increase complexity and cost. Simple as that.
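For reference, here is the arithmetic behind that claim as a minimal Python sketch. The per-lane rates are the usual post-encoding approximations for each PCIe generation, not figures from the article:

    # Approximate usable PCIe bandwidth per lane, in GB/s, after
    # encoding overhead (8b/10b for Gen 2, 128b/130b for Gen 3/4).
    PCIE_GBPS_PER_LANE = {2.0: 0.500, 3.0: 0.985, 4.0: 1.969}

    def pcie_bandwidth(gen, lanes):
        """One-directional link bandwidth in GB/s."""
        return PCIE_GBPS_PER_LANE[gen] * lanes

    print(pcie_bandwidth(4.0, 8))  # ~15.8 GB/s: this card in a Gen 4 slot
    print(pcie_bandwidth(3.0, 8))  # ~7.9 GB/s: same card in a Gen 3 slot
    print(pcie_bandwidth(4.0, 4))  # ~7.9 GB/s: Gen 4 x4, i.e. the 6500 XT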
If recent info on sales numbers for Nvidia GPUs is accurate (what I'm seeing on YouTube about the subject, I mean), then Nvidia is having trouble moving 4080s and 4070s. With the 4060 having fewer cores (shaders, CUDA, whatever they're called) than the 3060, and also using PCIe x8, this may affect people the same way 8GB GPUs are right now. Meaning it WILL affect you, you just don't know when and where. Personally, I am not buying any GPU till there is a decent price point. I bought a 1080 with a full waterblock for 600 (with customs and sales tax). A 6950 XT "would" be about the right upgrade, but I don't like what a power hog it is, and the 4070 has too little VRAM. So I'm gonna wait and see.
Undying:

It would not be hard to reach the 8GB limit in quite a few games already. I don't think it can be good even on Gen 4.
This is a 1440p card at best. No need to max out textures, which is where most of your VRAM is going. Turn down textures a notch and it should have sufficient VRAM with no loss in visual fidelity, except maybe if you like staring point-blank at walls. It will still very likely get 100% filled in poorly optimized games, but it won't be non-stop feeding from DRAM like it would with 4K textures.
Agonist:

Are you even defending this at all? This is the 6500 XT but WORSE!
As others have explained, I don't really get how this is worse. If the 6500 XT had x8 lanes (which the 4060 Ti does), it would have been in much better shape. Still a little too short on VRAM for its capabilities, but it wouldn't have been a complete failure of a product. The 4060 Ti really ought to have 12GB, but the non-Ti I think is fine with 8GB.
schmidtbag:

This is a 1440p card at best. No need to max out textures, which is where most of your VRAM is going. Turn down textures a notch and it should have sufficient VRAM with no loss in visual fidelity, except maybe if you like staring point-blank at walls. It will still very likely get 100% filled in poorly optimized games, but it won't be non-stop feeding from DRAM like it would with 4K textures. As others have explained, I don't really get how this is worse. If the 6500 XT had x8 lanes (which the 4060 Ti does), it would have been in much better shape. Still a little too short on VRAM for its capabilities, but it wouldn't have been a complete failure of a product. The 4060 Ti really ought to have 12GB, but the non-Ti I think is fine with 8GB.
It's not the VRAM that counts, but the bus. 128-bit is awful for today's games. Better to buy a 256-bit Nvidia card.
Backwards compatibility. So it will still be x8 on a 2.0 or 3.0 board, which a card like this is more likely to get put into. It would be interesting to see if there are any big drops in such cases.
andy rett:

It's not the VRAM that counts, but the bus. 128-bit is awful for today's games. Better to buy a 256-bit Nvidia card.
Pulling @Undying into this reply because I know you've groaned about the bus width too: You can't just ignore memory frequency. Note the memory clock speeds - they're pretty high. The 4060's memory bandwidth is effectively 18Gbps, whereas a 4080's is 22.4Gbps, despite its 256-bit bus. The 4080 has 20% more memory bandwidth yet more than double the number of cores. The 4080 is expected to play at 4K, whereas the 4060 is probably expected to play at 1440p. That's 2.25x the pixels, and while bandwidth doesn't scale linearly with resolution, it makes a difference (especially when you consider texture detail ought to be dropped when going to 1440p to maximize performance). So if anything, the 4060's memory configuration is actually proportionately better than the higher-end cards'.
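As a quick sanity check on that resolution jump, a tiny sketch (pixel counts only; the bandwidth figures are debated in the posts below):

    # Pixel counts at the two resolutions discussed above.
    qhd = 2560 * 1440   # 1440p
    uhd = 3840 * 2160   # 4K
    print(uhd / qhd)    # 2.25: 4K pushes 2.25x the pixels of 1440p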
schmidtbag:

Pulling @Undying into this reply because I know you've groaned about the bus width too: You can't just ignore memory frequency. Note the memory clock speeds - they're pretty high. The 4060's memory bandwidth is effectively 18Gbps, whereas a 4080's is 22.4Gbps, despite its 256-bit bus. The 4080 has 20% more memory bandwidth yet more than double the number of cores. The 4080 is expected to play at 4K, whereas the 4060 is probably expected to play at 1440p. That's 2.25x the pixels, and while bandwidth doesn't scale linearly with resolution, it makes a difference (especially when you consider texture detail ought to be dropped when going to 1440p to maximize performance). So if anything, the 4060's memory configuration is actually proportionately better than the higher-end cards'.
The 4080 has a 256-bit bus at 22.4gbps memory speed = 736gbps memory bandwidth. The 4060 has a 128-bit bus at 18gbps memory speed = 288gbps memory bandwidth. Tell us again, how much of a difference is that really? Maybe more than 20%? 😛
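The standard conversion here is bus width (bits) times per-pin data rate (Gbps), divided by 8 to get bytes. A minimal sketch, which also shows the results come out in gigaBYTES per second rather than gbps (and yields 716.8 GB/s for the 4080, a bit below the 736 quoted above):

    # Total memory bandwidth = bus width (bits) * per-pin rate (Gbps) / 8.
    def mem_bandwidth(bus_bits, gbps_per_pin):
        """Peak memory bandwidth in GB/s (gigabytes, not gigabits)."""
        return bus_bits * gbps_per_pin / 8

    print(mem_bandwidth(256, 22.4))  # 716.8 GB/s for the 4080
    print(mem_bandwidth(128, 18.0))  # 288.0 GB/s for the 4060 Ti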
Undying:

The 4080 has a 256-bit bus at 22.4gbps memory speed = 736gbps memory bandwidth. The 4060 has a 128-bit bus at 18gbps memory speed = 288gbps memory bandwidth. Tell us again, how much of a difference is that really? Maybe more than 20%? 😛
I'm confused about where all that extra bandwidth is coming from. Total bandwidth is determined by bus width and frequency. In other words, 1000MHz at 256 bits in theory ought to yield roughly the same performance as 2000MHz at 128 bits. The 4080 has a much lower memory frequency (not half, but close to it), yet even if you halve the 4080's bandwidth via a 128-bit bus, you're still getting more than 288Gbps, despite the lower frequency. The only significant difference I can see is that the 4080 uses GDDR6X, which, to my understanding, does yield a substantial increase in bandwidth.
GDDR is causing confusion because its real frequency and the number of bits it can transfer every second differ from generation to generation. The original "DDR", according to its name, Doubled the Data Rate, but that's a long time ago... we are now up to 16 bits per pin per clock.

GDDR6 is quad-pumped, meaning for every tick of the clock it moves 4 pulses of data through the same wire. GDDR6X additionally uses a special modulation that transfers two data bits with every pulse, making it effectively octal-pumped.

So, the 4080 runs its memory clock at 1400 MHz, but being DDR, it transfers data on both the up and the down cycle: 2800 million pulses x 8 bits per pulse = 22.4 Gbps PER PIN. Multiply that by 256 and you get a whopping 5.73 Tbps (terabits per second); divide by 8 to get bytes, and that's 716.8 GB/s. (Not gigabits, as written by Undying.)

The 4060 Ti uses plain GDDR6. It has a higher base clock of 2250 MHz (DDR 4500), but without the two-bits-per-pulse trick, so in the end 4500 x 4 = 18 Gbps per pin. Multiply by 128 and that's 2.304 Tbps; divided by 8, that's 288 GB/s (gigaBYTES per second).

Concluding: it has only 40.2% of the memory bandwidth of the 4080.
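That derivation maps directly onto a few lines of Python. The pump factors follow the model described above (2 DDR edges x 4 pulses per edge, with GDDR6X carrying 2 bits per pulse via its modulation), and the clocks are the ones quoted in this thread:

    # Per-pin data rate following the model above:
    # clock (MHz) * 2 edges (DDR) * 4 pulses * bits per pulse, in Gbps.
    def per_pin_gbps(clock_mhz, bits_per_pulse):
        return clock_mhz * 2 * 4 * bits_per_pulse / 1000

    def total_gb_per_s(pin_gbps, bus_bits):
        return pin_gbps * bus_bits / 8  # bits -> bytes

    gddr6x = per_pin_gbps(1400, 2)  # 22.4 Gbps/pin (2 bits per pulse)
    gddr6 = per_pin_gbps(2250, 1)   # 18.0 Gbps/pin (1 bit per pulse)
    print(total_gb_per_s(gddr6x, 256))  # 716.8 GB/s (RTX 4080)
    print(total_gb_per_s(gddr6, 128))   # 288.0 GB/s (RTX 4060 Ti)
    print(total_gb_per_s(gddr6, 128) / total_gb_per_s(gddr6x, 256))  # ~0.402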
JiveTurkey:

Backwards compatibility. So it will still be x8 on a 2.0 or 3.0 board, which a card like this is more likely to get put into. It would be interesting to see if there are any big drops in such cases.
x8 on 3.0 will be just fine; x8 on 2.0 could cause a slight bottleneck. And the likelihood that someone is going to buy this and somehow still be on a motherboard from 2012 or earlier... not likely. Not impossible, but by no means realistically a market to cater to.
Aura89:

x8 on 3.0 will be just fine; x8 on 2.0 could cause a slight bottleneck. And the likelihood that someone is going to buy this and somehow still be on a motherboard from 2012 or earlier... not likely. Not impossible, but by no means realistically a market to cater to.
We'll see how fine it is when it's VRAM-limited at x8.
Undying:

We'll see how fine it is when it's VRAM-limited at x8.
It'll be fine. Full stop.