GeForce GTX TITAN P might see August Announcement

Titan Penis? If they manage to drop that Tesla price from $10,000 to something like two grand (don't expect this to be under $1,500 if it launches now), and can produce more than two at a time, it's possible, I guess. It would be good for PR, even if there are none to be found.
If a 1080 with literally less than half the hardware reaches $800 for a meaningless FE, then this at $1,500 would be a comparative bargain. Especially if they leave the compute part intact; then it's a steal.
Why would they waste wafers on a consumer-grade Titan P? As they stated, they have enough Tesla P100 orders that they won't be able to fulfill them until the end of the year.
Why would they waste wafers on a consumer-grade Titan P? As they stated, they have enough Tesla P100 orders that they won't be able to fulfill them until the end of the year.
Because this will likely not be a full GP100/102. It will be a way to make money off of otherwise junk wafers. I expect this to be like the original Titan and be a cut-down GPU.
Because this will likely not be a full GP100/102. It will be a way to make money off of otherwise junk wafers. I expect this to be like the original Titan and be a cut-down GPU.
That would make quite a lot of sense, until we remember that they could do the same with compute GPUs too, and at higher margins. Unless they have some kind of market research that tells them the most profitable curve of CUs versus cost for compute GPUs (i.e., the maximum profit per CU they can make per wafer with GPGPU), and it has reached the point where it makes sense to try the consumer market.
So HBM2 availability apparently was never the problem; Nvidia simply chose not to use it in the 1080 (or 1070). AMD isn't even finished with the Vega GPU itself, so the memory situation is inconsequential in their case.
Where are the AMD fanboys to say Nvidia is in a rush because Vega will be so powerful that they're trying to sell as much high-end Pascal as they can? LMAO :evilgrin:
Reading comprehension seems to be in short supply lately.
Isn't GP102 a different chip? I personally don't even think the Titan P will come with HBM2 at all. That's not to mention that GP100 has a split SM and GP104 doesn't. Nvidia is separating its compute cards and gaming cards more than usual with Pascal.
So HBM2 availability apparently was never the problem; Nvidia simply chose not to use it in the 1080 (or 1070). AMD isn't even finished with the Vega GPU itself, so the memory situation is inconsequential in their case.
How did you figure out that HBM2 availability wasn't a problem? Otherwise they would have used it instead of GDDR5X for the 1080, because that isn't exactly the most common memory either.
You don't switch from GDDR5(X) to HBM(2) on a day's notice; the whole PCB layout is affected by that decision. As a manufacturer you can't just say "yeah, we'll wait as long as possible on the memory decision and then go with whatever is best available." You may be able to do that with any kind of SDRAM module (GDDR5, GDDR5X), but HBM means a change in architecture, so we are talking about a different board design.
HBM2 costs significantly more than GDDR5X and it arguably has zero benefit in gaming. The most you get out of it is the memory controller shrink. The difference in power is maybe 15 W at most compared to GDDR5X, and might be even less on 16nm. HBM2 was never going on the 1080. Edit: I should have said "little" benefit in gaming, not zero. It obviously has some, as I outlined in my post. My point is that the benefit isn't worth the cost trade-off, especially with a 1080.
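For a rough sense of the numbers being debated here, a back-of-the-envelope bandwidth comparison (a minimal sketch in Python; the GTX 1080 and Tesla P100 figures are their public launch specs, while the 384-bit GDDR5X configuration is just the speculation from this thread):

# Memory bandwidth in GB/s = bus width (bits) x per-pin data rate (Gbps) / 8
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

configs = {
    "GTX 1080 (256-bit GDDR5X @ 10 Gbps)":        (256, 10.0),
    "Speculated 384-bit GDDR5X @ 10 Gbps":        (384, 10.0),
    "Tesla P100 (4096-bit HBM2 @ ~1.4 Gbps/pin)": (4096, 1.4),
}

for name, (width, rate) in configs.items():
    print(f"{name}: {bandwidth_gbs(width, rate):.0f} GB/s")

# Prints roughly 320, 480 and 717 GB/s respectively. HBM2's advantage is bandwidth
# per watt and board area rather than raw throughput a 1080-class card is starved for.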
How did you figure out that HBM2 availability wasn't a problem? Otherwise they would have used it instead of GDDR5X for the 1080, because that isn't exactly the most common memory either.
If the performance of the 1080 was where they wanted it to be with GDDR5X, there was literally no reason to go ahead with HBM2, which would probably give them enough of a bump to create another product cycle after the 1080.
Who wants to speculate that the 1080 Ti will be 384-bit GDDR5X?
Who wants to speculate that the 1080 Ti will be 384-bit GDDR5X?
I honestly think the Titan P will be too.
Can't wait, 1080 for sale. 🙂
Can't wait, 1080 for sale. 🙂
I'll give you $250 for that outdated 1080 you have. :banana:
Oh great, a $1k+ Titan P, then Vega in October (or so they say), and then the Ti the same week or right before the launch of Vega... There have got to be some other cards from AMD, I would think, to fill in between the 480 and Fury?
RX 490?
I'll give you $250 for that outdated 1080 you have.
I'll go to $300 🙂