NVIDIA GeForce Ampere 3DMark Time Spy Benchmarks show it to be 30 percent faster than RTX 2080 Ti
In the category "completely a rumor": news hit the web today that an alleged NVIDIA GeForce "Ampere" GPU has made an appearance in 3DMark Time Spy. No name details are noted down, so it is assumed to be a GeForce RTX 3080 or RTX 3090.
Rumored specs have been doing the rounds for a while now, but this time a 3DMark Time Spy score was dug up by _rogame (Hardware Leaks). The card listed scored 18257 points, roughly 30 percent faster than the RTX 2080 Ti Founders Edition. Futuremark SystemInfo does bring a readout of the GPU clock at 1935 MHz with a memory clock at 6000 MHz. That last one is interesting, as typical actual memory clocks are listed at 1750 MHz for 14 Gbps GDDR6. Please keep the usual disclaimers in mind.
Rogame claims to know with 100 percent certainty that this test was run by an Nvidia employee.
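As a rough back-of-the-envelope check on that memory readout, here is a minimal sketch assuming the usual GDDR6 relationship of an 8x effective data rate over the actual memory clock; the helper function and the printed comparison are illustrative only, not anything reported by SystemInfo itself.

```python
# Back-of-the-envelope check on the memory readout, assuming the usual GDDR6
# relationship: effective per-pin data rate = actual memory clock x 8
# (e.g. 1750 MHz actual -> 14 Gbps). Illustrative sketch only.

def gddr6_effective_gbps(actual_clock_mhz: float) -> float:
    """Effective per-pin data rate (Gbps) implied by an actual GDDR6 clock (MHz)."""
    return actual_clock_mhz * 8 / 1000

print(gddr6_effective_gbps(1750))  # 14.0 -> matches typical 14 Gbps GDDR6
print(gddr6_effective_gbps(6000))  # 48.0 -> far off, which is why the 6000 MHz readout stands out
```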
NVIDIA GeForce RTX 3080 Now Better Visualized with 3D Renders - 06/08/2020 09:32 AM
The bubble busted over the weekend when photos leaked of the alleged GeForce RTX 3080, the new front and back fan design is pretty unique. A Reddit user now has made some 3D renderings of the card, pr...
Here are the PC requirements for F1 2020, AMD Ryzen 5 2600X and Nvidia GeForce GTX 1660 Ti - 06/04/2020 07:34 AM
Codemasters announced the minimum and recommended requirements for its annual F1 game, in this case, F1 2020. The requirements are reasonable for the times and taking into account the visual quality i...
Games will only be on Nvidia GeForce Now after developer opts-in - 05/28/2020 08:12 AM
GeForce NOW is an extension of the PC ecosystem. There is no cost for developers — games just run without difficult porting requirements — helping them reach millions of players who don’t have g...
Rumor: NVIDIA GeForce RTX 3070 and 3080 Coming Q3 2020 + Specs - 04/29/2020 02:25 PM
The rumor mill can't stop the chatter about NVIDIA's series 3000 cards, and that will probably stay that way until announced. Today yet another twitter user posted a chart, with names and specificat...
NVIDIA GeForce Now Game Streaming Service Loses Xbox Game Studios and Warner Brothers - 04/21/2020 10:42 AM
Diversification often leaves a market all splintered up, each company holding its own ground and tied exclusively towards a service. Take for example Streaming services like (initially) Netflix, now d...
Senior Member
Posts: 11809
Joined: 2012-07-20
Same thing you accuse AMD's OGL drivers of. If you used standard features of the API, it would not run for them while it would for all others.
This is one thing everyone should understand. That's what raytracing is about.
Sadly, not with current RTX, nor with next ones or RDNA2.
I have seen some tech talk about current good practices for distributing raytracing power (from AMD).
Basically one AO/GI ray per pixel and one shadow ray per pixel (the showcase had one light source).
And then 1/16th to 1 reflection ray per pixel.
That's why AMD had that ugly-looking all-reflective demo: to show that they can do everything that's currently being done via raytracing on every single pixel of the screen, not just on some particular parts.
But that was not enough to do proper high-quality photorealistic raytracing.
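To put that budget into numbers, a minimal sketch; the per-pixel figures are the ones quoted above, while the 1440p resolution is an assumption picked purely for illustration.

```python
# Minimal sketch of the per-frame ray budget described above: 1 AO/GI ray and
# 1 shadow ray per pixel, plus 1/16 to 1 reflection ray per pixel.
# The 2560x1440 resolution is an illustrative assumption.

def rays_per_frame(width: int, height: int, reflection_rays_per_pixel: float) -> int:
    pixels = width * height
    ao_gi_rays = 1.0    # one ambient-occlusion / global-illumination ray per pixel
    shadow_rays = 1.0   # one shadow ray per pixel (the showcase had a single light)
    return int(pixels * (ao_gi_rays + shadow_rays + reflection_rays_per_pixel))

print(rays_per_frame(2560, 1440, 1 / 16))  # ~7.6 million rays per frame
print(rays_per_frame(2560, 1440, 1.0))     # ~11.1 million rays per frame
```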
Senior Member
Posts: 8053
Joined: 2014-09-27
Same thing you accuse AMD's OGL drivers of. If you used standard features of the API, it would not run for them while it would for all others.
Yeah, it works for all others. That's the thing. OpenGL allows vendor extensions. They all have them. Products can support them. ATi/AMD has a ton of extensions published, same as NVidia.
Here is the registry. Anybody can contribute.

That's how OpenGL works.
You still haven't answered how you explain your EVIL NVIDIA theory when the AMD OpenGL Linux driver is actually better than NVIDIA's.
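As a concrete illustration of the vendor-extension point, here is a small sketch that groups a handful of real extension names by their prefix; the list is hard-coded for illustration, since on real hardware you would read it from the driver (e.g. via glGetStringi(GL_EXTENSIONS, i)).

```python
# Sketch: OpenGL extension names are namespaced by vendor/prefix (ARB, EXT, NV, AMD, ...),
# which is how every vendor can publish its own extensions alongside the common ones.
# The list below is hard-coded for illustration; a real app would query the driver.

from collections import defaultdict

def group_by_prefix(extensions: list[str]) -> dict[str, list[str]]:
    """Group GL extension names by the vendor/prefix token after 'GL_'."""
    groups: dict[str, list[str]] = defaultdict(list)
    for name in extensions:
        prefix = name.split("_")[1] if name.startswith("GL_") else "other"
        groups[prefix].append(name)
    return dict(groups)

reported = [
    "GL_ARB_direct_state_access",         # cross-vendor, ARB-ratified
    "GL_EXT_texture_filter_anisotropic",  # multi-vendor EXT
    "GL_NV_shader_buffer_load",           # NVIDIA-specific
    "GL_AMD_pinned_memory",               # AMD-specific
]
print(group_by_prefix(reported))
```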
This is one thing everyone should understand. That's what raytracing is about.
Sadly, not with current RTX, nor with next ones or RDNA2.
No, it isn't. It's not a replacement for raster techniques; that would be unfathomably stupid. But it can do things that they cannot do.
Again, why do you excuse their obvious incompetence?
Senior Member
Posts: 11809
Joined: 2012-07-20
Same thing you accuse AMD's OGL drivers of. If you used standard features of the API, it would not run for them while it would for all others.
Yeah, it works for all others. That's the thing. OpenGL allows vendor extensions. They all have them. Products can support them. ATi/AMD has a ton of extensions published, same as NVidia.
Here is the registry. Anybody can contribute.

That's how OpenGL works.
You still haven't answered how you explain your EVIL NVIDIA theory when the AMD OpenGL Linux driver is actually better than NVIDIA's.
Except that the situation with nVidia's HW design at the time was that their HW was not even capable of doing those things. (The card which saved nVidia was the Riva 128, and it had a HW limit of 8 out of the 32 blend modes that DX mandated. That's a lot in the era of fixed-function HW, in terms of texture manipulation capability.)
This is one thing everyone should understand. That's what raytracing is about.
Sadly, not with current RTX, nor with next ones or RDNA2.
No, it isn't. It's not a replacement for raster techniques; that would be unfathomably stupid. But it can do things that they cannot do.
Again, why do you excuse their obvious incompetence?
Whose incompetence? What's your problem? I am not even mentioning anything that can be read as an excuse. It is a general raytracing post about image quality, and it does not even say anything about rasterization.
Are you reading something in your mind other than what I am writing?
Senior Member
Posts: 22204
Joined: 2008-07-14
- Stop mentioning Intel or nVidia in your advertising. Ever notice how they never mention AMD?
Apparently you're blind since Intel has mentioned AMD quite a bit over the last couple years.
- Stop turning every damn Intel/nVidia thread into an AMD circle jerk. AMD fans have become the annoying protester with a megaphone trying to disrupt someone speaking. Yea, your like-minded friends will high five you, but everyone else thinks you're being an idiot.
Pot, meet Kettle.....
You and many others regularly attempt to turn every AMD related thread into Intel/NVidia "circle jerks".....
Senior Member
Posts: 8053
Joined: 2014-09-27
The issue here is that you are biased and look for evidence that supports that bias, while the very people you cite say something else.
And that's from 2014 on top of it. That's the beginning of GCN, and a time when nobody sane even touched OGL outside of Linux.
This is so wrong. OpenGL is the entry point for graphics programming at most universities in the world. It's fine as an API, and it isn't going anywhere. Of course I'm biased. It runs like dogshit. It always has. The same hardware on Linux runs fine. I still don't understand how you explain that.
And this one needs very separate addressing, because it tells me there is something very wrong with your views.
AMD paid tons of money for ATi (they actually overpaid, as that was the only way), and that is the reason why nVidia does not have a GPU monopoly.
That was a big financial blow taken for your sake as a result!
It wasn't for my sake or yours. Hector Ruiz saw that unless AMD had two sources of income, and unless they could compete with Intel, which was making its own GPUs at the time (the ones we know as integrated graphics now), AMD would be dead. That's why they overpaid: so that they wouldn't be irrelevant in five years. No company does you favors.
And for your information, AMD has Intel on one side and nVidia on the other.
That's $75B of revenue against AMD's CPU division, and $11B of revenue against the GPU division, where AMD's revenue is $7B.
Go and tell me what a bad job they do in comparison to how they are "paid".
Sure. So if your company is smaller, and surrounded by cutthroat giants, the solution is to mess up your driver releases, so that the unpopular OS gets the good driver. If anything, because AMD is in that situation, it makes decisions like these even worse for them as a company.
Not necessarily sure I'd agree with that - I'm currently playing MGSV at 5120x2880 DSR on my 1440p native monitor and the image quality increase is enormous. It's incredibly impressive.
Probably a bad example, but look at it another way. A film at 1080p still looks much more real than any game, at any resolution. If you can have processing that enables that kind of fidelity per pixel, you don't need as many pixels.
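For context on the pixel-count side of that trade-off, a quick arithmetic sketch; the resolutions are the ones mentioned in the two posts above.

```python
# Quick pixel-count comparison behind the "better pixels vs. more pixels" point.
# Resolutions taken from the posts above; only the arithmetic is shown here.

def pixels(width: int, height: int) -> int:
    return width * height

native_1080p = pixels(1920, 1080)   # 2,073,600
native_1440p = pixels(2560, 1440)   # 3,686,400
dsr_5k = pixels(5120, 2880)         # 14,745,600

print(dsr_5k / native_1440p)  # 4.0  -> the DSR setting shades 4x the native 1440p pixels
print(dsr_5k / native_1080p)  # ~7.1 -> and ~7x the pixels of a 1080p film frame
```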
Didn't they have revenue of $2B a year ($500M per quarter) while taking a $100M loss per quarter at that time? In Q1 2006 they made more marketing announcements than they did in the entire year before, just to keep their stock price from crashing fast. So they were already about to sell.
Jensen, who committed fraud to keep nVidia afloat in bad times. And while it enabled him to get a very powerful GPU out of the house, it was far from a good one in terms of the industry standard. And then he went around and got game developers to bend and not follow the industry standard, so games would actually run on those cards.
Fraud is a big word for what every company does in its growth cycle, which is touting their projected value. That's completely normal and every investor in the world knows that. What do you mean by "industry standard"? What did they do?
That resulted in one simple thing: the death of most of the other GPU manufacturers, who did follow the standard and who could not benefit from implementing those features.
(They invested transistors into now-useless features, and were beaten by nVidia, who invested transistors into doubling the performance of the few features they did have.)
Jensen Huang even admitted it while painting himself as a hero. He was daring, he bet everything he had (and even what he did not have), and he won. But he is not someone I would trust. And without trust, there is no cooperation.
All those people who complain about the lack of GPU competition and praise nVidia at the same time...
This all sounds very romantic and not very specific. Nvidia is here because 3dfx killed themselves with the 3000 series, and because of Maxwell. And they won't go anywhere because of CUDA. They were lucky once and then competent twice. AMD has always been either very unlucky, bullied, or shortsighted.