Nvidia Profits Tripled In Q4 2016

I think you have mistaken Nvidia for AMD.
I am going to have to disagree. All my previous video cards were Nvidia; the one I have now is my first AMD card, and my reason for switching was the driver crashes... Since going with an AMD card, I have not had that same experience.
I guess we will always see individuals from both camps using their (statistically useless) personal experiences with drivers as somehow representative of the thousands of other GPU owners who may not have issues. :wanker:
Entry-level gaming cards used to cost £60-£70 just a couple of years ago. Now the price has doubled. When you have only two major players (nVidia and AMD), it is very easy to fix and control prices without needing to say anything to the competitor. If it becomes too profitable, Intel will probably improve its cards. I doubt a new player will enter the market, because patent laws would be used to crush it.
Yeah, companies are randomly starting to charge us more and more. It's not like inflation is something that exists. Or the fact that entry-level GPUs are now iGPUs that we find in our processors. Firing up a few neurons before posting is encouraged here.
Somebody is doing business right for their own interests and the company's. Not necessarily the customers', though, but Nvidia keeps growing, it seems.
Yeah, companies are randomly starting to charge us more and more.
Not randomly. An entry-level gaming card is one that meets the minimum requirements of the most popular game titles. nVidia's x50 range contains the entry cards for modern gaming; the x40 and x30 ranges are for older games.

nVidia entry cards (release date - official recommended price, 2017 market price):
nVidia GTS 450 (2010 - $129, 2014 - $65, 2017 - unavailable)
nVidia GTX 750 Ti (2013 - $149, 2014/2017 - $80)
nVidia GTX 950 (2015 - $159, 2017 - $127)
nVidia GTX 1050 (2016 - $139, 2017 - $109)
Prices from www.videocardbenchmark.net (Amazon, Newegg).

Important note: the GTX 950 has similar performance to the 750 Ti. Have you noticed that the market price of the GTX 950 is much higher? One possibility is that nVidia is trying to draw down inventory and control production of new cards in order to reduce supply and keep the market prices of new cards high. Which is to be expected, considering that only two major players are left in the gaming GPU market.
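As a rough check of how differently these cards have held their price, here is a minimal Python sketch using only the launch prices and 2017 market prices quoted above (nothing official, just the figures from this post):

# Price drop from launch MSRP to 2017 market price, using the figures quoted above.
cards = {
    "GTX 750 Ti": (149, 80),
    "GTX 950": (159, 127),
    "GTX 1050": (139, 109),
}

for name, (msrp, price_2017) in cards.items():
    drop = (msrp - price_2017) / msrp * 100
    print(f"{name}: ${msrp} -> ${price_2017} ({drop:.0f}% below launch price)")

It prints roughly 46% for the 750 Ti, but only about 20% for the 950 and 22% for the 1050, which is the gap the note above is pointing at.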
It's not like inflation is something that exists.
Makes sense if you are using Russian rubles or Zimbabwean dollars. The information above shows that it has little to do with inflation (see the note).
Or the fact that entry-level GPUs are now iGPUs that we find in our processors.
The argument would make more sense if triple-A titles listed an Intel GPU in their minimum requirements. An Intel GPU runs older games fine, or newer games on the lowest possible settings.
Firing up a few neurons before posting is encouraged here.
Don't be smug unless you have made sure the facts are on your side and your opponent can't back his arguments with logic and evidence 🙂.
Okay, maybe you didn't deserve such a brash response - but you didn't think it through before posting. Even this post, which I'm replying to, is based on false premises.

Firstly, for the love of god DO NOT use that website for comparing GPUs. It's very inaccurate. Instead, use this: http://hwbench.com/vgas/geforce-gtx-750-ti-vs-geforce-gtx-950 And as we can see, the 950 is 22% faster than the 750 Ti in gaming. This automatically dismantles your first argument because considering that both cards were being produced at the same time, it makes sense for the 950 to be more expensive.

And I disagree that the x30/x40 is for old games at min details. HD Graphics is significantly slower than a 460, for example (the 460 is more than twice as fast). And you can even play modern games in 720p with those things. Sh!ttily, but you can. Thus there is still a market for the x30/x40. Just because they don't get the spotlight doesn't mean they completely suck. They're really fine for modern games in 720p. That's the definition of entry level.

Inflation applies to every country out there, not just Russia or Zimbabwe, I don't know what you're getting at. Maybe I didn't catch your point. And 'minimum requirements' frequently use very old GPUs. If you extrapolate towards iGPUs, you'll see that many modern games will run on them. Sure, some AAA titles like ROTR will run like cock but others aren't so demanding.
but you didn't think it through before posting. Even this post, which I'm replying to, is based on false premises.
I was basing my first comment on my past experience buying cards, and then I backed it up with real numbers, so it can't be called "false premises".
Firstly, for the love of god DO NOT use that website for comparing GPUs. It's very inaccurate.
I've used a well-known site, which collected real data from more than 800,000 video cards. Why, for the fictional god's love, should I believe your site, which does not clearly say where it gets its data? 🙂
And as we can see, the 950 is 22% faster than the 750 Ti in gaming.
Yes, you are right here, and I was wrong about performance. When it comes to PassMark scores, the 950 is 41% better.
This automatically dismantles your first argument because considering that both cards were being produced at the same time, it makes sense for the 950 to be more expensive.
Not enough to dismantle it. The GTX 750 was released in 2013; one year later its price had halved. The GTX 950 was released in 2015, but its price has dropped very little. There are two major explanations:
a) The manufacturing cost of the GTX 750 is very low, so its price could easily drop by half.
b) nVidia managed to take control of inventory and distribution, so they could control supply in order to increase the price/profits on the GTX 950. It works because there are not enough GPU manufacturers left to compete properly.
c) Something else...
Both explanations are normal business practice.
And I disagree that the x30/x40 is for old games at min details... Sh!ttily, but you can.
In 2013, 47% of monitors had a resolution of 1920x1080 or higher, and we are talking about PC games with gaming cards. x30/x40 cards are for 2-4 year old games and low settings. "Sh!ttily, but you can" is an accurate slogan when it comes to these cards and modern games, especially ones badly ported from consoles.
Thus there is still a market for the x30/x40.
Agree. Sometimes, I use an nVidia GT 740 to play Battlefield 4 at 1680x1050 on the lowest settings at work 🙂.
They're really fine for modern games in 720p.
The PC master race is not fine with 720p 🙂.
Inflation applies to every country out there, not just Russia or Zimbabwe, I don't know what you're getting at.
Inflation of the US dollar or the euro is too low to explain huge price differences within a span of 2-4 years, especially when prices vary by ±30%. Unlike the Zimbabwean dollar, which can be used to explain any price hike by the next morning.
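For a rough sense of scale, assuming an average US dollar inflation rate of about 1.5% per year over that period (my own ballpark, not an official figure), a couple of lines of Python show how little that compounds to:

# Compound effect of an assumed ~1.5% annual USD inflation rate (illustrative only).
annual_inflation = 0.015

for years in (2, 3, 4):
    cumulative = (1 + annual_inflation) ** years - 1
    print(f"{years} years: about {cumulative * 100:.1f}% cumulative")

Even over 4 years that is only around 6%, nowhere near a ±30% swing.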
Agree. Sometimes, I use nVidia GT 740 to ... at work
Isn't that statement alone all you need to be paying attention to? It is widely accepted nowadays that a GTX 1050 or an RX 460 is considered entry level for gaming, and both are drastically better than a 740. As stated by xIcarus, IGPs are now known as entry-level GPUs in general (so, not even gaming specific). By today's standards, anything below a 1050 or 460 is not meant for gaming. Their market is for things like low-power home and office PCs, media centers, and healthy upgrades to outdated PCs. They're also good for office-based OpenCL tasks (like spreadsheet calculations).
What real numbers? You screwed up your argument by using videocardbenchmark which is known to be unreliable, while hwbench is the de-facto standard for comparing CPUs and GPUs. I'm serious, this is not about 'why should you trust what I post'. Hwbench is pretty much universally accepted as one of the best GPU and CPU comparison web applications today. I warmly invite you to use it. Since your performance comparison was inaccurate, your price comparison is largely invalid by extension.

The 750Ti had its price cut because it was 'included' in the 900 lineup so-to-speak. The 750Ti was being produced along with the 950 as if they were the same generation of cards (they kind of were, the 750Ti is Maxwell). The 750Ti was the exception, not the rule. The 950 was completely replaced by the 1050, and thus the 950:
a) didn't get a price cut
b) its production ceased
If you look back through time, what happened to the 950 happens almost all the time. For example, the 770 didn't drop in price just because the 970 appeared. It was simply replaced by the 970; 770 production ceased at that time. There are also cases like the 290X, which got its price cut because of competition.

I feel what you're saying about resolution, however I wasn't referring to the monitor's resolution but the in-game resolution. Sometimes, with a sh!tty enough card, you are forced to drop it lower. A good number of people have 4K monitors or TVs; that doesn't mean they all rock GTX 1080s to properly play at that resolution. People with x40 cards are usually casual gamers who emphasize web browsing or watching movies and the like. Same goes for 1080p. It's the standard today, but it doesn't mean everyone can game at it. And you pretty much proved my point by saying you sometimes game on a 740 at work.

About the inflation talk, I finally understood what you were referring to. I thought you were talking about it long term, like since the 6600GT days or something like that. Short-term price increases usually happen either when they get new tech into their cards, tech that has consumed a lot of R&D budget, or when they switch to an expensive or immature fab node. As we all know, these are the first generation of cards on a 16/14 nm process. I'll admit I'm not sure about this, but I recall companies like Samsung saying that 14nm didn't bring a massive decrease in transistor cost. Something about yield rates? This might explain why Nvidia has jacked up its prices this generation. And it also makes me wonder what margins AMD is getting from their cards. It does not explain why the 950 was so expensive, though.
What real numbers? You screwed up your argument by using videocardbenchmark which is known to be unreliable
Officially, Hwbench.com is owned by LUIS AGUSTIN CIALCETA (LUISG2249@GMAIL.COM) in TRINIDAD, Uruguay, was registered in 2015, and receives seven times fewer users than PassMark's videocardbenchmark.net site. PassMark is a very well known and reputable software developer whose products have been well known amongst professionals for 20 years now.
while hwbench is the de-facto standard for comparing CPUs and GPUs
In Trinidad?
Since your performance comparison was inaccurate, your price comparison is largely invalid by extension.
No. It just shows that you did not get what I wrote.
The 750Ti had its price cut because it was 'included' in the 900 lineup so-to-speak.
It is not so simple.
b) its production ceased
Now you are getting somewhere. nVidia licenses production of its cards to third-party manufacturers and probably imposes end-of-production dates on them to limit supply. Don't forget that the main question to answer is: "Why, since the release of the GTX 750, do entry-level gaming cards no longer drop in price by 50% after two years?"
And you pretty much proved my point by saying you sometimes game on a 740 at work.
Not exactly. It runs a four-year-old game fine, at a sub-standard resolution on the lowest settings. That card is incapable of running any modern game properly, like the latest two CoDs, and it ruins the gaming experience.
I really have no idea why hwbench's provenance or amount of traffic matters (especially traffic; it's not like Passmark actually sells stuff). No offense, but at this point you're inventing arguments in order to discredit hwbench for no real reason other than winning this debate. By that logic GPUboss would be amazing for comparing GPUs, when in reality their scoring system is the biggest piece of sh!t for comparing GPU performance.
PassMark is a very well known and reputable software developer whose products have been well known amongst professionals for 20 years now.
You just owned yourself without even realizing, because you used the word 'professional' with a completely different meaning. Well known amongst the professional market, yes. Not amongst gamers. You should take a look at their products on their home page.

Back into it, Passmark is a synthetic benchmark. We are talking about games. Christ, instead of arguing with me why don't you just take a look at some of Passmark's results:
GTX 1080 -> 12,001 points
GTX 980Ti -> 11,392 points
So the 1080 is only 5.3% faster than the 980Ti as far as Passmark is concerned. How about we mix it up a little?
GTX 970 -> 8,591 points
Fury X -> 8,301 points
The 970 is not only near the Fury X, it's even 3.5% faster. You seriously think Passmark is a good indication of gaming performance? Dude, sorry, I just can't conceptualize how you could possibly think this. At this point I'm asking myself if you're trolling or just completely ignorant. There, I'm smug again. Because you deserve it again.

Oh, and for the lolz: Passmark has an entry for a 970Ti. Yep. 970Ti. Chew on that one for a second.

I won't even bother replying to the rest of your post, since it only contains biased assumptions like "Trinidad" or "No. It just shows that you did not get what I wrote" without giving a single fact to back them up. It's fine not to know stuff. But claiming you do when in reality you know exactly sh!t puts you in a very bad light. Case closed. Unless you actually have something intelligent to say, I won't bother with you anymore. Sorry.
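If anyone wants to sanity-check those two gaps, here is a quick Python one-off using only the PassMark scores quoted above:

# Percentage gaps implied by the PassMark scores quoted above.
scores = {
    "GTX 1080 vs GTX 980Ti": (12001, 11392),
    "GTX 970 vs Fury X": (8591, 8301),
}

for label, (higher, lower) in scores.items():
    print(f"{label}: {(higher - lower) / lower * 100:.1f}% gap")

Which comes out to roughly 5.3% and 3.5%, the figures above.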
I'd have to agree that Passmark (as well as other synthetic benchmarks) is a generally useless test. Any review that uses them, I skip right over, because they contribute nothing toward my knowledge of the product. Though there is one thing they do that is helpful: they illustrate the potential of a product compared to another one of its kind in a different performance tier. So, for example, they're great at showing how much better a 1080 is than a 1070, but they're not very good at showing how much better a 1080 is than a 980.
you're inventing arguments in order to discredit hwbench for no real reason other than winning this debate.
There is nothing to invent. In order to discredit a reputation, it must exist first. I know where PassMark got their data. Please tell me where hwbench.com acquired its data.
You just owned yourself without even realizing, because you used the word 'professional' with a completely different meaning.
PassMark has been in the testing business for 20 years, and they wrote GPU test software to prove it. They are professionals. Their tests are more on the "synthetic" side, but the data is accurate.
Back into it, Passmark is a synthetic benchmark. We are talking about games.
Sure, and an apple is not made of chemicals 🙂. Synthetic tests have the advantage of giving the pure performance of the product with minimal impact from other factors (bottlenecks). Synthetic results are more comparable, due to the minimised influence of other components. Real-world tests are also important, but both types have their advantages and disadvantages.
Christ, instead of arguing with me why don't you just take a look at some of Passmark's results:
GTX 1080 -> 12,001 points
GTX 980Ti -> 11,392 points
So the 1080 is only 5.3% faster than the 980Ti as far as Passmark is concerned.
Christ won't help you 🙂. A synthetic test shows the maximum performance that can be achieved on the hardware. If you use these cards with the same amount of memory, inside a PC with the fastest CPU, set them to default frequencies (no overclocking) and set game settings to the highest quality (Ultra 4K), so that only the GPU is the bottleneck, then in theory the 1080 should perform only 5.3% faster than the 980Ti. For example, the 1080 was just 10% faster than the 980Ti in similar conditions: http://www.trustedreviews.com/nvidia-geforce-gtx-1080-review-performance-benchmarks-and-conclusion-page-2
It's fine not to know stuff. But claiming you do when in reality you know exactly sh!t puts you in a very bad light.
Projecting? 🙂
Case closed. Unless you actually have something intelligent to say, I won't bother with you anymore. Sorry.
a) Do not trust sites which hide their data sources. b) Learn the advantages of synthetic tests. KEK
@EJocys I don't think you understand the argument against synthetic benchmarks... It doesn't matter what they reveal if the numbers they produce can never be achieved in any real-world benchmark.

Think of it like the HP or kW rating of a car engine. Many brands state the power at the crank at its peak, which for many cars is as high as 6000 RPM. That number is useless and meaningless, because not only is it physically impossible for you to achieve, but you would have to rev your engine near redline the entire time, and that's just plain annoying. That number doesn't account for how much actual USABLE power you have, and that usable power can also be sapped by weight or crappy tires.

Synthetic benchmarks for PCs are no different; they tell you nothing useful, they just give you a really big number to toot your horn about. At the end of the day, if your Passmark score is the only thing your product is #1 at, you do NOT have the best product. It doesn't matter how good a product is in theory when stuff like drivers, CPU overhead, PCIe overhead, memory latency, poorly optimized software, etc etc get in the way.
Wow, you actually brought arguments this time. You did a sh!t job because nothing from what you said holds true, but I congratulate you for actually trying.

Hwbench's reputation? Go on /r/PCMR, /r/buildapc or /r/pcgaming and make a post stating how Passmark is accurate when talking about gaming performance. If you get a positive vote ratio I'll eat my own d!ck. That thing, along with CPU/GPUboss, is despised in reddit's hardware communities, and with good reason. You will see most comments quoting hwbench, anandtech and even tomshardware's GPU hierarchy list as being the most accurate you can find. I don't even need to bring hwbench's reputation into this (which is pretty good, since it has the respect of multiple big reddit communities). It's about Passmark's crap reputation. That thing is not good for comparing game performance. Stop denying this already.

Schmidtbag's analogy with the peak power rating in cars is spot on regarding synthetic benchmarks in general. Synthetics usually give you a best-case scenario where certain parts of the GPU (depending on the type of benchmark) are used at full load (without any bottleneck between them) at the same time. This does not happen in reality, just like having a 32-core CPU doesn't necessarily mean better gaming performance compared to a quad-core. There will be internal and sometimes external bottlenecks.

Did you even look at that poor excuse of a review you posted?
1. Their sample size is 5 games, which is ridiculously small.
2. They only tested 4K, which may or may not be indicative of performance across the board; different cards react differently to certain resolutions.
3. They tested what looks like a custom 980Ti against a Founders Edition 1080, which again skews the result set. They made up for it by overclocking the Founders Edition, but not in all tests.

Now regarding your conclusion about this benchmark: if you average out the difference between the 980Ti and the non-OC 1080 in those game benchmarks, you get a difference of exactly 19.228915662650596%. 19+% is pretty far from the 10% you're quoting. Just another piece of proof that you don't know what you're talking about. Did you even look at the review you posted? In fact, if you add the overclocked results, you'll see that it starts to resemble what anandtech and hwbench are saying, that the difference averages out around 30% in favor of the 1080: http://hwbench.com/vgas/geforce-gtx-980-ti-vs-geforce-gtx-1080 http://www.anandtech.com/bench/product/1715?vs=1714 All of this happens while Passmark says the difference is 5%. Are you really so blind about this?

And let's say I understand you don't trust hwbench. But if you don't trust anandtech, frankly, you're an idiot. There's no way around it.

Now, instead of being a wise-ass, I suggest you thoroughly read my post this time. It's clear you didn't bother last time, and I'm stuck here repeating the same basic things to you. I assure you this won't happen again. However, if you choose to ignore the hard and concise facts I presented to you in this post, you're simply empty-headed. Have a good read.
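For what it's worth, that kind of average is presumably computed along the lines of the sketch below; the FPS pairs are placeholders I made up to show the method, not the actual figures from the linked review:

# Averaging the 1080's per-game gain over the 980Ti.
# The FPS pairs below are placeholders, NOT the review's real numbers.
results = [  # (gtx_980ti_fps, gtx_1080_fps)
    (52, 62),
    (45, 54),
    (60, 71),
]

gains = [(fps_1080 - fps_980ti) / fps_980ti * 100 for fps_980ti, fps_1080 in results]
print(f"average per-game gain: {sum(gains) / len(gains):.1f}%")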
@EJocys I don't think you understand the argument against synthetic benchmarks... It doesn't matter what they reveal if the numbers they produce can never be achieved in any real-world benchmark.
Synthetic benchmarks show the maximum, pure, achievable power of the hardware. In order to unlock that maximum potential, no other component must be a bottleneck.
Synthetic benchmarks ... tell you nothing useful
Incorrect. They tell you the actual power of the product.
It doesn't matter how good a product is in theory when stuff like drivers, CPU overhead, PCIe overhead, memory latency, poorly optimized software, etc etc get in the way.
This is exactly the flaw of real-life benchmarks. If the GPU is put into a powerful PC and the benchmark is run on optimised software, then the real-life benchmark should match the results of the synthetic test. The problem is that real-life benchmarks are frequently less reliable, due to the unclear influence of other factors. People are frequently misled when somebody shows "real-life" results for a GTX 1080 and a 980 Ti, where in real life the GTX 1080 could be an overclocked 8GB card and the 980 Ti a 2GB card tested in different conditions. Both kinds of test, synthetic and real-life, have their advantages. Throwing away synthetic tests is just throwing away good information.
Doesn't change the fact that synthetics are crap at measuring game performance. Game performance is why you buy a card. Don't try to twist this into an argument about whether synthetics are useful or not. This debate was sparked when you stated that synthetics are a good measure of gaming performance. Schmidtbag is talking about real world performance, and I agree with him: synthetics are useless if you're interested in real world performance. That's exactly what I proved you wrong about.
Doesn't change the fact that synthetics are crap at measuring game performance.
If we continue with the car analogy provided by schmidtbag, then a "synthetic test" is an engine test designed to measure the maximum horsepower of the engine. A "real life" test is designed to measure the maximum speed, in kilometres per hour, of a car with that engine.
Game performance is why you buy a card.
When it comes to a GPU, you are buying the engine, not the car. Both tests provide valuable information.
synthetics are useless if you're interested in real world performance.
Anything is useless if you don't know how to use it 🙂.
A synthetic is an engine test with the purpose of measuring crank horsepower. And no, the GPU is not the engine. The engine is the GPU's cores. In reality, you lose up to 30% of the engine's power to the transmission. But what about the car's grip? Or what about the engine's weight and position, which can adversely affect handling? What about the gear ratios? Total weight?

A Fiesta ST has 180 HP and goes around the Top Gear track in 1:32.70. An Audi TT 3.2 Quattro has 250 HP yet does the same track in the exact same time. Proof: http://fastestlaps.com/tracks/top-gear-track

An engine is just a number in a sea of variables. Its power is measured by a synthetic benchmark, but in a real-life workload (a race track) its performance differs to such a degree that the engine's power hardly matters at all. The same thing happens with GPUs.

Since you can't bring a single piece of evidence to support your claims, you've unfortunately lost this debate. Despite your apparent ignorance, I hope you learned something from this.
You did a sh!t job because nothing from what you said holds true.
You've mentioned God and Christ, thrown insults, and are now dealing in absolutes. I can't put my finger on who you remind me of 🙂.
but I congratulate you for actually trying
I would thank you if it were less condescending.
Hwbench's reputation? Go on /r/PCMR, /r/buildapc or /r/pcgaming and make a post stating how Passmark is accurate when talking about gaming performance.
Are you claiming that /r/PCMR, /r/buildapc and /r/pcgaming are as smart as you?
If you get a positive vote ratio I'll eat my own d!ck.
I guess you are an expert in this 🙂.
It's about Passmark's crap reputation. That thing is not good for comparing game performance. Stop denying this already.
PassMark is for comparing GPU performance.
This does not happen in reality, just like having a 32-core CPU doesn't necessarily mean better gaming performance compared to a quad-core. There will be internal and sometimes external bottlenecks.
This is exactly why synthetic tests are useful. They give you pure performance with minimal impact from the environment. Same for car engines: when you compare car engines, you use horsepower; when you compare racing cars, you use kilometres per hour.
In fact, if you add the overclocked results, you'll see that it starts to resemble what anandtech and hwbench are saying, that the difference averages out around 30% in favor of the 1080:
You are ignoring the conditions I clearly mentioned when describing when synthetic and real-life tests begin to show similar results. The problem with real-life tests is that you have to clearly specify all the conditions under which the tests were done (the hardware specification of the PC). "Real-life" results can be misleading to a customer whose PC configuration is different from the one used in the test. Combining the average FPS of multiple 'real-life' tests is also flawed, because older cards have more of their tests done on older PCs with more bottlenecks. The same goes for synthetic tests, but to a smaller degree, because there are fewer things to influence the results.
http://hwbench.com/vgas/geforce-gtx-980-ti-vs-geforce-gtx-1080 http://www.anandtech.com/bench/product/1715?vs=1714 All of this happens while Passmark says the difference is 5%. Are you really so blind about this?
Please show me the conditions under which these tests were done and the exact specs of the card and PC. Oh, yes, you can't, because this data is unavailable. I can trust tests if all conditions are clearly revealed. On the PassMark site you can go to individual tests and see the details. The same GPU can show a 50% difference in performance when the test is done on an old-generation versus a new-generation CPU. http://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+980+Ti&id=3218 This is the problem with some "real-life" test sites, which combine tests from different times.
But if you don't trust anandtech, frankly, you're an idiot. There's no way around it.
Trying to construct a "straw man" argument? "Tit for tat", so let me try it too: if you don't think the Earth is round, frankly, you're an idiot. There's no way around it. Anandtech looks much better. I think you've missed the point. I am not a fanatic when it comes to different types of tests 🙂. I've said multiple times that both test types have their advantages and disadvantages.