AMD might sell ATI and split up into separate companies

https://forums.guru3d.com/data/avatars/m/128/128096.jpg
Equally shocking is how it completely obliterates the entry-level GPU market: [spoiler] http://media.bestofmicro.com/T/J/497431/original/17-IGP-GTA-V.png [/spoiler] But the biggest upset isn't even the performance, it's the insanely low power usage (especially when you compare it to adding a discrete entry-level card to a system). The R7 250, by comparison, uses at least 30 W by itself. [spoiler] http://media.bestofmicro.com/S/5/497381/original/07-Power-Consumption-Total-Idle.png http://media.bestofmicro.com/S/4/497380/original/06-Power-Consumption-Total-Gaming.png [/spoiler]
https://forums.guru3d.com/data/avatars/m/128/128096.jpg
Here's the bottom line (sorry for triple posting), the semiconductor business is very unforgiving. AMD's board member Robert Palmer said himself: "Designing microprocessors is like playing Russian roulette. You put a gun to your head, pull the trigger, and find out four years later if you blew your brains out."
https://forums.guru3d.com/data/avatars/m/235/235344.jpg
Have to remember that quote; love it.
data/avatar/default/avatar03.webp
I'm just going to requote this since I don't want to waste time writing something similar in my own words:

What happened was that AMD had some of the best integrated graphics available. They had DX11 functionality integrated on CPUs long before Intel. And these weren't ****ty Intel HD3000 POS's; these were really good GPUs (for the size), which could actually handle some modest gaming as well as genuine compute tasks. Their problem was that this great IGP was attached... to a (relatively) ****ty CPU. And when people buy CPUs, they buy them to be CPUs first and foremost.

AMD knew this, but they also bet on GPU-threading technologies like OpenCL and C++ AMP, which would really showcase their APUs. They bet that applications in 3-4 years would use this tech to make substantial performance gains, so the crappy CPU wouldn't matter so much. That didn't work out, which meant that these great GPUs went mostly unused.

Coupled with this is the fact that gaming performance has been increasingly limited by the CPU rather than the GPU. Even at the medium end, the single-threaded nature of APIs like D3D11 and OpenGL meant that a slow CPU could hurt graphics performance. Remember when AMD was complaining about how something needed to be done about driver overhead in games? This is why: their CPUs couldn't cut it. This is also why they created Mantle and kicked the entire graphics API industry in the balls, leading to Metal/Vulkan/D3D12.

But as with many things with AMD, it's too late; the damage was already done. If this had happened 2-4 years ago, APUs could have really been something. But now, Intel has figured out how to make a decent CPU-integrated GPU. And a half-decent driver for it. So even that advantage is lost.
I believe the main reason was further compounded by the fact that most users aren't heavy gamers. The number of people playing even moderately intensive games on the PC is slim at best compared to the vast number of PC users out there. For them, something as simple as the HD3000 (which is almost 2X slower than the HD4000) is enough for almost everything, as it plays videos fine and handles Flash games pretty nicely too (that's more CPU than GPU anyway).

Where AMD absolutely missed the mark was splitting FX and APUs into different markets. If AMD had integrated GPUs into the FX chips, it would have been a no-compromise solution, unlike the current lopsided APUs. The cost would have been high, and thermals might not have been great, but it might have ended better than the situation AMD is in now for CPUs. By leveraging the OpenCL acceleration built into multiple prosumer applications (Adobe's suite, many other video editors, some rendering packages), such a chip could have become a viable, cheap workstation part; AMD + a dedicated GPU (+ iGPU) could probably even have matched Intel + the same dedicated GPU.

Intel has now caught up, but its chips are aimed at a different crowd. They are probably looking at slim computers, things like HTPCs, AIOs, and the like, which sometimes demand a little gaming/compute without a dedicated GPU while utilizing the fast CPU the majority of the time. The fact remains that most applications need the CPU much more than the GPU, and that balance will likely remain the same in the future. Some things just can't be threaded well enough to take advantage of even a few CPU cores, much less a GPU.
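To make that "OpenCL acceleration" concrete, here's a minimal sketch of the offload model the APU bet relied on; this is illustrative code of my own, not anything from AMD or the applications named above, and it assumes an OpenCL-capable GPU driver. Error handling is trimmed for brevity.

[code]
/* The host CPU compiles a tiny kernel and hands an embarrassingly
 * parallel loop to whatever GPU the driver exposes. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *v, float k) {"
    "    v[get_global_id(0)] *= k;"
    "}";

int main(void) {
    float data[1024];
    for (int i = 0; i < 1024; i++) data[i] = (float)i;

    cl_platform_id plat;  clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;     clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    /* On a discrete card this copy crosses PCIe; on an APU this round
     * trip is exactly what HSA was later meant to eliminate. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(factor), &factor);

    size_t global = 1024;  /* one work-item per element, all in flight at once */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[3] = %f\n", data[3]);  /* expect 6.0 */

    clReleaseMemObject(buf); clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
[/code]

The catch is visible in the buffer copy: the model only pays off when the kernel does enough work to amortize the setup, which is part of why only filter- and encode-style workloads ever adopted it.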
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
Really, the Intel iGPUs don't have good compute power? http://www.tomshardware.com/reviews/intel-core-i7-5775c-i5-5675c-broadwell,4169-7.html [spoiler] http://media.bestofmicro.com/S/J/497395/original/12-IGP-Maya-OpenGL.png http://media.bestofmicro.com/S/I/497394/original/13-IGP-Showcase-DirectX.png http://media.bestofmicro.com/S/O/497400/original/16-IGP-Cinebench-OpenGL.png [/spoiler] Where's your HSA god now?
You write "compute" and link rendering benchmarks. That review completely avoided iGPU acceleration of anything, by the way: every app that can be and is GPU-accelerated on the AMD platform is marked (CPU only). So, when talking HSA, why do you even link something that disables it for testing purposes?
data/avatar/default/avatar19.webp
You write "compute" and link rendering benchmarks. That review completely avoided iGPU acceleration of anything, by the way: every app that can be and is GPU-accelerated on the AMD platform is marked (CPU only). So, when talking HSA, why do you even link something that disables it for testing purposes?
What are you talking about? Those applications do use the GPU. Why else would Haswell be that much slower than Broadwell? The link you posted is the one that is CPU-only. The fact remains that Broadwell's GT3e is simply faster than AMD's current APUs, but at 2X the price (maybe a bit more than that due to motherboards).
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
What are you talking about? Those applications do use the GPU. Why else would Haswell be that much slower than Broadwell? The link you posted is the one that is CPU-only. The fact remains that Broadwell's GT3e is simply faster than AMD's current APUs, but at 2X the price (maybe a bit more than that due to motherboards).
That is the same review Chillin posted, and that's the entire point. He posted a review that runs CPU-only benches (even where the app supports GPU acceleration, which, by the way, is the lowest step toward HSA) as proof that HSA is a failure. Edit: Actually, he did not even do that. He linked OpenGL/DirectX rendering benches as proof that it has low compute power. (As if the Titan X never showed that rendering and compute capabilities do not automatically go hand in hand.)
data/avatar/default/avatar40.webp
That is the same review Chillin posted, and that's the entire point. He posted a review that runs CPU-only benches (even where the app supports GPU acceleration, which, by the way, is the lowest step toward HSA) as proof that HSA is a failure.
His page mostly shows off the iGPU in CAD programs. Other than AutoCAD 2D, everything else shows Broadwell multiple times faster than Haswell... Broadwell is only ~5% faster clock for clock than Haswell in IPC; the rest is gained via GT2 -> GT3e, hence it is measuring iGPU acceleration performance in workstation applications. I agree that the rest of the benchmarks you linked are CPU-only, but it doesn't change the fact that GT3e is faster and a better part for CAD applications. For encoding and other workloads, I suspect Broadwell will still be faster, but you are paying 2X for it, like I said before.

I haven't really seen any in-depth comparison between Broadwell's GT3e and Kaveri for compute, just in games and synthetics. Even if Broadwell is a bit slower in combined-compute "HSA" applications, in most cases it'll still be faster in pure graphics or pure CPU work. Hopefully Carrizo brings more. Previously, if AMD had only released a 6+ core APU, I'd have switched a lot of older desktops out for them at home. Now, Broadwell has become competitive...
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
His page mostly shows off the iGPU in CAD programs. Other than AutoCAD 2D, everything else shows Broadwell multiple times faster than Haswell... Broadwell is only ~5% faster clock for clock than Haswell in IPC; the rest is gained via GT2 -> GT3e, hence it is measuring iGPU acceleration performance in workstation applications. I agree that the rest of the benchmarks you linked are CPU-only, but it doesn't change the fact that GT3e is faster and a better part for CAD applications. For encoding and other workloads, I suspect Broadwell will still be faster, but you are paying 2X for it, like I said before.

I haven't really seen any in-depth comparison between Broadwell's GT3e and Kaveri for compute, just in games and synthetics. Even if Broadwell is a bit slower in combined-compute "HSA" applications, in most cases it'll still be faster in pure graphics or pure CPU work. Hopefully Carrizo brings more. Previously, if AMD had only released a 6+ core APU, I'd have switched a lot of older desktops out for them at home. Now, Broadwell has become competitive...
Isn't it a beautiful comparison of real-life usage for the targeted audience of those chips today? There is no denying that Intel brought a big improvement in rendering power. But does it translate to HSA? Is that the kind of workload that is done by the CPU today and would be offloaded to the iGPU by HSA means? Sorry to say: no. Those are classical rendering-pipeline tests.
data/avatar/default/avatar38.webp
Isn't it a beautiful comparison of real-life usage for the targeted audience of those chips today? There is no denying that Intel brought a big improvement in rendering power. But does it translate to HSA? Is that the kind of workload that is done by the CPU today and would be offloaded to the iGPU by HSA means? Sorry to say: no. Those are classical rendering-pipeline tests.
I think the main problem is that there just aren't that many HSA applications out there. Most workstation applications do have offloading, but it is only for specific things, like Photoshop's filters, video editors' filters and encoding/decoding, and rendering suites' ray tracing/certain rendering aspects. CPU performance is always important in this regard.

I use a number of Adobe applications (Photoshop, Illustrator mainly) and CAD programs (Solidworks and Autodesk suites mainly) daily. I also do some quick mock-ups and renders (Octane, Keyshot). However, the programs we use are all highly CPU-dependent, with only limited aspects GPU-accelerated (Keyshot is 100% CPU; Octane is CUDA-only for now).
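As a hypothetical illustration of why only those specific pieces offload, here is the shape of a Photoshop-style filter written as an OpenCL C kernel (my own sketch, not code from any app named above): per-pixel work with no cross-iteration dependencies, so thousands of work-items can run in parallel on an iGPU. Most of an application's remaining logic simply doesn't have this shape.

[code]
/* Illustrative 3x3 box blur on an 8-bit grayscale image. Edge pixels
 * are left untouched to keep the bounds handling simple. */
__kernel void box_blur(__global const uchar *src, __global uchar *dst,
                       int width, int height) {
    int x = get_global_id(0);
    int y = get_global_id(1);
    if (x < 1 || y < 1 || x >= width - 1 || y >= height - 1) return;

    int sum = 0;
    for (int dy = -1; dy <= 1; dy++)      /* every output pixel depends only */
        for (int dx = -1; dx <= 1; dx++)  /* on a fixed 3x3 input window */
            sum += src[(y + dy) * width + (x + dx)];
    dst[y * width + x] = (uchar)(sum / 9);
}
[/code]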
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
I think the main problem is that there just aren't that many HSA applications out there. Most workstation applications do have offloading, but it is only for specific things, like Photoshop's filters, video editors' filters and encoding/decoding, and rendering suites' ray tracing/certain rendering aspects. CPU performance is always important in this regard. I use a number of Adobe applications (Photoshop, Illustrator mainly) and CAD programs (Solidworks and Autodesk suites mainly) daily. I also do some quick mock-ups and renders (Octane, Keyshot). However, the programs we use are all highly CPU-dependent, with only limited aspects GPU-accelerated (Keyshot is 100% CPU; Octane is CUDA-only for now).
Right, that is because Carrizo is the first HSA 1.0 chip I know about. I was not happy that Kaveri was not the first one. Now developers may actually start developing on something real and see how it responds. My unhappiness came from the fact that I could not test it myself, and from the industry not having enough time to play with it before Zen launches. By the time Zen launches, HSA 1.0 chips will have been available to software developers for maybe 1.5 years at best (or worst, since that would mean a very late launch). If AMD had pushed it into Kaveri, it would have been 2.5 years for adoption. That is a big difference in a software development life cycle.
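For anyone wondering what developers would actually get to play with: the defining HSA trick is that the CPU and iGPU share one coherent address space, so data is never copied between them. Below is a hedged sketch of that zero-copy model expressed through OpenCL 2.0 shared virtual memory, the closest widely shipped API (a real HSA 1.0 runtime uses hsa_* calls instead); it assumes a device and driver with OpenCL 2.0 coarse-grained SVM support.

[code]
/* One allocation visible to both CPU and GPU: no clCreateBuffer,
 * no clEnqueueRead/WriteBuffer, no copy across a bus. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void inc(__global int *v) { v[get_global_id(0)] += 1; }";

int main(void) {
    cl_platform_id plat; clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "inc", NULL);

    /* Shared allocation; coarse-grained SVM needs map/unmap around host access. */
    int *shared = (int *)clSVMAlloc(ctx, CL_MEM_READ_WRITE, 256 * sizeof(int), 0);

    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, shared, 256 * sizeof(int), 0, NULL, NULL);
    for (int i = 0; i < 256; i++) shared[i] = i;   /* CPU writes the data... */
    clEnqueueSVMUnmap(q, shared, 0, NULL, NULL);

    clSetKernelArgSVMPointer(k, 0, shared);        /* ...GPU gets the same pointer... */
    size_t global = 256;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_READ, shared, 256 * sizeof(int), 0, NULL, NULL);
    printf("shared[10] = %d\n", shared[10]);       /* ...CPU reads 11 back in place. */
    clEnqueueSVMUnmap(q, shared, 0, NULL, NULL);

    clSVMFree(ctx, shared);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
[/code]

Compare this with the buffer-copy version earlier in the thread: same kernel launch, but the clCreateBuffer/clEnqueueReadBuffer round trip is gone, which is the whole selling point on an APU where CPU and GPU already sit on the same memory.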
https://forums.guru3d.com/data/avatars/m/224/224067.jpg
AMD denies rumor that it’s mulling breakup or spinoff. We say "reportedly," because AMD's own spokesperson Sarah Youngbauer told ExtremeTech the following: "While we normally would not comment on such a matter, we can confirm that we have no such project in the works at this time. We remain committed to the long-term strategy we laid out for the company in May at our Financial Analyst Day." http://www.extremetech.com/extreme/208659-reuters-claims-amd-mulling-breakup-or-spinoff-company-denies-rumors
https://forums.guru3d.com/data/avatars/m/115/115710.jpg
Isn't it a beautiful comparison of real-life usage for the targeted audience of those chips today? There is no denying that Intel brought a big improvement in rendering power. But does it translate to HSA? Is that the kind of workload that is done by the CPU today and would be offloaded to the iGPU by HSA means? Sorry to say: no. Those are classical rendering-pipeline tests.
At the moment HSA is completely irrelevant, and it will stay that way for quite some time. There's no real software that anyone actually uses that's HSA-compliant.
data/avatar/default/avatar01.webp
Here's the bottom line (sorry for triple posting), the semiconductor business is very unforgiving. AMD's board member Robert Palmer said himself: "Designing microprocessors is like playing Russian roulette. You put a gun to your head, pull the trigger, and find out four years later if you blew your brains out."
Yeah, Intel iGPUs have advanced a lot, but look at the CPU in that benchmark: if you have a top-tier i5 or an i7, you are not going to be playing games on the iGPU, whereas AMD's offerings are a lot more capable. With DX12 the overhead should be removed, plus being able to pair any iGPU with any dGPU will be killer. Then add that HBM will come to APUs and probably be used as system memory as well (this last part is made up, but it could be a possibility), plus the Zen cores, which on paper look amazing. As I said before, the ATI buyout was a good move done at the wrong time.
https://forums.guru3d.com/data/avatars/m/181/181448.jpg
They aren't going to spin off their GPU business; someone is just saying this to drive AMD share prices up/down before the launch of the new cards.
https://forums.guru3d.com/data/avatars/m/216/216349.jpg
Spinning off their GPU business would be one of the worst decisions they could make!!! They have to concentrate on releasing good products (CPUs and GPUs) at good prices and in a timely manner instead of with multiple delays... They also need to greatly improve their commercial and marketing departments, because they have some lovely products at good prices that people aren't buying because they don't know they exist!... C'mon AMD, stop jerking around and get serious, we need it!
https://forums.guru3d.com/data/avatars/m/254/254132.jpg
AMD needs to get rid of this driver stigma too. Most people I speak to about AMD know about the driver problems, and they're not even into the technical side of things... It's bad business. It's either driver problems or hot-running, power-guzzling CPUs. It's nowhere near as bad as the stigma suggests, but it still puts off customers. They need to go head to head with these lingering problems: talk about drivers and how they're upping their game, etc. Bring out new desktop CPUs and focus on heat and power consumption as well as performance increases. Even today we have driver delays: half a year between WHQL releases, even four months between betas a few drivers ago... If money is the problem, then something needs to happen internally or they will have to sell part of the company off.
data/avatar/default/avatar04.webp
What a recipe, eh? Freeze your main business by spending all your cash on buying the company, starve for 10 years while losing its value several times over, and then split the same company. "Having each company operate entirely independently makes no sense, since we've already discussed that it's what these two can do together that makes this acquisition so interesting." - Anand, 2006.
https://forums.guru3d.com/data/avatars/m/152/152580.jpg
"AMD Considering Spinning Off Its GPU Business" Another sensation crafted to counter the latest increase in share prices of AMD. Such actions can be seen quite often recently.
https://forums.guru3d.com/data/avatars/m/128/128096.jpg
AMD needs to get rid of this driver stigma too. Most people I speak to about AMD know about the driver problems, and they're not even into the technical side of things... It's bad business. It's either driver problems or hot-running, power-guzzling CPUs. It's nowhere near as bad as the stigma suggests, but it still puts off customers. They need to go head to head with these lingering problems: talk about drivers and how they're upping their game, etc. Bring out new desktop CPUs and focus on heat and power consumption as well as performance increases. Even today we have driver delays: half a year between WHQL releases, even four months between betas a few drivers ago... If money is the problem, then something needs to happen internally or they will have to sell part of the company off.
The WHQL issue is definitely hurting their mainstream appeal. Most people never manually update ANYTHING on their computer; that's why we have Windows Update, which does it automatically in the background. You would be shocked how many people don't even know they can update. AMD had better start submitting drivers to Microsoft for WHQL certification, or else they might as well give up hope of ever being reputable (especially with OEMs).