Nvidia Delivers Xavier SoC in 2018

sneipen:
They need to release a GPU for us gamers. Getting tired of reading titles about new 10** series releases; wouldn't be too surprised if we get a 1080 Ti 5XT ULTRA MeGa titanrocker GAMER edition X Terra-clocked before something new comes... Wouldn't blame anyone if they got the impression it's a brand new GPU every time a new variant gets released... Hoping Volta will come before next summer. Or maybe something from the red team.
xIcarus:
sneipen:

They need to release a GPU for us gamers. Getting tired of reading titles about new 10** series releases; wouldn't be too surprised if we get a 1080 Ti 5XT ULTRA MeGa titanrocker GAMER edition X Terra-clocked before something new comes... Wouldn't blame anyone if they got the impression it's a brand new GPU every time a new variant gets released... Hoping Volta will come before next summer. Or maybe something from the red team.
I don't get it, how often do you expect to get a new generation of GPUs? It's only been a year since the bulk of the 10 series was released. You have some pretty unreasonable expectations.
Denial:
Kind of curious to see how this one gets adopted compared to the PX2. They open-sourced both the hardware and software for the platform: http://nvdla.org/
sneipen:
xIcarus:

I don't get it, how often do you expect to get a new generation of GPUs? It's only been a year since the bulk of the 10 series was released. You have some pretty unreasonable expectations.
You ask me what expectations I have, then decide for yourself what I expect. Hmm. Pascal isn't a brand new architecture; some say it's Maxwell on speed. Sure, there are changes, etc., but the jump from 28nm to 16nm is what's new. On some of the roadmaps Nvidia had, Pascal wasn't even there; Volta was supposed to take over. Volta has also been in production for some time now, so it's not like they're starting from scratch. It wouldn't be impossible that, if Vega beat the crap out of Pascal, we would see Volta-based GPUs this year or maybe early 2018. I could be wrong, but it's not impossible considering it's already in production; it's just expensive. The lack of competition has affected the GPU market for some time now, and the same went for the CPU market until AMD released their new CPUs. So, I would not say it's unreasonable to hope for Volta next year.
xIcarus:
sneipen:

So, I would not say it's unreasonable to hope for Volta next year.
It's not, but moaning when Pascal is only midway through its lifecycle is. But yeah, Pascal is a modified Maxwell; I don't think anybody doubts this. Regarding the roadmap, I'm pretty sure Pascal is what Volta would have been had there been a decent supply of HBM, so it wouldn't have changed much. The reason they changed the name is that they had already associated Volta with stacked DRAM.
I am always amused when reading that every Nvidia architecture, let's call it N, is nothing but (N-1) on steroids. If that were true, and if indeed N was basically yesteryear's tech, AMD should be able to surpass NV easily, no? Yet the gap between the innovator (AMD) and the caveman (NV) is only widening.... haha wtf
schmidtbag:
TOPs? Uh... does Nvidia understand who their target demographic is? Their potential customers aren't idiots - they're engineers, and they aren't going to be swayed by a marketing gimmick that nobody else uses. Even FLOPs (the original unit of processor performance) is a bit vague by today's standards, since performance varies greatly depending on your calculations. If Nvidia wanted to have impressive numbers in a way that actually has meaning, they'd mention the FP16 performance - something that the automotive industry is actually likely to use and understand. Xavier is already a good product, but the way Nvidia is marketing it makes it seem like they're trying to up-sell it, as though it isn't good enough vs the competition. Just doesn't make sense - they're shooting themselves in the foot.
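As a rough illustration of why a bare "TOPS" figure is so vague, here's a toy peak-throughput calculation at different precisions - the unit count, ops-per-clock and clock speed below are made up for the example, not Xavier's actual specs:

```python
# Toy illustration: the same hypothetical chip quotes very different headline
# numbers depending on what counts as an "operation". All figures are invented.

def peak_tera_ops(units, ops_per_unit_per_clock, clock_ghz):
    """Theoretical peak throughput in tera-operations per second."""
    return units * ops_per_unit_per_clock * clock_ghz * 1e9 / 1e12

UNITS = 512       # hypothetical execution units
CLOCK_GHZ = 1.3   # hypothetical clock speed

print("FP32 FMA (2 ops/clock):         %.2f TFLOPS" % peak_tera_ops(UNITS, 2, CLOCK_GHZ))
print("FP16 packed FMA (4 ops/clock):  %.2f TFLOPS" % peak_tera_ops(UNITS, 4, CLOCK_GHZ))
print("INT8 dot product (8 ops/clock): %.2f TOPS"   % peak_tera_ops(UNITS, 8, CLOCK_GHZ))
```

Same silicon, yet the headline number quadruples just by counting narrower operations - which is why stating the precision matters more than the figure itself.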
Evildead666:
I thought TOPS was for Tensor-Ops? Or am I getting it wrong? Also, I read today that Tesla is going with Intel for their next-gen infotainment system, so that's one less customer for Tegra chips.....
schmidtbag:
Evildead666:

I thought TOPS was for Tensor-Ops? Or am I getting it wrong?
Nope - seems to me it really is "trillion operations per second". If what you said were true, then I would take back my statement about it being a stupid unit. You may be confusing TPUs (tensor processing units) which can serve a similar purpose, but I haven't found anywhere that says TOPS is specific to tensors.
Also, I read today that Tesla is going with Intel for their next-gen infotainment system, so that's one less customer for Tegra chips.....
I was confused - for a moment I thought you were talking about Nvidia's Tesla series (Tesla is such an over-used name...). Seems a little weird they'd go for Intel, though. I would think their cars would be more dependent upon GPU than CPU.
Denial:
schmidtbag:

Nope - seems to me it really is "trillion operations per second". If what you said were true, then I would take back my statement about it being a stupid unit. You may be confusing TPUs (tensor processing units) which can serve a similar purpose, but I haven't found anywhere that says TOPS is specific to tensors. I was confused - for a moment I thought you were talking about Nvidia's Tesla series (Tesla is such an over-used name...). Seems a little weird they'd go for Intel, though. I would think their cars would be more dependent upon GPU than CPU.
TOPS is a measure of 8-bit integer (INT8) performance, which is typically used for inferencing in cars - not FP16, which is used for training. I think with the cars their focus is on cost by any means possible. They also seem to have shipped the Model 3 in a "void" as far as computing goes: the X1 is fairly old now, a Volta-based Tegra is probably still a year or more away, and AMD's Raven Ridge just got delayed but wouldn't have launched in time for Model 3 production regardless - so they're probably just using off-the-shelf Intel stuff to save $. As far as it being GPU heavy, idk - it's mostly just interface stuff; maps might be a little GPU heavy, but the bottleneck for the old system (which I think was Tegra 2 based) was definitely CPU. Web browsing, for example, is horrendously slow even in new Model S's regardless of whether your phone is tethered or not. Also, AFAIK no new Model S's are shipping with Intel - it's only the Model 3 as of right now. There was a thread about it on /r/tesla the other day with people who had just bought a Model S complaining that the interface on the 3 is like 10x faster than the S despite the massive cost difference.
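For anyone wondering what "INT8 inferencing" boils down to, here's a minimal sketch of the usual idea - trained FP32 weights get scaled into 8-bit integers so the heavy math can run on cheap integer hardware. This is a generic numpy illustration, not Nvidia's actual Drive/TensorRT pipeline:

```python
# Minimal sketch of post-training INT8 quantization (generic, illustrative).
import numpy as np

def quantize_int8(x):
    """Map an FP32 tensor onto signed 8-bit integers with a per-tensor scale."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate FP32 tensor from the INT8 values."""
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)   # stand-in for trained weights
q, scale = quantize_int8(weights)
print("max quantization error:", np.abs(weights - dequantize(q, scale)).max())
```

The error is small but not zero, which is why INT8 is generally considered good enough for inference while training sticks to wider formats.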
schmidtbag:
Denial:

As far as it being GPU heavy, idk - it's mostly just interface stuff; maps might be a little GPU heavy, but the bottleneck for the old system (which I think was Tegra 2 based) was definitely CPU. Web browsing, for example, is horrendously slow even in new Model S's regardless of whether your phone is tethered or not.
This is exactly why I brought up FP16 - to my knowledge, Tesla (among other brands) does a lot of visual processing (via cameras or IR matrices) for autopilot and other safety features. FP16 is a sweet spot: a good approximation at a good speed, without being as inaccurate as INT8. I'm aware Teslas also use a lot of other sensors (like ultrasonic), which would likely work just fine with INT8, but those don't really warrant the need for a GPU, either. Meanwhile, I don't think INT8 would help much in terms of web browsing or other issues regarding slow UI performance. I'm not dismissing your points or saying you're wrong (after all, I am not in the automotive industry); I'm just saying FP16 performance would give engineers a rough idea of what the performance is like for both basic and complex calculations, since they do need to do both.
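To put a rough number on that accuracy trade-off, here's a toy comparison of the same FP32 values stored as FP16 versus quantized to INT8 with a per-tensor scale - purely illustrative, not tied to any actual Tesla or Nvidia pipeline:

```python
# Toy accuracy comparison: FP16 storage vs. INT8 quantization of the same data.
import numpy as np

x = np.random.randn(100_000).astype(np.float32)

# FP16 keeps roughly 3 significant decimal digits, so the absolute error stays small.
fp16_err = np.abs(x - x.astype(np.float16).astype(np.float32)).max()

# INT8 spreads 255 levels across the whole value range, so error grows with the range.
scale = np.abs(x).max() / 127.0
q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
int8_err = np.abs(x - q.astype(np.float32) * scale).max()

print(f"FP16 max error: {fp16_err:.6f}")
print(f"INT8 max error: {int8_err:.6f}")   # typically an order of magnitude larger here
```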
Denial:
schmidtbag:

This is exactly why I brought up FP16 - to my knowledge, Tesla (among other brands) does a lot of visual processing (via cameras or IR matrices) for autopilot and other safety features. FP16 is a sweet spot: a good approximation at a good speed, without being as inaccurate as INT8. I'm aware Teslas also use a lot of other sensors (like ultrasonic), which would likely work just fine with INT8, but those don't really warrant the need for a GPU, either. Meanwhile, I don't think INT8 would help much in terms of web browsing or other issues regarding slow UI performance. I'm not dismissing your points or saying you're wrong (after all, I am not in the automotive industry); I'm just saying FP16 performance would give engineers a rough idea of what the performance is like for both basic and complex calculations, since they do need to do both.
Well, Tesla uses its own software implementation - so I don't know whether they favor FP16 or INT8 for inferencing. Nvidia's own car stack mostly utilizes INT8, which Volvo/Audi and the rest of Nvidia's partners are mostly using, so it would make sense for them to advertise the INT8 performance. Further, the PX2 doesn't support FP16 on its dGPUs, only on its iGPU - so I don't think Nvidia intends for FP16 to be utilized to the same degree as INT8. That obviously might change with Xavier, but with their focus with Volta on Tensor cores for increased INT8 performance, I don't think it will. As for the infotainment system, it's handled by a completely separate device in Tesla cars (historically a Tegra 2) - not the PX2 that runs the autopilot software. That being said, it would obviously be better if Nvidia just gave more information in general on the platform, including FP16 performance. I just don't necessarily think it needs to be their "main" advertising metric.
I agree, Denial. Of course, Nvidia isn't marketing to Tesla at all, since they know Tesla just signed up with AMD to deliver a semi-custom chip. This was a rah-rah for Nvidia's partners so they can prevent others from going to AMD for a semi-custom chip.