Tesla HW 4.0 Chips baked with TSMC at 7nm have a Monolithic design


Whilst I'd personally love a Tesla, I'm not really comfortable with an autopilot. I realize it is the future, but I get simple pleasure from just driving my own automobile.
I assume that all current Teslas will have to have these retrofitted to get full self-driving for people who paid for that functionality? That's quite a few cars, as Tesla is selling 100k per quarter and hopes to expand.
Andy Watson:

I assume that all current Teslas will have to have these retrofitted
My guess is it will be quite simple to swap in the new card, just like upgrading a GPU. A few screws, a few wires, done. Not everyone will do it; like @Maddness, most probably won't care and are okay with just having the safety features (lane keeping, automatic braking, collision avoidance). The overall cost for Tesla for the swap will be tiny compared to the massive profits that a fully autonomous army of Robo-Taxis can bring, assuming they manage to achieve Level 5 autonomy.
Is it really worth the struggle to make a new chip? Why not just create a custom AMD chip, since it would be a big win-win for both? Tesla would get its own Epyc/Threadripper, which is much more powerful than this, like the Chinese did, and AMD would... win.
D1stRU3T0R:

Is it really worth the struggle to make a new chip? Why not just create a custom AMD chip, since it would be a big win-win for both? Tesla would get its own Epyc/Threadripper, which is much more powerful than this, like the Chinese did, and AMD would... win.
Because an ASIC designed for Tesla's home-grown self-driving stack will easily outperform any GPGPU/CPU AMD could make, at a fraction of the cost per unit. There is a reason why every large company doing AI work is going vertical.
Denial:

Because an ASIC designed for Tesla's home-grown self-driving stack will easily outperform any GPGPU/CPU AMD could make, at a fraction of the cost per unit. There is a reason why every large company doing AI work is going vertical.
To add to this point: ASICs can also use a fraction of the wattage to accomplish the same task. You include only the bare minimum of transistors and clock speed needed to accomplish the goal. Considering we're talking about an electric car here, every watt counts.
Denial:

Because an ASIC designed for Tesla's home-grown self-driving stack will easily outperform any GPGPU/CPU AMD could make, at a fraction of the cost per unit. There is a reason why every large company doing AI work is going vertical.
Thanks for the ELI5 🙂
wavetrex:

My guess is it will be quite simple to swap in the new card, just like upgrading a GPU. A few screws, a few wires, done. Not everyone will do it; like @Maddness, most probably won't care and are okay with just having the safety features (lane keeping, automatic braking, collision avoidance). The overall cost for Tesla for the swap will be tiny compared to the massive profits that a fully autonomous army of Robo-Taxis can bring, assuming they manage to achieve Level 5 autonomy.
But isn't the current Autopilot sold with the argument that in the near future it will be fully autonomous? I doubt those who bought that option will be happy to learn that they need to buy the new card to get Level 5 autonomy.
People paid for the feature, not for the card. There's a difference... If Tesla says: "We need to replace the card with a better one FOR FREE, just bring the car in", I'm sure most who paid for the feature will be happy to do so. It's like getting a GTX 1080 upgraded to an RTX 3080 for free. Wouldn't you do it? 🙂
schmidtbag:

Considering we're talking about an electric car here, every watt counts.
I'm not convinced. A watt-hour is a tiny fraction of a typical Tesla's energy consumption, so adding a watt would increase it by something like 0.001%.
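(For the sake of the arithmetic, a minimal sketch in Python of where a figure like 0.001% comes from; the 75 kWh pack size is an assumption for illustration, not an official spec.)

```python
# Rough sanity check of the "one extra watt is negligible" argument.
# The 75 kWh pack capacity is an assumed figure for illustration, not an official spec.
PACK_CAPACITY_KWH = 75.0

extra_power_w = 1.0      # one additional watt of continuous draw
hours = 1.0              # over a single hour of operation

extra_energy_kwh = extra_power_w * hours / 1000.0   # 1 Wh = 0.001 kWh
share_of_pack = extra_energy_kwh / PACK_CAPACITY_KWH

print(f"Extra energy per hour: {extra_energy_kwh:.3f} kWh")
print(f"Share of a {PACK_CAPACITY_KWH:.0f} kWh pack: {share_of_pack * 100:.4f} %")
# -> about 0.0013 %, i.e. on the order of the 0.001 % mentioned above
```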
Noisiv:

I'm not convinced. A watt-hour is a tiny fraction of a typical Tesla's energy consumption, so adding a watt would increase it by something like 0.001%.
Remember, at least some of the computers are (or can be) running 24/7. Not all of the "idle" systems are super efficient, like the "sentinel mode", which records from multiple cameras and uses kinda basic forms of computer vision to identify movement patterns. I'm sure that's at least 30 W of continuous use just for that alone. If you didn't drive all weekend and left sentinel mode on the whole time, that would add up to around 1.4 kWh, which is about 2% of the car's total battery capacity (at least if you don't get the full-range models). Or in another perspective, roughly 8 km. That's a lot of distance for something that basically just runs in the background, and for a single service.

Meanwhile, consider the actual wattage of the hardware when the car is being used. Autopilot is relatively computationally expensive. I imagine it must easily exceed 100 W when you account for all of the cameras, sensors, algorithms, rendering the GUI, and the radios for navigation and communication. For a long enough road trip, all of that probably shaves tens of kilometers off your total distance, which is the difference between reaching your destination vs needing to make a stop at a charging station (and most people would rather not deal with the wait).
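(Putting rough numbers on the estimate above: a minimal sketch assuming a 75 kWh pack, ~180 Wh/km driving consumption, the 30 W sentinel draw mentioned, and a guessed 300 W combined driving-time load; all of these are assumptions, not official figures.)

```python
# Back-of-envelope estimate of what standby and driving-time compute cost in range.
# All constants below are assumptions for illustration, not official Tesla specifications.
PACK_CAPACITY_KWH = 75.0        # non-long-range pack, assumed
CONSUMPTION_WH_PER_KM = 180.0   # typical driving consumption, assumed

def compute_cost(power_w: float, hours: float) -> tuple[float, float, float]:
    """Energy used, share of the pack, and equivalent range for a constant draw."""
    energy_kwh = power_w * hours / 1000.0
    share_of_pack = energy_kwh / PACK_CAPACITY_KWH
    range_lost_km = energy_kwh * 1000.0 / CONSUMPTION_WH_PER_KM
    return energy_kwh, share_of_pack, range_lost_km

# Sentinel mode left on over a weekend (~48 h) at an assumed 30 W:
e, share, km = compute_cost(power_w=30.0, hours=48.0)
print(f"Sentinel, 48 h @ 30 W: {e:.2f} kWh, {share * 100:.1f}% of pack, ~{km:.0f} km of range")

# Driving-time compute, sensors, GUI and radios on a long trip,
# assuming a combined 300 W over 10 hours (purely a guess for illustration):
e, share, km = compute_cost(power_w=300.0, hours=10.0)
print(f"Road trip, 10 h @ 300 W: {e:.2f} kWh, {share * 100:.1f}% of pack, ~{km:.0f} km of range")
```

Under those assumptions the sentinel case reproduces the ~1.4 kWh, ~2% of pack and ~8 km figures quoted above.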
Thinking about this for a few days, it would actually fit with that rumor that Nvidia wanted to push their way into far more TSMC 7nm capacity than they actually got (and thus, according to the rumor, are fabbing most of their GPUs on Samsung 8nm). With Tesla's cash, they could easily buy their way into TSMC without the attitude, slimming down the capacity left for Nvidia, with AMD already having made their contracts ahead of time, iirc.
The thing to remember about the custom vs OOB chip argument is that up until Gen 3, Teslas were using OOB Nvidia Jetson chips. They moved away from them once they figured out what functionality was needed, and designed their own chips to meet those needs and discard what wasn't. It's not like they didn't put a lot of thought into the decision; they literally spent years working this out in actual vehicles on the roads before they made the decision to move to custom silicon.
illrigger:

The thing to remember about the custom vs OOB chip argument is that up until Gen 3, Teslas were using OOB Nvidia Jetson chips. They moved away from them once they figured out what functionality was needed, and designed their own chips to meet those needs and discard what wasn't. It's not like they didn't put a lot of thought into the decision; they literally spent years working this out in actual vehicles on the roads before they made the decision to move to custom silicon.
I doubt Tesla designed anything. Why would they need Broadcom at all if they themselves are designing chips?
Noisiv:

I doubt Tesla designed anything. Why would they need Broadcom at all if they themselves are designing chips?
Yeah, Jim Keller was just drinking coffee and playing darts with the rest of the team there!
Venix:

Yeah, Jim Keller was just drinking coffee and playing darts with the rest of the team there!
That would certainly explain why, two years after Keller left, Tesla still doesn't have anything to show, and why for the upcoming chip they chose to hire no less than a world-class semiconductor design company. The other possibility is that it's really Tesla's in-house chip, but they're calling it "jointly" developed just for the fun of it.