Even as the Terafab project launches, Musk says Tesla and SpaceX AI will continue purchasing Nvidia chips at scale for AI training workloads. By Stewart Burnett
Elon Musk confirmed in an 18 March social media post that Tesla and SpaceX AI—the entity formed following SpaceX’s acquisition of xAI last month—will continue ordering Nvidia chips at scale, even as Tesla’s Terafab in-house chip project purportedly prepares to launch later this week. In a separate post, Musk clarified that Tesla’s AI5 chip is primarily optimised for edge compute in Optimus and the Cybercab rather than the large-scale data centre training workloads for which Nvidia’s hardware remains essential.
The distinction matters, given the timing of Musk’s comments. Nvidia’s annual GTC conference ran from 16-19 March, during which Chief Executive Jensen Huang projected US$1tr in AI infrastructure demand between 2025 and 2027 and unveiled the company’s Alpamayo autonomous driving platform—a full-stack solution designed to give any automaker access to unsupervised autonomy capabilities. BYD, Hyundai, Nissan, and Uber were among those announcing adoption of Nvidia’s Drive platform, with Uber committing to a Drive-enabled robotaxi fleet across 28 global markets by 2028. Tesla, by contrast, had no presence at the event.
Musk’s framing of the Terafab as an edge inference play rather than a training compute replacement is his most explicit clarification yet of what the project is actually intended to do. Tesla’s AI5 chip, targeted for mass production in mid-2027 via TSMC and Samsung, is claimed to offer ten times the compute of the current-generation AI4 chip.
Tesla and Musk himself have repeatedly framed the chip as central to the company’s autonomous driving and robotics ambitions. The Terafab is positioned as eventually bringing that production in-house, but Nvidia remains the only credible supplier of the GPU clusters required to train the models that those chips will run.
To be sure, the Nvidia commitment and the Terafab announcement are not contradictory, but they do expose the gap between Tesla’s long-term vertical integration ambitions and its near-term dependency on a supply chain it is trying to escape. Nvidia is simultaneously Tesla’s most important compute partner and the company most actively arming its competitors with arguably superior autonomy tools.
Musk has also confirmed that Tesla expects a wide release of an update to its Full Self-Driving (Supervised) software within the next few weeks. The Austin robotaxi pilot, running since July 2025, remains the sole active deployment of Tesla’s robotaxi service—despite outlandish forward guidance suggesting the service would cover half the US population by end-2025. Data from National Highway Traffic Safety Administration filings indicates that the service experiences crash rates up to nine times higher than those of human drivers.
Musk’s comments also represent the first public reference to the combined SpaceX-xAI entity as SpaceX AI, effectively locking in the branding following the February 2026 merger. The deal, which valued the combined entity at US$1.25tr, was preceded by Tesla’s board approving a US$2bn xAI investment—a decision that has since attracted shareholder litigation alleging breach of fiduciary duty, particularly after Musk acknowledged weeks later that xAI had not been “built right first time around” and dismissed several of its co-founders.