At its GTC global AI conference this week in San Jose, CA, Nvidia announced significant automotive advancements powered by AI (artificial intelligence) and accelerated computing.
The biggest news was an expanded partnership with General Motors, the largest U.S. automaker, to develop next-gen vehicles, factories, and robots using Nvidia’s AI, simulation, and accelerated compute platforms. Nvidia also unveiled Halos, a consolidated AI-powered safety system offering for AVs (autonomous vehicles) and future physical AI.
In addition, the AI powerhouse showcased its expanding automotive ecosystem of automakers, truck OEMs, and suppliers, along with new AI Enterprise software platform offerings and NIM microservices running on Drive AGX to enhance in-vehicle experiences with generative and agentic AI.
General Motors vehicle and factory AI
Nvidia’s expanded collaboration with General Motors on next-generation vehicles and manufacturing using AI, simulation, and accelerated computing makes GM the second major OEM recently to adopt Nvidia’s three automotive computers. It follows the CES 2025 announcement that Toyota, the global sales leader among automakers, would join Nvidia’s full-line customer base.
GM has been using Nvidia GPU platforms for training AI models across various areas including simulation and validation. The companies will now work to build custom AI systems using Nvidia accelerated compute platforms, including Omniverse with Cosmos, to train AI models for optimizing GM’s factory planning and robotics. GM will also use Nvidia Drive AGX for in-vehicle hardware for future advanced driver-assistance systems and in-cabin enhanced safety driving experiences.
“GM has enjoyed a longstanding partnership with Nvidia, leveraging its GPUs across our operations,” said Mary Barra, Chair and CEO of General Motors. “AI not only optimizes manufacturing processes and accelerates virtual testing but also helps us build smarter vehicles while empowering our workforce to focus on craftsmanship.”
GM will use the Omniverse platform to create digital twins of assembly lines, allowing for virtual testing and production simulations to reduce downtime. The effort will include training robotics platforms already in use for operations such as material handling and transport, along with precision welding, to increase manufacturing safety and efficiency.
The automaker will build next-generation vehicles on Drive AGX based on Nvidia’s Blackwell architecture running the safety-certified DriveOS operating system. Delivering up to 1000 TOPS (trillion operations per second) of compute, the in-vehicle computer can speed the development and deployment of safe AVs at scale.
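To put the 1000-TOPS figure in perspective, a rough back-of-envelope sketch shows the per-frame compute budget it implies for a multi-camera sensor rig. The camera count and frame rate below are illustrative assumptions, not Nvidia specifications:

```python
# Illustrative back-of-envelope: per-frame compute available from a
# 1000-TOPS in-vehicle computer across a multi-camera AV sensor rig.
# Camera count and frame rate are assumptions for illustration only.
TOPS = 1000                 # trillion operations per second
cameras = 8                 # assumed camera count
fps = 30                    # assumed frames per second per camera

ops_per_second = TOPS * 1e12
frames_per_second = cameras * fps
ops_per_frame = ops_per_second / frames_per_second

print(f"{ops_per_frame / 1e9:.0f} GOPs available per camera frame")
# → 4167 GOPs available per camera frame
```

Even with these conservative assumptions, each frame gets thousands of giga-operations of headroom, which is what makes running large perception models onboard feasible.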
“The era of physical AI is here, and together with GM, we’re transforming transportation, from vehicles to the factories where they’re made,” said Jensen Huang, Founder and CEO of Nvidia. “We are thrilled to partner with GM to build AI systems tailored to their vision, craft and know-how.”
During GTC, Nvidia hosted a fireside chat with GM to discuss the companies’ extended collaboration and delve into how AI is transforming automotive manufacturing and vehicle software development.
Full-stack AV safety system
Nvidia says that physical AI is accelerating the development of AVs, raising the stakes for ensuring the safety of passengers and pedestrians. That’s why the company announced at GTC a comprehensive system called Halos that brings together the company’s chips, software, tools, and services to help ensure the safe development of AVs from the cloud to the car, with a focus on AI-based, end-to-end software stacks.
“At the technology level, it spans platform, algorithmic, and ecosystem safety,” explained Ali Kani, VP and General Manager of Nvidia’s Automotive Platform. “At the development level, it includes design-time, deployment-time and validation-time guardrails. And at the computational level, it spans AI training to deployment using three Nvidia computers: DGX for AI training, Omniverse and Cosmos running on OVX for simulation, and Drive AGX for deployment.”
Serving as an entry point to Halos is the AI Systems Inspection Lab, which allows automakers and developers to verify the safe integration of their products with Nvidia technology. Announced at CES 2025 in January, it is the first global program to be accredited by the ANSI National Accreditation Board for an inspection plan integrating functional safety, cybersecurity, AI safety, and regulations into a unified safety framework, said Kani. Inaugural members of the lab include Ficosa, Omnivision, Onsemi, and Continental.
Key elements of Halos are related to platform, algorithmic, and ecosystem safety.
For platform safety, Halos features a system-on-a-chip (SoC) with hundreds of built-in safety mechanisms. It includes DriveOS software, a safety-certified operating system that extends from CPU to GPU; a safety-assessed base platform that delivers the foundational computer needed to enable safe systems for all types of applications; and Drive AGX Hyperion, a hardware platform that connects SoC, DriveOS, and sensors in an electronic control unit architecture.
Algorithmic safety for Halos includes libraries for data loading and accelerators as well as application programming interfaces for data creation, curation, and reconstruction to filter out, for example, undesirable behaviors and biases before training. Halos features training, simulation, and validation environments harnessing the Omniverse Blueprint for AV simulation with Cosmos world foundation models to train, test, and validate AVs. Its AV stack combines modular components with end-to-end AI models to ensure safety with AI models in the loop.
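The data-curation step described above can be pictured as a filtering pass over driving clips before they reach training. The sketch below is hypothetical; the field names, thresholds, and clip records are invented for illustration and are not Nvidia APIs:

```python
# Hypothetical sketch of pre-training data curation: filtering
# undesirable driving behaviors out of a dataset before model
# training. Field names and thresholds are invented for illustration.

clips = [
    {"id": "c1", "max_speed_kph": 110, "hard_brake": False},
    {"id": "c2", "max_speed_kph": 165, "hard_brake": False},  # speeding
    {"id": "c3", "max_speed_kph": 95,  "hard_brake": True},   # harsh event
]

def is_desirable(clip, speed_limit_kph=130):
    """Keep only clips free of speeding and harsh braking events."""
    return clip["max_speed_kph"] <= speed_limit_kph and not clip["hard_brake"]

train_set = [c for c in clips if is_desirable(c)]
print([c["id"] for c in train_set])  # → ['c1']
```

A production pipeline would apply far richer criteria (sensor quality, scenario diversity, bias checks), but the principle is the same: undesirable behaviors are excluded before the model ever sees them.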
Ecosystem safety covers safety datasets with diverse data; safe deployment workflows, comprising triaging workflows and automated safety evaluations; and a data flywheel for continual safety improvements, alongside Nvidia’s leadership in AV-safety standardization and regulation.
Nvidia says that Halos brings together its vast body of safety-focused technology research, development, deployment, and collaborations, including:
- 15,000+ engineering years invested in vehicle safety
- 10,000+ hours of contributions to international standards committees
- 1000+ AV-safety patents filed
- 240+ AV-safety research papers published
- 30+ safety and cybersecurity certificates
It is strengthened by recent significant safety certifications and assessments of Nvidia automotive products. The DriveOS 6.0 operating system conforms to ISO 26262 ASIL-D (automotive safety integrity level D) standards. TÜV SÜD granted ISO/SAE 21434 cybersecurity process certification to Nvidia’s automotive SoC, platform, and software engineering processes. And TÜV Rheinland performed an independent safety assessment of the company’s Drive AV software against United Nations Economic Commission for Europe safety requirements for complex electronic systems.
Expanding automotive ecosystem
As usual, GTC is serving as a showcase for Nvidia’s latest collaborations spanning the automotive industry. Adding to the GM announcement on the passenger car front was Volvo Cars, which is using the Drive AGX in-vehicle computer in its next-generation electric vehicles; its subsidiary Zenseact is using Nvidia’s DGX platform to analyze and contextualize sensor data, unlocking new insights and training future safety models. Lenovo has teamed with robotics company Nuro to create an end-to-end system for SAE Level 4 AVs built on Drive AGX in-vehicle compute.
On the trucking side, Nvidia’s AI-driven technologies are helping to address pressing challenges such as driver shortages, rising e-commerce demands, and high operational costs. Gatik is integrating Drive AGX for the onboard AI processing needed by its Isuzu Class 6 and 7 trucks, which offer driverless middle-mile delivery of a wide range of goods for customers including Tyson Foods, Kroger, and Loblaw. Uber Freight is also adopting Drive AGX as the AI computing backbone to help carrier fleet customers enhance efficiency and save costs. Torc Robotics is developing a scalable, physical AI compute system for autonomous trucks using Drive AGX in-vehicle compute and the DriveOS operating system, combined with Flex’s Jupiter platform and manufacturing capabilities, to support scaled market entry in 2027.
Earlier this year, Nvidia announced the Omniverse Blueprint, a reference workflow for creating rich 3D worlds for AV training, testing, and validation. The blueprint is expanding to include Nvidia Cosmos world foundation models to amplify photoreal data variation. Cosmos is already being adopted; Plus, for example, is embedding Cosmos physical AI models into its SuperDrive technology to accelerate the development of its Level 4 self-driving trucks.
On the supplier front, Foretellix is extending its integration of the blueprint using the Cosmos Transfer WFM to add conditions like weather and lighting to its sensor simulation scenarios to achieve greater situation diversity. Mcity is integrating the blueprint into the digital twin of its AV testing facility to enable physics-based modeling of sensor data.
CARLA, which offers an open-source AV simulator, has integrated the blueprint to deliver high-fidelity sensor simulation. Capgemini will be the first to use CARLA’s Omniverse integration for enhanced sensor simulation in its AV development platform.
Nvidia is using Nexar’s edge-case data to train and fine-tune Cosmos’ simulation capabilities. Nexar is tapping into Cosmos, neural infrastructure models, and the DGX Cloud platform to boost its AI development, refining AV training, high-definition mapping, and predictive modeling.
Magna is using Nvidia’s Blackwell architecture-based Drive AGX Thor platform for integration in automakers’ vehicle roadmaps, delivering active safety and comfort functions along with interior cabin AI experiences.
Agentic AIs and NIMs
Mobility companies can take advantage of Nvidia’s growing AI Enterprise software platform running on Drive AGX to enhance in-vehicle experiences with generative and agentic AI.
At GTC, Cerence AI is showcasing xUI, its new LLM-based AI assistant platform designed to advance the next generation of agentic in-vehicle user experiences. The hybrid platform runs both in the cloud and onboard the vehicle, optimized first for Drive AGX Orin. xUI is built on Cerence’s CaLLM family of open-source foundation models, fine-tuned on its automotive dataset, with inference performance bolstered by Nvidia’s TensorRT-LLM library and NeMo framework.
SoundHound is demonstrating its next-generation in-vehicle voice assistant, which uses generative AI at the edge with Nvidia Drive AGX, enhancing the in-car experience by bringing cloud-based LLM intelligence directly to vehicles.
At GTC, Nvidia also released new Nvidia NIM microservices for automotive designed to accelerate the development and deployment of end-to-end stacks from cloud to car. For in-vehicle applications, they include BEVFormer, a transformer-based model that fuses multi-frame camera data into a unified bird’s-eye-view representation for 3D perception, and SparseDrive, an end-to-end autonomous driving model that performs motion prediction and planning simultaneously, outputting a safe planning trajectory.
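The bird’s-eye-view fusion that BEVFormer performs can be sketched in simplified form: per-camera feature maps are projected into a shared top-down grid. The real model uses learned transformer attention over multi-frame features; the toy version below just averages overlapping camera projections, with all sizes and data invented for illustration:

```python
import numpy as np

# Toy sketch of bird's-eye-view (BEV) fusion in the spirit of
# BEVFormer: per-camera feature maps are combined into a shared
# top-down grid. Real BEVFormer uses learned spatiotemporal
# attention; here we simply average overlapping projections.

rng = np.random.default_rng(0)
num_cameras, feat_dim, grid = 6, 16, 50       # assumed sizes

# Per-camera features, assumed already lifted to BEV grid coordinates.
cam_feats = rng.standard_normal((num_cameras, grid, grid, feat_dim))
# Binary visibility mask: which BEV cells each camera observes.
visible = rng.random((num_cameras, grid, grid)) > 0.5

# Fuse: average features over the cameras that see each cell.
weights = visible[..., None].astype(float)
counts = np.maximum(weights.sum(axis=0), 1.0)  # avoid divide-by-zero
bev = (cam_feats * weights).sum(axis=0) / counts

print(bev.shape)  # → (50, 50, 16)
```

The resulting unified grid is what downstream 3D detection and planning heads consume, which is why BEV representations have become a common interface in AV perception stacks.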
For automotive enterprise applications, Nvidia offers a variety of models including NV-CLIP, a multimodal transformer model that generates embeddings from images and text, and Cosmos Nemotron, a vision language model that queries and summarizes images and videos for multimodal understanding and AI-powered perception.
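The typical usage pattern for a CLIP-style multimodal model such as NV-CLIP can be sketched briefly: images and text are embedded into a shared vector space, and cosine similarity ranks the matches. The embeddings below are random stand-ins; a real deployment would obtain them from the model endpoint:

```python
import numpy as np

# Minimal sketch of CLIP-style multimodal retrieval: images and text
# are embedded into one vector space, and cosine similarity picks the
# best match. Embeddings here are random placeholders, not real
# NV-CLIP outputs.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
dim = 512                                     # assumed embedding size
image_emb = rng.standard_normal(dim)          # stand-in image embedding
captions = {
    "a car on a highway": rng.standard_normal(dim),
    "a factory robot arm": rng.standard_normal(dim),
}

# Pick the caption whose embedding is closest to the image embedding.
best = max(captions, key=lambda c: cosine(image_emb, captions[c]))
print(best)
```

In enterprise settings this pattern underpins tasks like searching fleet imagery by text query or auto-tagging factory camera footage.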
- Nvidia CEO Jensen Huang announces an expanded GM collaboration at GTC.
- Nvidia is helping GM with manufacturing AI.
- Halos brings together Nvidia’s hardware and software for AV safety.
- Nvidia’s Halos AV safety system.
- Gatik is adopting Nvidia Drive AGX.
- Torc is developing a physical AI compute system with Nvidia.
- Uber Freight is adopting Nvidia Drive AGX.
- Magna ADAS tech enabled by Nvidia Drive Thor.