Today, at its GTC Paris AI conference, held in conjunction with the Viva Technology startup/tech event, Nvidia revealed its latest physical and industrial AI news. In a media briefing just before the event, Ali Kani, Vice President of Automotive for Nvidia, highlighted the announcements ahead of CEO Jensen Huang’s keynote.

The big news was that Nvidia’s full-stack Drive AV software, which includes active safety, parking, and automated driving, is now in production, launching first in the Mercedes-Benz CLA.

The new model represents the first application of the new Mercedes-Benz Operating System (MB.OS) in series production. The OEM says the chip-to-cloud architecture enables vehicle software to be updated via a central Mercedes-Benz Intelligent Cloud server rather than through various individual hardware modules.

Production of that model at the OEM’s Rastatt plant is “drive-flexible,” with hybrid and fully electric vehicles rolling off the same line as vehicles with conventional drive systems. The ramp-up of CLA production there was preceded by the extensive remodeling of an existing assembly hall in record time. In preparation for production, the plant simulated the conversion of the hall virtually, acting as a pioneer of the Digital First approach in the Mercedes-Benz global production network.

Within just a few weeks, a new production line was configured and optimized in Hall 4.0 with the help of high-precision digital simulation techniques, allowing for considerable efficiency gains in terms of construction time and costs. Rastatt is serving as a blueprint for the global roll-out of MB.OS in all Mercedes-Benz vehicle plants.

The plant is also pioneering the use of AI-controlled process engineering in its top-coat paint booths. By monitoring relevant sub-processes using AI instead of conventional control systems, it was possible to reduce energy consumption by 20% and significantly shorten process ramp-up time.

The Mercedes-Benz news comes alongside other significant Nvidia partner launches in Europe.

In a few months, Volvo’s new ES90 electric sedan will hit the market, built on Nvidia tech, including dual AGX Orin computers running safety-certified Drive OS, with models trained on DGX in the cloud. JLR is scheduled to launch its next generation of cars, previewed by the Jaguar Type 00, next year using Nvidia’s full-stack Drive AV platform.


AI becomes physical

Physical AI is the next wave of AI, according to Kani.

“It understands the laws of physics and can generate actions based on sensor inputs,” he said. “Physical AI will embody three major types of robots that comprise industries totaling $50 trillion. It’s a huge opportunity for facilities like factories and warehouses of our European partners, transportation robots like autonomous vehicles, and robots like humanoids and other autonomous mobile robots.”

Nvidia has been preparing for the physical AI revolution in the automotive industry in Europe for the last 15 years with its Drive applications.

“It started with embedded, where auto OEMs began to value having high-performance computers in their cars to power infotainment systems,” said Kani. “Then came the growth of autonomous driving that turned into our next big investment into, as Jensen calls it, a zero-billion-dollar market.”

The company now has three computers and platforms for the segment—DGX and Nvidia AI, RTX Pro and Omniverse, and AGX and Drive AV—that power everything from automotive design and engineering to factory digital twins, robotics and AVs, and enterprise AI applications.

“Nvidia’s automotive business is expected to reach $5 billion this year, yet only 1% of the billion cars on the road are L2+ capable today,” said Kani. “Ultimately, we believe every car will be fully autonomous, making automotive a trillion-dollar opportunity for Nvidia.”

The largest segment of this opportunity is in the cloud, according to Kani.

“We partner with essentially all the auto and ecosystem OEMs to build their AI factories on DGX in the cloud,” he said. “Most automotive makers are building digital twins of their car factories on Omniverse.”

Nvidia is partnering with the automotive ecosystem to digitalize the workflow from factory automation to vehicle design, AV development and deployment, retail, and marketing. Drive is the company’s automotive development platform, and its DGX service in the cloud powers the AI factories where partners train their models. The company also helps its partners simulate and test their AI products using Omniverse and Cosmos. Once trained and tested in the cloud, Drive AV software runs inference on Drive AGX in-vehicle supercomputers.
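
To make the shape of that flow concrete, here is a minimal sketch of the cloud-to-car loop, assuming hypothetical function names (train_on_dgx, simulate_in_omniverse, deploy_to_agx) that stand in for the actual platform APIs, which Nvidia did not detail in the briefing:

```python
# Illustrative sketch only; these functions are hypothetical stand-ins for
# the three stages Kani describes, not real Nvidia APIs.

def train_on_dgx(real_drives, synthetic_drives):
    """Stage 1: train the driving model in a cloud AI factory (DGX)."""
    return {"version": "model-v1", "samples": len(real_drives) + len(synthetic_drives)}

def simulate_in_omniverse(model, scenarios):
    """Stage 2: test the trained model against simulated scenarios (Omniverse/Cosmos)."""
    results = {scenario: True for scenario in scenarios}  # placeholder: every scenario passes
    return all(results.values())

def deploy_to_agx(model):
    """Stage 3: ship the validated model for in-vehicle inference (Drive AGX)."""
    print(f"Deploying {model['version']} ({model['samples']} training samples) to the car")

model = train_on_dgx(real_drives=["log_001", "log_002"], synthetic_drives=["sim_001"])
if simulate_in_omniverse(model, scenarios=["fog", "rain", "night_merge"]):
    deploy_to_agx(model)
```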

“We’re the only company building tools and technologies to accelerate this cloud-to-car development flow,” said Kani. “Our mission is to help our partners learn from vast amounts of both real and synthetic driving behavior to safely navigate complex environments and scenarios with human-like decision-making.”


Truly full stack

Kani emphasized that Nvidia’s offering is truly a full-stack solution.

“The traditional way that an AV was built is that there’d be one [supplier] that would give you perception, and then typically you would have a Tier One do fusion,” he explained. “An OEM would do some of the fusion too and some of the planning and control, and they’d often pick a different Tier One to do this for parking. And often one would do L2+ software, and the other one would do active safety.”

Nvidia offers a single software stack that does parking, active safety, and L2+ or L3 driving.

“The scope of the product is full stack in the car, but then it’s also full stack because someone needs to train the models on that stack and someone needs to simulate and test those models in simulation. And Nvidia is also doing all of that,” he said.

However, the company also architected its stack to be modular.

“We have cases like Tesla, which is our largest automotive customer,” he explained. “They don’t use us in their car computer or the software, but we do help them in training. They chose to partner with us on the DGX computer.”

More broadly, OEMs are looking to simplify development, with most migrating to one platform for all autonomy-related applications. According to Kani, OEMs have traditionally built entry, mid-range, and high-end ADAS cars, in many cases with different software stacks.

“All the OEMs have realized it’s not really the right way to do something like self-driving,” he said. “You want all your cars to be safe and be able to drive from any place. Segmenting your fleet isn’t really a good strategy because, one, it’s not what customers want, but second, how do you validate three stacks at the same time? You find a problem and you have to fix it on three different platforms. It’s just too much investment for partners to make.”


New AI tools and simulation tech

At GTC Paris, Nvidia also announced new AI tools and simulation technologies to advance AV development. The company is releasing three new Cosmos foundation models that have been post-trained on AV data.

The Cosmos Drive Dreams model can generate varied lighting and weather conditions in multi-view videos, helping models perform better in tough conditions like fog and rain. The DiffusionRenderer model takes a single video and lets developers change scene lighting and edit materials, amplifying the variation needed to build safe AV software. Cosmos Predict-2 is a top-performing world foundation model that is faster and more scalable for creating high-quality multi-camera videos for the automotive market.

In addition to these models for AV developers, Nvidia is upgrading its Omniverse Blueprint for AV simulation with a new feature called NuRec (neural reconstruction), which generates an interactive 3D simulation from video captures. To ease development, the company announced a new Cosmos integration with CARLA, the world’s most widely used AV simulator, and released a new drop of synthetic and real driving video data to Nvidia’s open-source physical AI dataset, available now on Hugging Face.
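
For developers who want to poke at that data, a dataset published on Hugging Face can typically be pulled with the standard `datasets` library. The repository ID below is a placeholder assumption, not the confirmed name; check Nvidia’s Hugging Face page for the actual repository:

```python
# Minimal sketch using the Hugging Face `datasets` library (pip install datasets).
# The repository ID is a hypothetical placeholder, not the confirmed dataset name.
from itertools import islice
from datasets import load_dataset

ds = load_dataset("nvidia/physical-ai-av-data", split="train", streaming=True)
for sample in islice(ds, 3):  # peek at the first few records without a full download
    print(sample.keys())
```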

“Ultimately, these tools and blueprints will accelerate AV research and development, which helps our automotive ecosystem,” said Kani.


Halos end-to-end AV safety

At GTC Paris, Kani said that four new automotive companies—Bosch, Easyrain, Nuro, and Wayve—have now joined the AI Systems Inspection Lab, a key element of Nvidia’s Halos full-stack, chip-to-deployment safety system, to have their products assessed by Nvidia. They join members announced earlier this year, including Continental, Ficosa, Omnivision, Onsemi, and Sony Semiconductor Solutions.

Nvidia has been building Halos to safeguard AV stacks for nearly two decades. The system is open at every layer of the stack, so the company’s partners can integrate its tools and methodologies into their products and services any way they want.

The AI Systems Inspection Lab helps partners ensure their system integrations meet rigorous safety and cybersecurity standards through impartial assessments. It has been newly recognized by the world’s leading independent safety certification bodies, including ANAB (the ANSI National Accreditation Board), TÜV SÜD, Exida, CertX, and UL Solutions.

“Together, this solidifies Nvidia as the global leader in AV safety,” said Kani.

At GTC Paris, Nvidia also announced that it is extending Halos from AVs to robotics. Leading robotics companies like Boston Dynamics, Kion, and Advantech are joining the Halos AI Systems Inspection Lab to evaluate their levels of safety.

“This comes at the right moment as the world’s humanoid and autonomous industrial robot ecosystem is looking for solutions to ensure worker and human safety,” said Kani.


Meanwhile, at CVPR

Kani says that Nvidia’s AV development has greatly benefited from the company’s world-class research on next-generation foundation models.

Also this week, at the CVPR computer vision conference in Nashville, TN, Nvidia won the end-to-end autonomous driving challenge for the second consecutive year. This year’s winning model, Generalized Trajectory Scoring (GTRS), advances end-to-end planning by improving how autonomous vehicle software evaluates and selects driving trajectories. GTRS scores options based on safety, comfort, and rule compliance, making AV systems more robust and scalable in real-world conditions.

Notably, this year’s challenge introduced synthetic data and more unpredictable scenarios. GTRS addresses these by combining diffusion and transformer models to better generalize to new environments.
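
GTRS’s exact scoring mechanics weren’t detailed in the briefing, but the core idea of scoring candidate trajectories on multiple criteria and picking the best one can be sketched as follows. This is an illustration, not the actual GTRS; the weights and sub-scores here are invented:

```python
# Illustrative (not the actual GTRS) trajectory scorer: each candidate
# trajectory gets sub-scores for safety, comfort, and rule compliance,
# and the planner picks the highest weighted total. Weights are made up.
import numpy as np

def score_trajectory(traj, weights=(0.6, 0.2, 0.2)):
    """traj holds sub-scores in [0, 1], higher is better; returns weighted total."""
    subscores = np.array([traj["safety"], traj["comfort"], traj["rules"]])
    return float(np.dot(weights, subscores))

candidates = [
    {"name": "brake",  "safety": 0.95, "comfort": 0.60, "rules": 1.00},
    {"name": "swerve", "safety": 0.70, "comfort": 0.40, "rules": 0.80},
    {"name": "cruise", "safety": 0.50, "comfort": 0.90, "rules": 1.00},
]
best = max(candidates, key=score_trajectory)
print(best["name"])  # -> "brake"
```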