At CES 2026 today, Nvidia unveiled the Alpamayo family of open AI models, simulation tools, and datasets designed to accelerate the development of safe, reasoning‑based autonomous vehicles (AVs). The new offering addresses a number of challenges in current autonomy development.

As the company explains it, AVs must operate safely across a range of driving conditions, with rare and complex scenarios remaining some of the toughest challenges for autonomous systems to master. Traditional AV architectures separate perception and planning, which can limit scalability when new or unusual situations arise. End-to-end learning has made significant recent progress, but overcoming these long-tail edge cases requires models that can reason safely about cause and effect, especially when situations fall outside a model’s training experience.

“The ChatGPT moment for physical AI is here—when machines begin to understand, reason, and act in the real world,” said CES keynoter Jensen Huang, Founder and CEO of Nvidia. “Robotaxis are among the first to benefit. Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments, and explain their driving decisions. It’s the foundation for safe, scalable autonomy.”

The Alpamayo family introduces chain-of-thought, reasoning-based vision language action (VLA) models that bring humanlike thinking to AV decision-making. These systems can reason through novel or rare scenarios step by step, improving driving capability and explainability, both of which are critical to scaling trust and safety in intelligent vehicles. The models are underpinned by the Nvidia Halos safety system.

The new family integrates three foundational pillars—open models, simulation frameworks and datasets—into a cohesive, open ecosystem that any automotive developer or research team can build upon. Rather than running directly in the vehicle, the family centers on large-scale teacher models that developers can fine-tune and distill into the backbones of their complete AV stacks.

At CES, Nvidia is releasing Alpamayo 1, AlpaSim, and new physical AI open datasets.

The company says that Alpamayo 1 is the industry’s first chain-of-thought reasoning VLA model designed for the AV research community. Now available on Hugging Face, the 10-billion-parameter model takes video input and generates trajectories alongside reasoning traces that show the logic behind each decision.
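
The published interface may differ, but a reasoning VLA of this kind broadly maps a short video clip to a planned trajectory plus a natural-language reasoning trace. The sketch below only illustrates that input/output contract; the AlpamayoPipeline wrapper, the repo id, and the field names are assumptions, not Nvidia’s released API.

```python
# Illustrative sketch only: the wrapper class, repo id, and output fields are
# assumptions, not the published Alpamayo 1 interface.
import numpy as np

class AlpamayoPipeline:  # hypothetical wrapper around the released weights/scripts
    def __init__(self, model_id: str):
        self.model_id = model_id  # e.g. a Hugging Face repo id

    def __call__(self, video_frames: np.ndarray) -> dict:
        # A reasoning VLA consumes camera video and returns (a) a planned
        # trajectory of future ego poses and (b) a chain-of-thought trace
        # explaining the decision. Values below are placeholders.
        return {
            "trajectory": np.zeros((20, 3)),  # 20 future (x, y, heading) waypoints
            "reasoning": "placeholder chain-of-thought trace",
        }

pipe = AlpamayoPipeline("nvidia/alpamayo-1")       # hypothetical repo id
clip = np.zeros((8, 224, 224, 3), dtype=np.uint8)  # 8 camera frames
out = pipe(clip)
print(out["reasoning"])
print(out["trajectory"].shape)
```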

Developers can adapt Alpamayo 1 into smaller runtime models for vehicle development or use it as a foundation for AV development tools such as reasoning-based evaluators and auto-labeling systems. Alpamayo 1 provides open model weights and open-source inferencing scripts.
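
Distilling a large teacher of this kind into a smaller runtime model typically means training a compact student to imitate the teacher’s planned trajectories on logged clips. The PyTorch sketch below shows that general idea under assumed shapes and stand-in data; it is not Nvidia’s distillation recipe.

```python
# Minimal distillation sketch (assumed shapes and stand-in data, not Nvidia code):
# a compact student planner learns to reproduce the teacher's planned trajectories.
import torch
import torch.nn as nn

class TinyPlanner(nn.Module):
    """Hypothetical compact student mapping clip features to (x, y) waypoints."""
    def __init__(self, feat_dim=512, horizon=20):
        super().__init__()
        self.horizon = horizon
        self.head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                  nn.Linear(256, horizon * 2))

    def forward(self, feats):
        return self.head(feats).view(-1, self.horizon, 2)

# Dummy batches standing in for (clip features, teacher-generated trajectories).
loader = [(torch.randn(4, 512), torch.randn(4, 20, 2)) for _ in range(10)]

student = TinyPlanner()
optim = torch.optim.AdamW(student.parameters(), lr=1e-4)

for feats, teacher_traj in loader:
    pred = student(feats)
    loss = nn.functional.smooth_l1_loss(pred, teacher_traj)  # imitate the teacher's plan
    optim.zero_grad()
    loss.backward()
    optim.step()
```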

Future models in the family will feature larger parameter counts, more detailed reasoning capabilities, more input and output flexibility, and options for commercial usage.

Available on GitHub, AlpaSim is a fully open‑source, end-to-end simulation framework for high‑fidelity AV development. It provides realistic sensor modeling, configurable traffic dynamics, and scalable closed‑loop testing environments, enabling rapid validation and policy refinement.
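
AlpaSim’s actual API is documented in its GitHub repository; the sketch below only illustrates the closed-loop pattern such a framework supports, in which a policy’s actions change what it observes on the next step. The StubSim class and its method names are placeholders, not AlpaSim calls.

```python
# Generic closed-loop evaluation sketch; StubSim and its methods are placeholders.

class StubSim:
    """Stand-in for a simulator exposing reset/step, as a closed-loop harness needs."""
    def __init__(self, episode_len=50):
        self.episode_len, self.t = episode_len, 0

    def reset(self, scenario):
        self.t = 0
        return {"speed": 0.0}                     # placeholder observation

    def step(self, action):
        self.t += 1
        obs = {"speed": action.get("throttle", 0.0) * 10.0}
        info = {"collision": False}
        return obs, info, self.t >= self.episode_len


def run_closed_loop(sim, policy, scenario_id, max_steps=500):
    """Roll a driving policy out in simulation and collect simple safety metrics."""
    obs = sim.reset(scenario=scenario_id)
    metrics = {"collisions": 0, "steps": 0}
    for _ in range(max_steps):
        action = policy(obs)                      # the policy's action changes
        obs, info, done = sim.step(action)        # what it observes next step
        metrics["collisions"] += int(info["collision"])
        metrics["steps"] += 1
        if done:
            break
    return metrics


print(run_closed_loop(StubSim(), lambda obs: {"throttle": 0.3}, scenario_id="cut_in"))
```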

With the new physical AI open datasets, Nvidia says it is offering the most diverse large-scale open dataset for AVs on Hugging Face. The datasets contain more than 1,700 hours of driving data collected across a wide range of geographies and conditions, covering the rare and complex real-world edge cases essential for advancing reasoning architectures.
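
At that scale, streaming is usually preferable to downloading the full corpus up front. A minimal sketch using the Hugging Face datasets library is shown below; the repo id is a placeholder, and the actual field schema will depend on the released datasets.

```python
# Sketch of streaming a large driving dataset from Hugging Face.
# The repo id below is a placeholder, not the dataset's real name.
from datasets import load_dataset

ds = load_dataset("nvidia/physical-ai-av-dataset",   # hypothetical repo id
                  split="train", streaming=True)     # stream instead of downloading 1,700+ hours

for i, sample in enumerate(ds):
    print(sample.keys())   # fields depend on the actual dataset schema
    if i >= 2:
        break
```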

Together, the company says that these tools enable a self-reinforcing development loop for reasoning-based AV stacks.

Mobility leaders and industry experts, including Lucid, JLR, Uber, and Berkeley DeepDrive, are showing interest in using Alpamayo to develop reasoning-based AV stacks that enable Level 4 autonomy.

 

Drive Hyperion and sensor ecosystem expansion

At CES 2026, Nvidia announced that the global Drive Hyperion ecosystem is expanding to include Tier 1 suppliers, automotive integrators, and sensor partners, including Aeva, Aumovio, Astemo, Arbe, Bosch, Hesai, Magna, Omnivision, Quanta, Sony, and ZF. This builds on collaborations unveiled at Nvidia GTC Washington, D.C., to advance Level 4-ready autonomous passenger vehicles with Drive Hyperion, while applying the same platform to long‑haul freight to bring safe and secure full self-driving capabilities across commercial transport.

“Everything that moves will eventually become autonomous, and Drive Hyperion is the backbone that makes that transition possible,” said Ali Kani, Vice President of Automotive at Nvidia. “By unifying compute, sensors and safety into one open platform, we’re enabling our entire ecosystem, from automakers to the AV software ecosystem, to bring full autonomy to market faster, with the reliability and trust that mobility at scale demands.”

Nvidia is developing a unified ecosystem to give automotive customers the confidence that sensing systems and other hardware are fully compatible with Drive Hyperion, ensuring reliable performance and seamless integration while streamlining development, reducing testing time, and lowering overall costs.

Aumovio, Aeva, Arbe, Hesai, Omnivision, and Sony are also among the latest partners to qualify their sensor suites on the open, production‑ready Drive Hyperion architecture. This growing sensor ecosystem spans cameras, radar, lidar, and ultrasonic technologies that enable automakers and developers to build and validate perception systems optimized for Level 4 autonomy.

Nvidia says that, by building domain controllers or qualifying sensors and other technologies on Drive Hyperion, the company’s partners gain compatibility with its full‑stack AV compute platform, speeding development, simplifying integration, and accelerating time to market.

 

Mercedes-Benz CLA debuts Drive AV innovations

The AI company is bringing its Drive AV software with enhanced Level 2 point-to-point driver assistance capabilities to U.S. roads by the end of this year, starting with Mercedes-Benz. The OEM’s new CLA, the first vehicle featuring the MB.OS platform, introduces MB.Drive Assist Pro ADAS features powered by Nvidia’s full-stack Drive AV software, AI infrastructure, and accelerated compute.

This unified architecture’s advanced Level 2 automated driving capabilities bring expanded functionality, including point-to-point urban navigation through complex city environments, advanced active safety with proactive collision avoidance, and automated parking in tight spaces. In addition, it allows for cooperative steering between the system and the driver.

Nvidia Drive AV uses an AI end-to-end stack for core driving, alongside a parallel classical safety stack built on its Halos safety system that adds redundancy and safety guardrails. As a result, vehicles can learn from vast amounts of real and synthetic driving data to assist drivers in safely navigating complex environments and scenarios with humanlike decision-making.
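
At a high level, this kind of dual-stack design lets a deterministic checker veto or replace the learned planner’s output. The sketch below illustrates that arbitration pattern in simplified form; it is not the Halos implementation, and the fields and thresholds are assumptions.

```python
# Simplified arbitration pattern: a learned planner proposes a trajectory and a
# classical safety layer vetoes or replaces it. Illustrative only, not Halos itself.
from dataclasses import dataclass

@dataclass
class Trajectory:
    waypoints: list          # planned (x, y) points
    min_gap_m: float         # closest predicted distance to any road user


def rule_based_check(traj: Trajectory, min_safe_gap_m: float = 2.0) -> bool:
    """Deterministic guardrail: reject plans that get too close to other road users."""
    return traj.min_gap_m >= min_safe_gap_m


def select_plan(learned: Trajectory, fallback: Trajectory) -> Trajectory:
    """Use the learned plan when it passes the safety check, else the conservative fallback."""
    return learned if rule_based_check(learned) else fallback


learned_plan = Trajectory(waypoints=[(0, 0), (5, 0)], min_gap_m=1.2)   # too close, gets vetoed
safe_stop    = Trajectory(waypoints=[(0, 0), (1, 0)], min_gap_m=4.0)   # conservative fallback
print(select_plan(learned_plan, safe_stop))
```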

Humanlike urban driving is enabled by Nvidia’s end-to-end AI deep learning models. These models interpret traffic, allowing vehicles to navigate intelligently through lane selection, turns, and route-following in congested or unfamiliar areas. The models can better understand vulnerable road users (pedestrians, cyclists, scooter riders) and respond proactively—such as by yielding, nudging, or stopping—to prevent collisions. They also help drivers navigate safely from any address to any other address.