At its GTC AI conference in San Jose today, Nvidia announced an expanded collaboration with Hyundai Motor Co. and Kia Corp. to advance next-generation autonomous driving technologies built on the Nvidia Drive Hyperion autonomous vehicle development platform.

The collaboration pairs Hyundai Motor Group’s software-defined vehicle (SDV) capabilities, global vehicle fleet, and autonomous driving development expertise with Nvidia’s accelerated computing, AI infrastructure, and autonomous driving software, supporting the development of scalable, data-driven autonomous driving systems across the Group’s vehicle platforms.

As part of the expanded relationship, Hyundai Motor Group plans to integrate Nvidia’s autonomous driving technologies, enabling Level 2 and above systems across select vehicles to deliver enhanced safety and intelligent driving capabilities.

“The future of mobility will be built on AI and software,” said Rishi Dhall, Vice President of Automotive at Nvidia. “We’re combining Hyundai Motor Group’s leadership in vehicle engineering with Nvidia’s accelerated computing and AI to build safe, intelligent, Nvidia Drive-based autonomous driving systems—from advanced driver assistance in select production vehicles to scalable robotaxi services with Motional.”

Nvidia will also explore expanded collaboration with Hyundai Motor Group’s autonomous driving joint venture, Motional, to further advance Level 4 robotaxi capabilities and accelerate next-generation autonomous mobility services.

“The expanded partnership with Nvidia marks an important milestone in realizing Hyundai Motor Group’s vision for safe and reliable autonomous driving technology,” said Heung-Soo Kim, Executive Vice President and Head of Global Strategy Office of Hyundai Motor Group. “Based on a unified, Group‑wide collaborative framework, we will strengthen our differentiated technological competitiveness—from Level 2 and above autonomous driving technology to Level 4 robotaxi services.”

Last week, Motional announced that Uber riders in Las Vegas can now be matched with one of its all-electric Ioniq 5 robotaxis. Custom-designed for ride hailing, the Motional robotaxi is one of the first SAE Level 4-capable AVs to be certified under the U.S. Federal Motor Vehicle Safety Standards. Initially, Motional robotaxis will feature a vehicle operator monitoring the road ahead from behind the steering wheel. A fully driverless service—with no human operator in the vehicle—is expected to begin by the end of this year.

“Motional is ready to put our extensive ride-hail experience to work with Uber again,” said David Carroll, Vice President of Commercialization at Motional. “With our AI-first autonomous driving system, we’re able to seamlessly navigate hundreds of in-demand pick-up and drop-off locations where Uber riders want and expect to be able to go, whether that’s major hotel casinos on the Strip, shopping in Town Square, or exploring downtown Las Vegas.”

The new Nvidia collaboration will enable Hyundai Motor Group to develop a scalable autonomous driving stack built on Drive Hyperion, supporting capabilities that range from Level 2 ADAS to Level 4 autonomous driving.

By combining Hyundai Motor Group’s large-scale fleet data and SDV development capabilities with Nvidia’s AI computing platform, the companies aim to accelerate a continuous development cycle that includes large-scale, real-world driving data collection, AI model training and refinement, simulation, validation, and deployment across production vehicles.

Underpinning the Hyperion-based deployment is Alpamayo 1.5, the latest iteration of Nvidia’s portfolio of AI models, simulation frameworks, and datasets designed to enable Level 4 autonomous driving through reasoning-based, human-like judgment. With it, Nvidia brings chain-of-thought, vision-language-action capabilities to the AV stack, so Hyperion-based vehicles can better reason about and explain their decisions on the road.

Nvidia launched and open-sourced the new version at GTC; version 1.5 follows the 1.0 release introduced at CES 2026.

The latest version takes driving video, ego-motion history, navigation guidance, and natural language prompts as inputs, and outputs driving trajectories with clear reasoning traces. This lets developers steer behavior and specify constraints directly through navigation and text prompts. The portfolio now also includes post-training scripts so researchers and developers can adapt the models and kickstart their own workflows.
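
To make that input-output contract concrete, here is a minimal, runnable sketch of a vision-language-action driving interface in the style described above. Every class, method name, and tensor shape is an illustrative assumption, not the released Alpamayo API: a toy policy consumes camera frames, ego-motion history, a navigation hint, and a text prompt, then returns waypoints plus a readable reasoning trace.

```python
# Hypothetical sketch of a vision-language-action (VLA) driving interface.
# All names and shapes here are illustrative assumptions, not Alpamayo's API.
from dataclasses import dataclass
import numpy as np

@dataclass
class DrivingOutput:
    trajectory: np.ndarray   # (T, 2) future (x, y) waypoints in the ego frame, meters
    reasoning: str           # chain-of-thought style explanation of the plan

class ToyVLADriver:
    """Stand-in policy; a real model replaces this heuristic with learned inference."""

    def infer(self, frames: np.ndarray, ego_history: np.ndarray,
              route_hint: np.ndarray, prompt: str) -> DrivingOutput:
        # frames: (n_cams, t_obs, H, W, 3) video, unused by this toy policy.
        # ego_history: (t_obs, 2) past ego positions; route_hint: (2,) heading vector.
        velocity = ego_history[-1] - ego_history[-2]   # last observed motion step
        steps = np.arange(1, 11)[:, None]              # plan 10 steps ahead
        # A prompt mentioning "slow" scales speed down, standing in for
        # language-conditioned behavior; the route hint biases the heading.
        speed_scale = 0.5 if "slow" in prompt.lower() else 1.0
        direction = 0.7 * velocity + 0.3 * route_hint
        trajectory = ego_history[-1] + steps * direction * speed_scale
        reasoning = (f"Following route hint {route_hint.tolist()} at "
                     f"{speed_scale:.0%} of observed speed per prompt: '{prompt}'")
        return DrivingOutput(trajectory, reasoning)

# Usage: behavior is steered directly through navigation and text inputs.
driver = ToyVLADriver()
out = driver.infer(
    frames=np.zeros((3, 4, 8, 8, 3)),                 # 3 cameras, 4 frames each
    ego_history=np.array([[0.0, 0.0], [1.0, 0.0]]),   # moving along +x
    route_hint=np.array([0.9, 0.1]),                  # drift slightly toward +y
    prompt="Proceed slowly past the construction zone",
)
print(out.trajectory[:3], "\n", out.reasoning)
```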

With Alpamayo 1.5, developers can help vehicles learn more effectively from rare or unpredictable events, such as unusual road hazards and complex human behavior, by replaying scenarios, querying model decisions, and applying updated behavioral guidance through prompts and navigation settings.

The model also adds flexible multi-camera support and configurable camera parameters, simplifying reuse of the same AI driving stack across vehicle lines and sensor configurations while preserving compatibility with existing Alpamayo integrations.
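
As a rough illustration of what configurable camera parameters buy, the hypothetical sketch below declares two vehicle rigs as data and builds the same pipeline from either. The `CameraSpec` fields and rig values are assumptions for illustration, not Alpamayo’s actual configuration schema.

```python
# Illustrative sketch: one driving stack, multiple declared camera rigs.
# Field names and values are hypothetical, not Alpamayo's real schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class CameraSpec:
    name: str
    hfov_deg: float          # horizontal field of view, degrees
    resolution: tuple        # (width, height) in pixels
    mount_xyz_m: tuple       # position relative to the rear axle, meters

# Two hypothetical vehicle lines reusing the same downstream stack.
SEDAN_RIG = [
    CameraSpec("front_wide", 120.0, (1920, 1080), (1.8, 0.0, 1.4)),
    CameraSpec("front_tele",  30.0, (1920, 1080), (1.8, 0.0, 1.4)),
]
SUV_RIG = [
    CameraSpec("front_wide", 120.0, (2560, 1440), (2.1, 0.0, 1.7)),
    CameraSpec("left_rear",  100.0, (1920, 1080), (0.5, 0.9, 1.1)),
    CameraSpec("right_rear", 100.0, (1920, 1080), (0.5, -0.9, 1.1)),
]

def build_pipeline(rig):
    """The stack reads the rig description instead of hard-coding sensors."""
    return {cam.name: f"{cam.resolution[0]}x{cam.resolution[1]} @ {cam.hfov_deg} deg"
            for cam in rig}

print(build_pipeline(SEDAN_RIG))
print(build_pipeline(SUV_RIG))
```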