Nvidia’s GTC kicks off today with a keynote by Jensen Huang, Founder and CEO.
The annual event, which brings together AI experts and startups from many industries, is getting an increasing injection of automotive technology as vehicles migrate from mechanical/electrical to software-defined products that are safer, more customizable, and continually improve through OTA (over-the-air) updates. The company is positioning itself for what it describes as the next giant leap in compute innovation as vehicles become “data centers on wheels.”
Due to the COVID-19 pandemic, Nvidia has shifted GTC to an all-digital event this year. The physical event typically drew about 10,000 attendees, but the switch to digital has enabled more than 150,000 people to register. According to Danny Shapiro, Senior Director of Automotive at Nvidia, this year there are more than 1,500 sessions, about 150 of them on automotive subjects.
“Nvidia has a complete end-to-end platform, and this is not just technology for autonomous vehicles, [or] inside the vehicle itself, but really extends throughout the data center,” said Shapiro, summing up the automotive news coming out of GTC this year. “It involves the whole development pipeline from collecting the data.”
Nvidia is engineering its solutions to accelerate development, and training the AI (artificial intelligence) is a huge part of that, said Shapiro. Simulation plays a vital role in testing and validating, he added, and there is news on “how our Drive Sim software…is going to be leveraged, not just for pure simulation, but a lot more in terms of how we use physically accurate, timing accurate simulation for many different aspects of the automotive industry.”
Volvo, Zoox, and SAIC are among the growing ranks of leading transportation companies using the newest Drive solutions to power their next-generation AI-based autonomous vehicles. Nvidia’s design-win pipeline now totals more than $8 billion over the next six years, reflecting a growing range of next-generation cars, trucks, robotaxis, and new energy vehicles (NEVs).
Drive Atlan for ‘data centers on wheels’
Arguably the biggest announcement Nvidia revealed is its next-generation AI-enabled processor for AVs (autonomous vehicles), the Drive Atlan, which will deliver more than 1000 TOPS (trillion operations per second) and targets automakers’ 2025 models. The Atlan SoC (system on a chip) fuses AI and software with the latest in computing and networking for unprecedented levels of performance and security.
The new processor will feature Nvidia’s next-generation GPU architecture, new Arm CPU cores, as well as deep-learning and computer-vision accelerators. This data-center-like performance provides automakers ample compute capabilities to build software-defined vehicles that are richly programmable and perpetually upgradeable through secure, over-the-air updates.
“The transportation industry needs a computing platform that it can rely on for decades,” said Huang. “The software investment is too immense to repeat for each car. Nvidia Drive is the most advanced AI and AV computing platform, with rich global software and developer ecosystems—and architecturally compatible for generations. Today, we are announcing the next extension of our roadmap—our new Drive Atlan is truly a technical marvel, fusing all of Nvidia’s strengths in AI, auto, robotics, safety, and BlueField-secure data centers to deliver safe, autonomous-driving fleets.”
The BlueField data processing unit (DPU) adds a broad range of advanced networking, storage, and security services to support the complex compute and AI workloads found in AVs. BlueField offers data-center infrastructure-on-chip programmability, armed with a security enclave to prevent data breaches and cyberattacks. This enables Atlan to be designed from the ground up to handle the large number of AI applications that run simultaneously in autonomous machines, safely and securely.
Atlan extends the Drive family of SoCs for vehicles with 2025 production targets and beyond. It builds on a history that includes the company’s original processor for autonomous driving, the 30-TOPS Xavier, which is found in production cars and trucks now, and the 254-TOPS Orin, set to appear in automakers’ production timelines starting in 2022. Orin delivers its 254 TOPS with 21 billion transistors and integrates Nvidia’s Ampere GPU architecture and 12 Arm Cortex-A78 64-bit CPU cores, along with deep-learning and computer-vision accelerators.
Drive Orin coming in 2022
Volvo Cars is expanding its Nvidia collaboration by using the Orin SoC to power the autonomous driving computer in its next-generation models. The automaker aims to be the first with a global footprint to use the 254-TOPS Orin in its forthcoming SPA2 modular vehicle architecture, with the first application being the next-generation XC90 to be revealed next year.
“We believe in partnering with the world’s leading technology firms to build the best Volvos possible,” said Henrik Green, Chief Technology Officer, Volvo Cars. “With the help of Nvidia Drive Orin technology, we can take safety to the next level on our next generation of cars.”
The Orin-powered autonomous driving computer will be combined with software developed in-house by Volvo Cars and its Zenseact autonomous driving software development company. The added computing power and graphics processing delivered by Orin enable advanced sensor suites needed for autonomous driving such as the LiDAR technology developed by Luminar, another of Volvo Cars’ technology partners.
The SPA2 architecture will be hardware-ready for autonomous driving from production start. Its unsupervised autonomous driving feature called Highway Pilot will be activated when it is verified to be safe for individual geographic locations and conditions.
In 2018, Volvo Cars announced it would use Nvidia’s Xavier SoC technology for the core computer on cars based on SPA2. That computer will manage core functions such as energy management and driver assistance. It will work with the Orin computer, which is dedicated to compute-intense work such as vision and LiDAR processing and to delivering the high ASIL (Automotive Safety Integrity Level) rating required for autonomous driving.
Volvo Cars is centralizing the compute architecture in its next-generation cars to make them safer, more personal, and more sustainable while removing a lot of complexity. Rather than relying on multiple ECUs (electronic control units) around the car that control individual features and systems, much of the software will now be developed in-house and reside in a vehicle’s central computer. This enables more frequent improvements and growth in features via OTA updates.
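To make that contrast concrete, here is a minimal sketch of the centralized pattern the article describes; the class, feature, and version names are hypothetical illustrations, not Volvo’s or Zenseact’s actual software.

```python
# Conceptual contrast behind the centralized architecture described above.
# Hypothetical illustration only; not Volvo's or Zenseact's actual software.

class CentralComputer:
    """One compute node hosts many vehicle features as replaceable software modules."""
    def __init__(self):
        self.features = {}  # feature name -> (version, implementation)

    def install(self, name, version, impl):
        # An OTA update simply swaps a software module in place;
        # no per-feature ECU hardware needs to change.
        self.features[name] = (version, impl)

car = CentralComputer()
car.install("energy_management", "1.0", lambda state: state)
car.install("driver_assistance", "1.0", lambda state: state)
# Months later, an over-the-air update ships an improved driver-assistance module:
car.install("driver_assistance", "2.0", lambda state: state)
```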
Startup automaker Faraday Future will also deploy the Orin platform in its flagship FF 91 EV (electric vehicle), with plans to achieve advanced highway autonomous driving, parking, and summon features when it goes on sale in 2022.
“FF aims to deliver the latest and most advanced computing capabilities on the FF 91,” said Hong Rao, Vice President, IoV, Autonomous Driving & AI, at FF. “We’ve selected the Nvidia Drive Orin with its seamless upgrade path for our autonomous driving system.”
With this high-performance, energy-efficient compute capability, FF will target more advanced autonomous driving and parking features on its future FF 71 and FF 81 vehicles, which are expected to be available in 2023 and 2024, respectively, also powered by Orin.
The long-awaited FF 91 is back on track as FF prepares to merge with Property Solutions Acquisition Corp., a special-purpose acquisition company (SPAC) managed by Co-CEOs Jordan Vogel and Aaron Feldman. That merger is expected to close in the second quarter of 2021, with the combined company’s goal to launch the FF 91 within 12 months of the merger closing.
The vehicle is engineered with an industry-leading 783 kW of power for 0-60 mph (0-97 km/h) acceleration in less than 2.4 s. Its largest battery pack, at 130 kW·h, features submerged liquid-cooling technology. FF claims its first vehicle’s unique rear compartment will have the industry’s largest seat-reclining angle of 60° with its zero-gravity rear seats, along with a revolutionary user experience designed to create a luxurious living space using an intelligent internet system for high-speed connectivity.
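As a rough sanity check on those headline figures, the back-of-envelope arithmetic below converts the quoted power to horsepower and the quoted sprint time to an average acceleration; the conversions are standard physics, not FF data.

```latex
% Power: 1 hp ≈ 0.7457 kW
783\,\mathrm{kW} \div 0.7457\,\mathrm{kW/hp} \approx 1050\,\mathrm{hp}

% Average acceleration implied by 0–60 mph (26.8 m/s) in 2.4 s:
a = \frac{26.8\,\mathrm{m/s}}{2.4\,\mathrm{s}} \approx 11.2\,\mathrm{m/s^2} \approx 1.14\,g
```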
Streamlining ecosystem, development, and simulation
Nvidia said at GTC that it will open up its Drive Hyperion 8 autonomous vehicle reference platform to the AV ecosystem, with the eighth-generation platform becoming available to the Drive ecosystem later in 2021.
The company is doing this, it says, because next-generation vehicles will be packed with more technology than any computing system today and because companies are embracing the shift to more intelligent, software-defined vehicles. It believes that developing an AV, “essentially a data center on wheels,” requires an entirely new process. Both the hardware and software must be comprehensively tested and validated to ensure an AV can not only handle the real-time processing for autonomous driving but also withstand the harsh conditions of daily driving.
The eighth-generation Hyperion platform announced at GTC is said to be a fully operational, production-ready, and open autonomous vehicle platform that cuts down the massive amount of time and cost required to outfit vehicles with the technology required for AI features and autonomous driving. It includes the sensors, high-performance compute, and software necessary for AV development—all verified, calibrated, and synchronized right out of the box.
At its core, two Orin SoCs provide ample compute for Level 4 self-driving and intelligent cockpit capabilities. The SoCs can process data from a system of 12 exterior cameras, three interior cameras, nine radars, and one LiDAR in real time for safe autonomous operation. The platform includes the tools necessary to evaluate the Drive AV and IX software stack, as well as real-time record-and-capture capabilities for streamlined driving-data processing. The entire toolset is synchronized and calibrated precisely for 3D data collection, giving developers valuable time back in setting up and running autonomous vehicle test drives.
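To give a feel for the scale of that workload, the sketch below tallies the Hyperion sensor suite and estimates an aggregate raw data rate. The sensor counts come from the platform description above, but every per-sensor rate is an assumed, hypothetical figure, not an Nvidia specification.

```python
# Illustrative only: sensor counts match the Hyperion 8 description above,
# but every per-sensor data rate below is an assumed figure, not an Nvidia spec.
from dataclasses import dataclass

@dataclass
class SensorType:
    name: str
    count: int
    mb_per_sec_each: float  # assumed raw output per sensor, in MB/s

HYPERION8_SUITE = [
    SensorType("exterior_camera", 12, 360.0),  # e.g., ~2 MP at 30 fps, raw (assumed)
    SensorType("interior_camera", 3, 90.0),    # assumed lower-rate cabin cameras
    SensorType("radar", 9, 1.0),               # assumed object-list output
    SensorType("lidar", 1, 70.0),              # assumed dense point cloud
]

def total_throughput_mb_s(suite):
    """Aggregate raw data rate the two Orin SoCs must ingest in real time."""
    return sum(s.count * s.mb_per_sec_each for s in suite)

print(f"Assumed aggregate sensor throughput: {total_throughput_mb_s(HYPERION8_SUITE):.0f} MB/s")
# -> roughly 4.7 GB/s under these assumptions, which is why data-center-class
#    compute and precisely synchronized capture matter on the vehicle.
```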
With much of the industry leveraging Orin for in-vehicle compute, Nvidia says that Hyperion is the next step for fully autonomous vehicle development and validation since it provides everything needed to validate an intelligent vehicle’s hardware on the road. It’s already streamlining critical self-driving research and development at institutions such as the Virginia Tech Transportation Institute and Stanford University, which are leveraging the current generation in AV research pilots.
During his opening keynote at GTC, Huang also announced that the next generation of AV simulation, Drive Sim 2.0, is now powered by Omniverse. The new architecture is aimed at accelerating the development, validation, and deployment of autonomous machines.
Drive Sim enables high-fidelity simulation by tapping into Nvidia’s core technologies to deliver a powerful, cloud-based computing platform. It can generate datasets to train the vehicle’s perception system and provide a virtual proving ground to test the vehicle’s decision-making process while accounting for edge cases. The platform can be connected to the AV stack in software-in-the-loop or hardware-in-the-loop configurations to test the full driving experience.
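The software-in-the-loop idea can be illustrated with a minimal sketch: the simulator and the AV stack exchange synthetic sensor frames and control commands in lockstep. Every class and method name below is a hypothetical stand-in for illustration, not the Drive Sim API.

```python
# Minimal software-in-the-loop (SIL) sketch. Every class and method name here
# is a hypothetical stand-in for illustration; this is not the Drive Sim API.

class SimWorld:
    """Stands in for the simulator: renders synthetic sensor data each tick."""
    def __init__(self, scenario):
        self.scenario = scenario
        self.time_s = 0.0

    def render_sensors(self):
        # A real simulator would ray-trace camera/radar/lidar returns here.
        return {"camera": None, "radar": None, "lidar": None, "t": self.time_s}

    def apply_controls(self, controls, dt):
        # Advance vehicle and traffic physics by one fixed step.
        self.time_s += dt

class AVStack:
    """Stands in for the driving software under test."""
    def step(self, sensor_frame):
        # Perception -> planning -> control, as it would run on the vehicle.
        return {"steering": 0.0, "throttle": 0.1, "brake": 0.0}

def run_sil(scenario, duration_s, dt=0.05):
    world, stack = SimWorld(scenario), AVStack()
    while world.time_s < duration_s:
        frame = world.render_sensors()      # simulator output feeds the stack...
        controls = stack.step(frame)        # ...and the stack's commands feed back,
        world.apply_controls(controls, dt)  # closing the loop, one step at a time

run_sil("unprotected_left_turn", duration_s=10.0)
```

In a hardware-in-the-loop configuration, the AVStack stand-in would be replaced by the actual Drive compute hardware fed over the same interfaces, so the full production stack is exercised without a physical vehicle.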
Omniverse provides a platform that was designed from the ground up to support multi-GPU computing, incorporating a physically accurate, ray-tracing renderer based on Nvidia RTX technology. It includes Kit, a scalable and extensible simulation framework for building interactive 3D applications and microservices.
The platform schedules and manages all sensor and environment rendering functions to ensure repeatability without loss of accuracy. It does this across GPUs and nodes, giving Drive Sim the ability to handle detailed environments and test vehicles with complex sensor suites. It can run such workloads slower or faster than real time while generating repeatable results. Not only does the platform enable this flexibility and accuracy, but it does so in a way that’s scalable, so developers can run fleets of vehicles with various sensor suites at large scale and the highest levels of fidelity.
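The repeatability claim rests on a familiar simulation principle: if time advances by a fixed step and all randomness is seeded, results depend only on the step index, never on how fast each step was computed. A minimal sketch of that principle, assuming nothing about Omniverse internals:

```python
# Why fixed-timestep scheduling yields repeatable results at any execution
# speed. Hypothetical illustration; not Omniverse or Drive Sim code.
import random

def simulate(seed, n_steps, dt=0.01):
    rng = random.Random(seed)  # seeded randomness: same seed, same sequence
    state, trace = 0.0, []
    for step in range(n_steps):
        sim_time = step * dt   # simulated time advances by a fixed dt,
                               # never by reading the wall clock
        state += dt * (1.0 - state) + 0.01 * rng.gauss(0.0, 1.0)
        trace.append((sim_time, state))
    return trace

# Two runs with the same seed are bit-identical, whether they execute on one
# GPU in an hour or across many nodes in minutes: results depend only on the
# step index, not on how fast each step was computed.
assert simulate(42, 1000) == simulate(42, 1000)
```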
Drive Sim on Omniverse will be available to developers via an early-access program this summer.