Rivian announced significant breakthroughs on December 11 at its first Autonomy & AI Day, held at its Palo Alto offices. From the hub of its autonomy and technology development teams, the company outlined its roadmap for next-generation vehicle autonomy, introduced an evolved software architecture underpinned by AI (artificial intelligence), and unveiled proprietary silicon to support the effort.

“We are in the midst of a technology inflection point,” said RJ Scaringe, Rivian’s Founder and CEO. “The way that we approach AI in the physical world has shifted dramatically, and the idea of not having fully capable artificial intelligence across every domain of our lives will be almost impossible to even imagine, and I think artificial intelligence will become as accessible as running water or electricity.”

AI is enabling Rivian to create technology and customer experiences at an accelerating rate.

“If we look forward three or four years into the future, the rate of change is an order of magnitude greater than what we’ve experienced in the last three or four years,” said Scaringe. “Directly controlling our network architecture and our software platforms in our vehicles has, of course, created an opportunity for us to deliver amazingly rich software. But perhaps even more importantly, this is the foundation of enabling AI across our vehicles and our business.”

Autonomous driving is one manifestation of physical AI, he said.

“I couldn’t be more excited for the work our teams are driving in autonomy and AI,” said Scaringe. “Our updated hardware platform, which includes our in-house 1600 sparse TOPS (trillion operations per second) inference chip, will enable us to achieve dramatic progress in self-driving to ultimately deliver on our goal of delivering L4. This represents an inflection point for the ownership experience, ultimately being able to give customers their time back…”

 

Shifting and accelerating approach to autonomy

A few years ago, it became clear to Rivian leaders that the approach to autonomy needed to shift, driven by innovations in transformer-based models and the design of large-parameter models. The company’s approach has moved from a classical rules-based approach to building a neural-network-based understanding of how to drive.

Recognizing the massive shift needed in its approach to autonomy, the company began a clean-sheet, AI-centric platform redesign in early 2022. The first production result was the Gen 2 R1 vehicles Rivian launched in mid-2024. With the update, the Gen 2 vehicles received 55 megapixels of cameras and five radars, and ran on an inference platform that was a 10x improvement over its Gen 1 vehicles.

“With the deployment of our Gen 2 R1s, we began the process of building our data flywheel to grow and build our large driving model,” said Scaringe. “Because this AI-centric approach represents a model trained end-to-end through the millions and millions of miles driven on our vehicles, enhancing the perception platform or improving the compute is accretive to the capabilities of the model—meaning, the model only continues to get better as the perception and as the compute platform improve.”

Rivian’s approach to self-driving is designed around the data flywheel concept, for which a deployed fleet has a carefully designed data policy that allows for the identification of important and interesting events that engineers can use to train their large model offline before distilling it back down into vehicles.
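The flywheel described above can be sketched as a simple loop: the fleet’s data policy flags interesting events, those events train the large model offline, and the result is distilled back into a deployable on-vehicle model. The sketch below is purely illustrative; all names and the toy “training” arithmetic are assumptions, not Rivian’s actual pipeline.

```python
# Hypothetical sketch of a data-flywheel iteration; names and the toy
# arithmetic are illustrative, not Rivian's real pipeline.
from dataclasses import dataclass


@dataclass
class Event:
    miles: float
    interesting: bool  # flagged by the on-vehicle data policy


def select_events(fleet_logs):
    """Data policy: keep only the interesting/edge-case events."""
    return [e for e in fleet_logs if e.interesting]


def train_large_model(model, events):
    """Offline training step: each curated event nudges the large model."""
    return {"params": model["params"] + len(events)}


def distill(large_model):
    """Compress the large model into one that fits on-vehicle compute."""
    return {"params": large_model["params"] // 2}


def flywheel_iteration(model, fleet_logs):
    events = select_events(fleet_logs)
    large = train_large_model(model, events)
    return distill(large)


logs = [Event(1.2, True), Event(0.5, False), Event(3.1, True)]
deployed = flywheel_iteration({"params": 10}, logs)
print(deployed)  # {'params': 6}
```

The key property of the loop is that each stage compounds: better sensors and compute improve the events collected, which improve the offline model, which improves the distilled on-vehicle model.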

Later this month, Rivian is issuing an over-the-air update to its Gen 2 R1 customers that will dramatically expand automated driving capability. The near-term software advancements add UHF (universal hands-free) functionality, bringing hands-free assisted driving for extended periods to significantly more places. Coverage expands from less than 150,000 mi (240,000 km) to more than 3.5 million mi (5.6 million km) of roads in North America, and the system can operate off-highway on roads with clearly painted lines.

Starting in 2026, Rivian will begin rolling out point-to-point capability, in which the vehicle can drive address-to-address, said Scaringe.

The Gen 3 hardware architecture launching in 2026 expands capabilities. Over the last few years, Rivian engineers have been developing the substantially enhanced platform. This will underpin a massive leap forward with R2 starting in late 2026.

The next major step beyond point-to-point will be eyes-off, meaning a user can navigate point-to-point with their hands off the wheel and, importantly, with their eyes off the road.

“This gives you your time back,” said Scaringe. “You can be on your phone or reading a book, no longer needing to be actively involved in the operation of the vehicle.”

Following eyes-off, the next major step will be personal SAE Level 4 automated driving, with the vehicle operating entirely on its own.

“While our initial focus will be on personally owned vehicles, which today represent a vast majority of the miles driven in the United States, this also enables us to pursue opportunities in the ride-share space,” he added.

 

Gen 3 autonomy for R2

At last year’s Investor Day, Rivian shared some of its vertical-integration journey with the company’s in-house-developed zonal E/E architecture. Its new autonomy hardware system is similarly vertically integrated, according to Vidya Rajagopalan, Senior Vice President of Electrical Hardware at Rivian.

Her team is responsible for vehicle electrical content ranging from the new in-house 5-nm silicon that operates below a volt to the electric-motor power electronics that operate at 400 V. She says the one common thread that runs across all these designs is a Rivian ethos of vertical integration.

“At Rivian, we have chosen to vertically integrate critical pieces of technology that allow us to differentiate ourselves over time,” she said.

The hardware enabling the Gen 3 autonomy system, coming late next year on the R2 vehicle platform, focuses on three main areas of leadership: sensors, compute, and overall product integration, according to Rajagopalan.

“At Rivian, we have a multimodal sensor strategy that provides a rich and diverse set of data for our AI models to operate.”

As on the R1 platform, the R2 platform has 11 cameras, but they now provide a total of 65 megapixels of data, 10 megapixels more than on the R1. The cameras provide a rich set of two-dimensional data to help “see” the world around the car.

Since cameras do not perform well under non-ideal conditions, like low light, excessive light, and fog, the R2, like the R1, also uses radar, with one front-facing imaging radar and four corner radars. The radars are able to see in darkness while also providing the depth and velocity of objects in their path.

The R2 corner radars are improved, supporting dual mode—short and long range—operation. In short-range mode, they have very high spatial resolution, which enables Rivian to remove the ultrasonic parking sensors from the R2.

 

R2 is first Rivian with lidar

In a Rivian first, the R2 will have a third sensor type—lidar. Unlike a camera, Rajagopalan says that “the other optical sensor” has an active light source, enabling it to see much better in the dark. Another advantage is that it can provide a three-dimensional view of the world.

“Camera is the main workhorse of our sensor suite, generating the bulk of the data fed to the models, but the radar and lidar are critical to addressing the edge cases, which would otherwise create the long-tail of problem cases,” she explained.

So, why did Rivian choose to introduce lidar now, when some other prominent OEMs are moving away from the sensor? The three main factors that make this the right moment to incorporate lidar are cost, resolution, and size, according to Rajagopalan.

“About 10 years ago, lidars used to cost in the tens of thousands of dollars,” she said. “Today, you can get a very good lidar for several hundreds of dollars. The resolution of lidars has similarly improved tremendously. Today’s automotive lidars have point-cloud densities of the order of 5 million points per second, which is about 25 times better than what we could get 10 years ago.”

Finally, today’s lidars are no longer the large mechanical spinning beasts of the past, she added. They are more compact and more easily integrated into a vehicle, with no “unsightly taxicab style bump” on the roof.

“Our studio and design teams work very closely with the supplier to shape the face of the lidar in such a way so that it blends in beautifully with the R2.”

 

In-house autonomy platform and compute

The company’s software-first approach to autonomy is powered by the RAP (Rivian Autonomy Platform) and an end-to-end data loop for training. With it, the company introduces its LDM (large driving model), a foundational autonomous model trained like an LLM (large language model). Using GRPO (group-relative policy optimization), the LDM will determine the best driving strategies from the “massive” vehicle datasets.
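The core idea of group-relative policy optimization is to score each sampled candidate against the average of its own group rather than against a separate learned value function. The sketch below shows only that group-relative advantage computation; the framing of scoring candidate driving maneuvers is an assumption for illustration, not a description of Rivian’s actual LDM training.

```python
# Minimal sketch of the group-relative advantage at the heart of GRPO.
# The "candidate maneuvers" framing is an illustrative assumption.
import statistics


def group_relative_advantages(rewards):
    """GRPO normalizes each sample's reward against its own group:
    advantage_i = (r_i - mean(group)) / std(group).
    Samples scoring above the group average get positive advantages,
    which upweight them in the subsequent policy-gradient update."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero std
    return [(r - mu) / sigma for r in rewards]


# Rewards for four candidate maneuvers sampled for the same scene
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Because the baseline is the group mean, no critic network is needed; the best-scoring candidates in each group are reinforced relative to their peers.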

The first iteration of the in-house inference platform includes a neural engine with 800 TOPS. It’s optimized to support camera-centric AI in the physical world and enables a dramatic expansion of Rivian’s autonomy capabilities. When integrated into the Gen 3 autonomy compute platform, it will deliver 1600 TOPS.

At the core of Rivian’s technology roadmap is the transition to in-house silicon designed specifically for vision-centric physical AI. The first-generation Rivian Autonomy Processor (RAP1) is built on TSMC’s 5-nm process and integrates processing and memory onto a single multi-chip module. This architecture delivers advanced levels of efficiency, performance, and ASIL (Automotive Safety Integrity Level) compliance.

Rivian chose to build in-house silicon ECUs for similar reasons—improved velocity, performance, and cost, according to Rajagopalan.

“With our in-house silicon development, we’re able to start our software development almost a year ahead of what we can do with supplier silicon,” she said. “Our hardware and software teams are co-located, and they’re able to develop at a rapid pace that’s just simply not possible with supplier silicon. All of this means we’re able to get to market sooner with the most cutting-edge AI product.”

Company engineers also better understand the application and vehicle architecture, and they are able to optimize the silicon for their use case and build in headroom for future models.

“By building purpose-built silicon, we do not carry the overhead that comes from leveraging a design that was built for some other task and repurposed for autonomous driving,” said Rajagopalan. “This enables us to get the best performance per dollar spent.”

The cost reductions from an in-house design come from optimizing for the vehicle use case and a meaningful reduction in supplier margins.

 

RAP1 and ACM3

RAP1 powers the company’s third-generation autonomy computer, the Autonomy Compute Module 3 (ACM3). Key specifications of the module include 1600 sparse INT8 TOPS, enabling processing of 5 billion pixels per second.
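A quick back-of-envelope check relates that throughput to the 65-megapixel camera suite described earlier. Assuming every pixel of the suite is processed each frame (an illustrative assumption, not a stated spec), 5 billion pixels per second works out to roughly 77 full-suite frames per second:

```python
# Back-of-envelope: full-suite frame rate implied by 5 billion pixels/s,
# assuming all 65 MP are processed every frame (illustrative assumption).
pixels_per_second = 5e9
suite_megapixels = 65
frames_per_second = pixels_per_second / (suite_megapixels * 1e6)
print(round(frames_per_second, 1))  # 76.9
```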

The RAP1 features RivLink, a low-latency interconnect technology allowing chips to be connected to multiply processing power, making it inherently extensible. It is enabled by an in-house developed AI compiler and platform software.

RAP1 leverages Arm compute design IP and is built on the company’s most advanced Armv9 architecture, according to Drew Henry, Executive Vice President of Arm’s Physical AI Business Unit, in a LinkedIn post. The Arm Cortex-A720AE CPU design helps power the autonomy experience: interpreting the environment around the vehicle, running the AI models that predict what will happen next, and choosing safe, reliable actions in milliseconds.

“Because Arm’s compute platform delivers high performance at lower power, Rivian can enable advanced autonomy without compromising range or efficiency—an essential advantage as intelligence moves closer to the edge in both vehicles and other autonomous machines,” wrote Henry. “All this gives Rivian not just the capabilities it needs today, but a reliable platform that can scale to future vehicle generations and other applications of physical AI. Rivian’s third-generation autonomy platform is a clear example of how deeply integrated AI is reshaping what vehicles and machines can do.”