Tensor and Arm today announced a multi-year strategic collaboration to deliver the foundational compute architecture behind the world’s first agentic AI (artificial intelligence) personal robocar. The new automaker is leveraging the Arm compute platform, which unifies hardware, software, and ecosystem enablement, to power physical AI workloads across its entire EV (electric vehicle), simply called Robocar, which it says is the most advanced personal AV (autonomous vehicle) developed for mass production.

“Delivering personal autonomous vehicles at scale requires more than breakthrough AI and autonomy; it demands advanced engineering for safety, redundancy, reliability, and power efficiency,” said Dr. Jewel Li, COO at Tensor. “Our collaboration with Arm leverages their deep, decades-long expertise and leadership in AI-capable compute, which, alongside our broader ecosystem of strategic partners, ensures that the Tensor Robocar moves seamlessly from advanced technology to real-world roads safely and reliably.”

Tensor has integrated more than 400 safety-capable, power-efficient Arm-based cores in each vehicle, the highest concentration of Arm technology in any consumer vehicle today. As it scales its personal Robocar fleet for global commercialization in 2026, the companies will work together to enable SAE Level 4 autonomous capabilities supported by the Arm software ecosystem of more than 22 million developers.

AI is shifting to the physical world, where intelligence is embodied in machines, defining a new class of autonomous systems that sense, decide, and act under uncompromising power, safety, and reliability constraints. Tensor is defining its vehicle around this intelligence rather than adapting autonomy onto legacy platforms.

“Autonomous vehicles are a leading example of how AI is shifting to the physical world, requiring world-class, high-performance, safe and power-efficient compute foundations to transform the future of mobility in meaningful and tangible ways,” said Drew Henry, EVP of Physical AI Business Unit at Arm. “Combined with a deeply established software ecosystem that delivers critical toolchains, safety certification, and industry standards, Arm provides the foundation for pioneering physical AI innovation. Tensor’s Robocar is a standout example of that innovation in action, pairing a clear vision with the engineering rigor needed to bring autonomy to market at scale.”

The Robocar is powered by what is said to be the world’s most vertically integrated Level 4 autonomy stack and a sensor suite of 37 cameras, 5 lidars, 11 radars, 22 microphones, 10 ultrasonic sensors, 3 IMUs, a GNSS, 16 collision detectors, 8 water-level detectors, 4 tire-pressure sensors, a smoke detector, and triple-channel 5G connectivity. As Arm explains, this depth of hardware and software integration represents a significant advancement in automotive sensing, enabling continuous perception, environmental awareness, and resilient system performance across diverse operating conditions.

To make this possible, Tensor is using the Arm compute platform to distribute safety-capable intelligence across the vehicle—from the onboard supercomputer to the sensors—to allow the Robocar to safely perceive and navigate its environment through its foundation models. Each Robocar incorporates 433 Arm-based cores, including Neoverse AE for high-throughput AI processing; Cortex-X for agentic AI cabin and peak performance system control; Cortex-A for drive-by-wire, lidars, redundancy, and general compute; Cortex-R for real-time safety-critical systems; and Cortex-M for low-power subsystem management.

The Arm compute platform enables Tensor to deploy AI workloads across diverse compute domains while meeting stringent automotive safety, thermal, and power requirements, operating in concert with Nvidia-accelerated AI processing to support Tensor’s proprietary autonomy stack.

Arm and Nvidia are part of a long list of partners revealed by Tensor. Automotive supplier partners include Autoliv, ZF, Bosch, Continental, Veoneer, Schaeffler, Batz, ARaymond, ITW, FEV, Omnex, HomeLink, Rocsys, and Brose. Chip-maker collaborators include AMD, NXP, TI, Qualcomm, Samsung, Broadcom, Infineon, Renesas, U-blox, muRata, Hamamatsu, AMS OSRAM, Maxim, STMicroelectronics, and Indie Semiconductor. Its cloud provider is Oracle.

 

From retrofitter to OEM

For those unfamiliar with the company, Tensor was reborn from AutoX in August 2025, pivoting from retrofitting AV technology onto existing vehicles to producing luxury Level 4 AVs for personal ownership, the latest turn in its decade-long journey as a company. That pivot began about six years ago, according to Amy Luca, CMO at Tensor, and culminated in the San Jose, CA-based agentic AI startup unveiling what it says will be the world’s first personally owned AV on August 13th and showing it to the public for the first time at The Quail during Monterey Car Week 2025 on August 15th.

The company says its Robocar is built on a decade of proprietary engineering, advanced AI, and a ground-up design for autonomous and agentic vehicles to empower individuals to “truly own” their autonomy. Futurride caught up with COO Li and CMO Luca at CES 2026 to learn more about the company’s mission and plans.

“We are building a world where individuals own their personal AGI agents, enhancing freedom, privacy, and autonomy,” said Luca. “With Tensor, we’re introducing the world’s first personal Robocar, ushering in the era of AI-defined vehicles. This isn’t a car as we know it. It’s an embodied personal agent that moves you.”

As the company sees it, the overwhelming majority of new vehicles sold in the world are purchased for personal use, and it wants to extend this ownership model into the future with the first volume-produced, consumer-ready AV—designed from the ground up for private ownership at scale.

While today’s AVs are primarily built as robotaxis in heavily maintained, depot-dependent fleets, Tensor says its offering represents a radical departure. Its “truly independent” personal robocar is engineered to serve its owner, not its operator, designed for real-world autonomy without daily technician oversight.

From intelligent sensor-cleaning systems with fluid washers, wipers, and nozzles, to self-diagnosis, autonomous parking, charging, and protective sensor covers when idle, the Robocar is designed to anticipate and solve the challenges of everyday ownership. It performs its own startup checks, manages maintenance, updates over-the-air, and is always ready—whether parked in a garage with no signal or delivering itself for service.

 

World firsts

Tensor claims 28 world firsts specifically designed to integrate the Robocar into “the modern mobile lifestyle.” In addition to being a fully autonomous, automotive-grade vehicle built for private ownership and mass production, it’s also the world’s first AI agentic vehicle, powered by a multimodal LLM (large language model) that perceives, reasons, and takes action using tool-use capabilities through MCPs (model context protocols)—while learning user preferences over time.

The Robocar features the world’s most powerful in-vehicle supercomputer, with over 8000 TOPS of Nvidia Drive Thor-X GPU computing capability. This system processes a staggering 53 GB of sensor data per second from over 100 sensors, allowing it to perceive and respond to its environment in real time with robust safety and performance.

Tensor says it is the first to natively support all levels of autonomy (from 0 to 4) and features a “dual-mode” design with deployable steering wheel and pedals that let users drive or be driven.

It also reimagines the in-cabin experience with the world’s first foldable steering wheel, developed with Autoliv, and a sliding display, giving users more space, more flexibility, and a radically new way to interact with the vehicle. This lets the Robocar operate both in Level 4 autonomy and as a “regular car” that can be human-driven outside of regulatory restrictions.

The retracted steering wheel also serves as an indicator to the person behind the wheel that the car is in Level 4 operation, and it helps keep the human from interfering during autonomous mode. To switch from autonomous to human-driven mode, the car must stop in a safe area.

The movable steering wheel required special airbag considerations, too. The car has a dynamic airbag system to make sure that it deploys the right airbag in the right situation, according to Luca.

The company says the Robocar isn’t adapted for autonomy but designed and engineered from the ground up to serve AI perception, safety, and reliability. It features over 100 sensors, radar-transparent materials, unobstructed lidar sightlines, and a low hood profile for a safety-first, uncompromised-visibility philosophy. Luca also pointed out the LED screens on the Robocar’s rear exterior, which let pedestrians know that the car can see them and when it is safe to cross its path.

Most sensors are designed by Tensor and include proprietary lidar and advanced 17-MP cameras. There are even cameras under the car, “so it does not run over your cat,” according to Luca.

Its advanced E/E (electrical/electronic) architecture delivers full-stack redundancy across power, communication links, control, sensors, drive-by-wire systems, and thermal management, eliminating single points of failure and ensuring fail-operational performance in any scenario.

 

AV 3.0

Driving the advanced AI is the Tensor Foundation Model. Unlike traditional rule-based systems, it is entirely data-driven, learning perception, prediction, and planning from vast real-world and simulated datasets.

“We have something that is unique,” said Li. “This is not just an AV 1.0, a rule-based system, or a 2.0, what they’re calling an end-to-end learning system. We have agentic AI along with all the foundation models and world models in the simulation. This is one of a kind. We call it AV 3.0.”

The company aims to lead the way with a term it expects the rest of the industry to follow.

The company says the car, built on a transformer architecture with advanced sensor fusion, navigates challenging conditions such as night, glare, fog, and rain with confidence.

The dual-system AI mirrors human cognition. System 1 delivers fast, reflexive responses through imitation learning from expert drivers, while System 2 uses a sophisticated multimodal visual language model to reason through rare and complex edge cases. Trained on proprietary and internet-scale visual data, it enables the car to handle the unexpected.
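The split described above can be sketched as a simple confidence-gated dispatch, a pattern common to dual-system designs: a fast reflexive policy handles routine frames, and a slower reasoner is consulted only when the fast path’s confidence drops. The names and rules below are hypothetical illustrations, not Tensor’s actual interfaces.

```python
from dataclasses import dataclass


@dataclass
class Plan:
    action: str
    source: str  # which system produced the plan


def system1_policy(scene_complexity: float) -> tuple[Plan, float]:
    """Fast, reflexive path (stand-in for an imitation-learned policy).
    Returns a plan and a confidence score in [0, 1]."""
    confidence = max(0.0, 1.0 - scene_complexity)
    return Plan(action="follow_lane", source="system1"), confidence


def system2_reason(scene_complexity: float) -> Plan:
    """Slow, deliberate path (stand-in for a multimodal VLM reasoner),
    invoked only for rare or complex edge cases."""
    return Plan(action="slow_and_replan", source="system2")


def plan_frame(scene_complexity: float, threshold: float = 0.7) -> Plan:
    """Dual-system dispatch: trust System 1 when it is confident,
    otherwise escalate to System 2."""
    plan, confidence = system1_policy(scene_complexity)
    if confidence >= threshold:
        return plan
    return system2_reason(scene_complexity)


# Routine highway frame: System 1 handles it.
print(plan_frame(0.1).source)  # system1
# Unusual construction scene: escalate to System 2.
print(plan_frame(0.8).source)  # system2
```

The design choice this illustrates is that the expensive reasoner runs only on the rare frames the reflexive policy cannot handle, keeping average latency low.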

As “the world’s first AI agentic car,” the Tensor Robocar enables natural, conversational interaction whether the user is inside the cabin, nearby, connecting remotely via voice or text, or even waving at it like a friend. The software architecture is AI-native by design, making the vehicle a responsive and intelligent companion—one that supports, adapts, and evolves, unlike traditional vehicles.

At its core is a state-of-the-art multimodal LLM embedded within an agentic framework. The AI agent continuously processes data from in-cabin cameras, microphones, and other sensors for perception, using the LLM for advanced reasoning, decision-making, and tool use.

Unlike cloud-dependent systems, all data on the Tensor Robocar—users’ locations, preferences, and records—is processed and stored locally on the vehicle, ensuring security and privacy. Users can access their data via the end-to-end encrypted smartphone app or the onboard interface, and the company says the data cannot be tracked, extracted, or compromised without consent. Additionally, physical camera covers and microphone off switches give users guaranteed privacy and discretion.

 

Open-source AI training toolchain

At CES 2026, the company announced the official open-source release of OpenTau (τ), its AI training toolchain designed to accelerate the development of VLA (vision-language-action) foundation models, which it says are critical building blocks for the next generation of physical AI systems. The open-source AI training platform was built to make large-scale AI training reproducible, accessible, and scalable.

Integrating vision, language, and action into a single multimodal foundation model, the approach enables intelligent systems to understand, reason, and act—and is increasingly recognized as a leading paradigm for embodied AI, with applications spanning autonomous driving, robotic manipulation, and navigation.
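In code terms, the defining property of a VLA model is a single call that maps a visual observation plus a language instruction to an action, rather than separate perception, planning, and control modules. The toy sketch below uses hypothetical names and hand-written rules, not OpenTau’s actual API; a real VLA would replace the rules with one multimodal transformer forward pass.

```python
def vla_policy(image_embedding: list[float], instruction: str) -> str:
    """Toy vision-language-action step: one function, three modalities.
    The rules below are placeholders for a learned multimodal model."""
    # Language input can override everything else, as an
    # instruction-following model would weigh it heavily.
    if "stop" in instruction.lower():
        return "brake"
    # Crude "vision": treat the mean embedding value as scene visibility.
    visibility = sum(image_embedding) / len(image_embedding)
    if visibility < 0.2:
        return "slow"
    return "proceed"


# Clear scene, neutral instruction.
print(vla_policy([0.8, 0.9, 0.7], "continue to the destination"))  # proceed
# Low-visibility scene (e.g., fog at night).
print(vla_policy([0.05, 0.1, 0.0], "continue to the destination"))  # slow
# Explicit language command wins.
print(vla_policy([0.8, 0.9, 0.7], "Stop at the next corner"))  # brake
```

However trivial the rules, the interface shape is the point: vision and language enter together, and an action comes out of a single model boundary.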

Tensor released OpenTau to the global research and developer community to make advanced training capabilities available beyond closed, proprietary environments—helping accelerate innovation across the industry. The company believes that open sourcing is central to its philosophy, providing scientific transparency and enabling independent validation.

“At Tensor, we believe meaningful progress in physical AI requires transparency,” said Jay Xiao, Founder and CEO of Tensor. “OpenTau is our way of giving back to the research and developer community that has helped advance this field. By open-sourcing our training toolchain, we’re supporting broader collaboration—so everyone can build, experiment, and move faster together.”

Designed with reproducibility and extensibility in mind, OpenTau enables researchers and developers to experiment with advanced training strategies that were previously difficult to implement or inaccessible outside of large-scale industrial research settings.

Tensor is inviting researchers, developers, and builders to explore the toolchain, contribute to its evolution, and help shape the future of physical AI by starring the repository, forking and experimenting with the codebase, and building or extending frontier VLA models.

 

Going to market in 2026

The Robocar will be offered to consumers in the U.S., EU, and Middle East markets in 2026, at luxury-car pricing that has yet to be revealed.

Production will be scaled at VinFast’s Hai Phong factory in Vietnam, where every stage of manufacturing—from stamping and welding to painting and final assembly—will be carried out, paving the way for mass-market deployment.

Tensor is collaborating with global insurance broker and risk advisor Marsh to provide the world’s first insurance policy for robocars.

While focused on personal AVs, Tensor also announced a partnership in October with Lyft, which has reserved hundreds of Robocars for its fleet operations. The collaboration will introduce the world’s first consumer-owned AVs that are “Lyft-Ready” at delivery, allowing owners to monetize their personal cars on the Lyft network—the first time a privately owned AV will be able to join a rideshare platform.

“Our mission of ‘Own Your Autonomy’ is about empowering you to take control of your mobility,” said Hugo Fozzati, Chief Business Officer at Tensor. “Through our upcoming partnership with Lyft, we’re taking a bold step forward…by allowing your Tensor to operate as a vehicle on the Lyft platform when you’re not using it, you can effortlessly generate revenue.”

Traditionally, a car sits unused for most of the day, an unproductive time for a depreciating asset. The Tensor/Lyft deal flips this model, turning a luxury vehicle into a productive, income-generating asset.

“Lyft has created opportunities for millions of people to earn on the platform, but right now, one of the last barriers to rideshare is time,” said Jeremy Bird, Executive Vice President at Lyft. “What’s exciting about Tensor is they’re advancing the opportunity that Lyft already creates, removing that final obstacle while reinforcing our vision of a hybrid transportation future.”

By integrating with the Lyft platform, owners can choose to allow their vehicle to operate autonomously on the rideshare network when they’re not using it, generating passive income. Tensor says that this consumer-owned, monetizable AV model creates a new category of vehicle ownership.