The Computer Vision Center (CVC) has launched the latest version of CARLA (Car Learning to Act), its open-source simulator for automated driving. Development, directed by the CVC at the Universitat Autònoma de Barcelona in Spain, has been supported by Intel, Toyota Research Institute (TRI), KPIT, and GM Research & Development. Back in June 2018, TRI announced its support for CVC’s efforts to accelerate development of the simulator.

“Fostering the development of a common open simulation platform will allow TRI and its academic and industrial partners to better exchange code, information, and data,” Vangelis Kokkevis, Director of Driving Simulation at TRI, said at the time. (Kokkevis is now with Waymo.)

“CARLA is born to democratize research on automated driving, supporting training and testing of AI drivers beyond real-world limitations,” added Dr. Antonio López, responsible for the project at CVC. “The joint work of CARLA engineers, artists, and scientists is making this possible.”

Hosted on GitHub, the non-profit, open-source simulator supports the development, training, and validation of automated urban driving systems. It is designed to test the behavior of automated vehicles in situations that cannot always be reproduced safely in the real world.

CARLA has been developed from the ground up to support the development, training, and validation of autonomous driving systems. In addition to open-source code and protocols, it provides open digital assets (such as urban layouts, buildings, and vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions, full control of all static and dynamic actors, and map generation, among other features.
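As a rough illustration of that actor-level control, the sketch below uses CARLA's Python client API to connect to a running simulator and spawn a vehicle from the bundled blueprint library of open assets. The host, port, and blueprint filter are assumptions chosen for the example, not required values.

```python
import carla

# Connect to a CARLA server assumed to be running locally on the default port.
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# The blueprint library exposes the open digital assets (vehicles, sensors, props).
blueprint_library = world.get_blueprint_library()
vehicle_bp = blueprint_library.filter('vehicle.*')[0]  # pick any available vehicle asset

# Spawn the vehicle at one of the map's predefined spawn points.
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
print('spawned', vehicle.type_id)
```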

The latest release, CARLA 0.9.10, features a new semantic LIDAR sensor, automatic map generation with OpenStreetMap, and a brand-new repository for plug-ins made by contributors. Highlights of the platform include scalability via a server multi-client architecture, which allows multiple clients on the same or different nodes to control different actors. Its powerful API lets users control all aspects of the simulation, including traffic generation, pedestrian behaviors, weather, and sensors.
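To give a sense of that API surface, the following sketch changes the weather and hands a spawned vehicle over to the built-in autopilot so it joins the simulated traffic. It assumes the `client`, `world`, and `vehicle` objects from the previous snippet, and the weather values are purely illustrative.

```python
import carla

# Set environmental conditions through the API (parameter values are illustrative).
weather = carla.WeatherParameters(
    cloudiness=80.0,
    precipitation=30.0,
    sun_altitude_angle=20.0)
world.set_weather(weather)

# Let the server-side traffic logic drive the previously spawned vehicle.
vehicle.set_autopilot(True)
```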

Users can configure diverse autonomous driving sensor suites, including LIDARs, multiple cameras, depth sensors, and GPS. A fast simulation mode for planning and control disables rendering, offering fast execution of traffic simulation and road behaviors for which graphics are not required. The ScenarioRunner engine allows users to define and execute different traffic situations based on modular behaviors.
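A minimal sketch of the first two ideas, assuming the `world`, `blueprint_library`, and `vehicle` objects from the earlier snippets: it attaches the new semantic LIDAR to the vehicle and then switches the world into the no-rendering, synchronous mode used for fast planning-and-control runs. The sensor attributes, mounting transform, and time step are placeholders.

```python
import carla

# Attach the semantic LIDAR introduced in 0.9.10 to the ego vehicle.
lidar_bp = blueprint_library.find('sensor.lidar.ray_cast_semantic')
lidar_bp.set_attribute('range', '50')
lidar_transform = carla.Transform(carla.Location(z=2.5))  # roughly roof height
lidar = world.spawn_actor(lidar_bp, lidar_transform, attach_to=vehicle)
lidar.listen(lambda data: print('semantic lidar frame', data.frame))

# Disable rendering and step synchronously for fast, graphics-free simulation.
settings = world.get_settings()
settings.no_rendering_mode = True
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05  # 20 Hz simulation step
world.apply_settings(settings)
world.tick()
```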

Among integrations, users can create their own maps following the ASAM (Association for Standardization of Automation and Measuring Systems) OpenDRIVE standard via tools like the MathWorks VectorZero RoadRunner. CARLA is integrated with the ROS (Robot Operating System) framework for writing robot software via a ROS bridge. Autonomous driving baselines are provided as runnable agents from the Autoware Foundation and Conditional Imitation Learning.
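As a hedged example of the OpenDRIVE workflow, the sketch below reads an `.xodr` file exported from a tool such as RoadRunner and asks the client to build a drivable world from it. The file name is a placeholder, and `generate_opendrive_world` is the call recent CARLA releases expose for this purpose; the exact signature may vary between versions.

```python
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)

# 'town_custom.xodr' is a placeholder for a map exported in ASAM OpenDRIVE format.
with open('town_custom.xodr', 'r') as f:
    xodr_content = f.read()

# Ask the server to procedurally generate a world from the road description.
world = client.generate_opendrive_world(xodr_content)
print('loaded map:', world.get_map().name)
```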