What is CARLA? (ongoing)
CARLA (CAR Learning to Act) is an open-source autonomous driving simulator. It is a realistic simulator that integrates physics and multiple sensors (GPS, LiDAR, cameras, etc.) used to train autonomous cars.
At its core, CARLA is a simulation environment (essentially a game) where a car, an environment, and various interactive objects can be spawned and controlled through commands. The simulation engine processes these commands and returns updated information about the world state and sensor readings, allowing us to decide what actions to take next (what commands to send next).
CARLA itself operates as a server (often referred to as the CARLA server). It receives commands through the TCP protocol, following a specific message format defined by CARLA's developers. Each command sent to the server triggers a response containing sensor data and any other relevant simulation information.
To interact with the CARLA server, we use what's called a CARLA client. This client can communicate with the server either by manually sending TCP messages, or more conveniently through one of the provided APIs such as the Python client, C++ client, or any other compatible implementation.
In short, CARLA provides a flexible and realistic environment for developing, testing, and evaluating autonomous driving systems without the need for a physical car.
Sources
- Carla Introduction YouTube Playlist: https://www.youtube.com/watch?v=L8ypSXwyBds (part on RL with DQN and automatic steering).
- Carla Sensors YouTube Video: https://www.youtube.com/watch?v=om8klsBj4rc
- [ ] To watch: https://www.youtube.com/watch?v=MNiqlHC6Kn4
Notes
- client.set_timeout(<time-in-seconds>): raises an exception on the client if the server takes more than <time-in-seconds> to respond to a request.
- blueprint_library = world.get_blueprint_library(): asks the simulator to return the collection of blueprints (all the templates for things you can spawn in the simulation, preconfigured on the server side).
  - The blueprints can later be searched to find specific car models, sensors, objects, etc.
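A minimal sketch of the typical setup, assuming a CARLA server already running locally on the default port 2000:

import carla

client = carla.Client("localhost", 2000)
client.set_timeout(5.0)  # fail fast if the server does not answer within 5 seconds

world = client.get_world()
blueprint_library = world.get_blueprint_library()

# Blueprints can be searched, e.g. for every Tesla vehicle model.
for blueprint in blueprint_library.filter("vehicle.tesla.*"):
    print(blueprint.id)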
vehicle.apply_control(
    carla.VehicleControl(
        throttle=0.0,             # float, [0.0; 1.0] Accelerator pedal position
        steer=0.0,                # float, [-1.0; 1.0] Steering wheel, left (-) to right (+)
        brake=0.0,                # float, [0.0; 1.0] Brake pedal
        hand_brake=False,         # bool, engage handbrake
        reverse=False,            # bool, drive backwards
        manual_gear_shift=False,  # bool, use manual gear mode
        gear=0,                   # int, gear number (if manual shifting)
    )
)
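For example, a gentle acceleration with a slight right turn (assuming vehicle is an actor spawned as shown below) could look like:

vehicle.apply_control(carla.VehicleControl(throttle=0.3, steer=0.1))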
Example of spawning a vehicle and a sensor:
car_blueprint = blueprint_library.filter("vehicle.tesla.model3")[0]
# car_blueprint.set_attribute("<name>", "<value>")
car_spawn_point = world.get_map().get_spawn_points()[0]
car = world.spawn_actor(car_blueprint, car_spawn_point)

sensor_blueprint = blueprint_library.filter("sensor.camera.rgb")[0]
# sensor_blueprint.set_attribute("<name>", "<value>")
# NOTE: the spawn point is relative to the vehicle the sensor is attached to
sensor_spawn_point = carla.Transform(carla.Location(x=2.5, z=0.7))
sensor = world.spawn_actor(sensor_blueprint, sensor_spawn_point, attach_to=car)

sensor.listen(lambda image: image.save_to_disk(f"output/{image.frame}.png"))
An image segmentation algorithm can then be run on this image to find the road limits, an object detection algorithm to check whether any obstacles are on the road, etc.
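A minimal sketch of turning the raw camera frame into a NumPy array that such algorithms can consume; the RGB camera delivers a flat BGRA byte buffer, and process() is a hypothetical placeholder for any downstream model:

import numpy as np

def to_array(image):
    # carla.Image.raw_data is a flat BGRA buffer of height * width * 4 bytes
    data = np.frombuffer(image.raw_data, dtype=np.uint8)
    bgra = data.reshape((image.height, image.width, 4))
    return bgra[:, :, :3]  # drop the alpha channel

sensor.listen(lambda image: process(to_array(image)))  # process() is hypothetical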
Sensors
- Cameras.
- LIDAR.
- RADAR.
- Inertial Measurement.
- Collision / Obstacle detection.
- Lane Invasion.
- Satellite Location (GNSS).
The following code will print all the available sensors:
for blueprint in blueprint_library.filter("sensor"):
    print(blueprint.id)
sensor.other.gnss
sensor.lidar.ray_cast
sensor.camera.semantic_segmentation
sensor.other.radar
sensor.other.lane_invasion
# TODO: what is it ?
sensor.camera.instance_segmentation
sensor.camera.rgb
# TODO: what is it ?
sensor.other.rss
sensor.camera.optical_flow
sensor.lidar.ray_cast_semantic
sensor.other.obstacle
sensor.other.imu
# TODO: what is it ?
sensor.camera.depth
# TODO: what is it ?
sensor.camera.dvs
sensor.other.collision
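The non-camera sensors follow the same spawn-and-listen pattern. A sketch with the collision sensor, attached to the car spawned earlier (the event fields used here are from the CARLA Python API):

collision_blueprint = blueprint_library.find("sensor.other.collision")
collision_sensor = world.spawn_actor(collision_blueprint, carla.Transform(), attach_to=car)

def on_collision(event):
    # event.other_actor is the actor the vehicle collided with
    print("Collision with", event.other_actor.type_id)

collision_sensor.listen(on_collision)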
Besides the RGB image provided by the camera, we also have access to a segmented version of the image where we know what each pixel belongs to (car, road, lane line, etc.). This is called a semantic view, and each pixel's value is an integer between 0 and 21, each representing a specific class. CARLA provides the following views: RGB, Semantic, Optical Flow, Monocular Depth, Dynamic Vision.
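For example, the semantic view can be saved as a colorized image by applying the CityScapes palette converter when writing it to disk (a sketch reusing the spawn point and vehicle from the example above):

segmentation_blueprint = blueprint_library.find("sensor.camera.semantic_segmentation")
segmentation_camera = world.spawn_actor(segmentation_blueprint, sensor_spawn_point, attach_to=car)
# Each pixel encodes a class id; the converter maps those ids to CityScapes colors.
segmentation_camera.listen(
    lambda image: image.save_to_disk(
        f"output/seg_{image.frame}.png",
        carla.ColorConverter.CityScapesPalette,
    )
)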
Instance segmentation is a more advanced version of semantic segmentation: rather than just telling us that a given pixel belongs to a car, instance segmentation tells us that the pixel belongs to, for example, car #3.
- [ ] Responsibility-Sensitive Safety (RSS)?