How Do Self-Driving Cars Work?
SAE International (the Society of Automotive Engineers) defines various levels of vehicular autonomy, ranging from level zero (no automation, like most everyday vehicles) through level five (full automation, requiring no human interaction). Currently, we are far from level five automation, since there are many situations with which autonomous vehicles cannot cope.
That said, if the industry and academia can keep up the good work, we might be there sooner than we think. Can you even imagine a car without a steering wheel?
Let’s look at some of the enabling technologies that make a vehicle autonomous, and how these technologies integrate to allow a car, truck or SUV to navigate streets autonomously.
First, let’s imagine we have a car we want to make autonomous. There are three main elements this car will need:
Components of a Self-Driving Car
- HDMap (High Definition Map)
- State and geolocation estimator
- Motion planner
Before we jump into addressing these key aspects of autonomous vehicles, there are a few background concepts we need to explore first, like the sensors we use.
Sensors in Autonomous Vehicles
LiDAR (light detection and ranging): a remote sensing method that uses light in the form of a pulsed laser to measure ranges (variable distances) to surrounding objects. This technology scans roads and buildings. With a LiDAR scan, we generate a point cloud (literally a data set of points) which we can load to represent the real world.
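The point cloud idea can be made concrete with a short sketch. Assuming (hypothetically) that each raw LiDAR return is a range plus a horizontal and vertical beam angle, we can convert the returns into an N×3 array of Cartesian points:

```python
import numpy as np

# Sketch: convert raw LiDAR returns (range, azimuth, elevation) into a
# Cartesian point cloud -- the "data set of points" mentioned above.
def ranges_to_point_cloud(ranges, azimuths, elevations):
    """Convert spherical LiDAR measurements to an (N, 3) array of x, y, z points."""
    r = np.asarray(ranges, dtype=float)
    az = np.asarray(azimuths, dtype=float)    # horizontal angle, radians
    el = np.asarray(elevations, dtype=float)  # vertical angle, radians
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.stack([x, y, z], axis=1)

cloud = ranges_to_point_cloud([10.0, 12.5], [0.0, np.pi / 2], [0.0, 0.0])
print(cloud.shape)  # (2, 3)
```

Real LiDAR drivers emit many thousands of such points per scan, but the underlying representation is the same array of 3-D coordinates.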
RADAR (radio detection and ranging): a detection system that uses radio waves to determine the range, angle or velocity of objects. RADARs are among the simplest sensors we can have in an autonomous vehicle. They cover only short distances, but they’re relatively cheap compared to LiDAR. Currently, many vehicles already use RADAR technology for collision prevention during parking.
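The physics behind those range and velocity measurements fits in a few lines. This is a simplified sketch of the standard time-of-flight and Doppler relations, not any particular radar unit’s firmware:

```python
# Speed of light in m/s.
C = 299_792_458.0

def radar_range(round_trip_time_s):
    """Range to target: the pulse travels out and back, so divide by two."""
    return C * round_trip_time_s / 2.0

def radial_velocity(doppler_shift_hz, carrier_freq_hz):
    """Approximate radial velocity of a target from the Doppler shift of its echo."""
    return doppler_shift_hz * C / (2.0 * carrier_freq_hz)

# A pulse that returns after 200 nanoseconds hit something ~30 m away.
print(round(radar_range(2e-7), 2))  # 29.98
```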
GPS (global positioning system): In simple terms, we all know what GPS means. When you need to geolocate yourself on the planet, you activate your smartphone’s GPS and suddenly you have Google Maps or any other geolocation-dependent functionality.
Camera: These are important sensors that allow autonomous vehicles to identify objects and people in the real world. Thanks to the latest developments in machine learning techniques, particularly convolutional neural networks, autonomous vehicles can use cameras for object detection and object identification.
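Whatever CNN detector a vehicle uses, its raw output is typically a list of (label, confidence score, bounding box) triples that must be filtered before the planner acts on them. The detections below are made-up values for illustration, not the output of a real model:

```python
# Sketch: post-processing the raw output of a hypothetical CNN object
# detector. Real detection models produce similar (label, score, box)
# triples, which are thresholded before being trusted downstream.
def filter_detections(detections, min_score=0.5):
    """Keep only detections the network is reasonably confident about."""
    return [d for d in detections if d["score"] >= min_score]

raw = [
    {"label": "pedestrian", "score": 0.91, "box": (120, 40, 180, 200)},
    {"label": "car",        "score": 0.34, "box": (300, 90, 420, 160)},
]
print([d["label"] for d in filter_detections(raw)])  # ['pedestrian']
```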
Now let’s jump into looking at the key aspects of autonomous vehicles.
HDMap (High Definition Map)
The very first thing the car needs is the ability to detect its location in the world. To do this, an automated vehicle needs to have an HDMap that includes a lot of data about the road and the surroundings. Building an HDMap requires a lot of effort; there are companies whose only purpose is to create and keep HDMaps up-to-date.
To create an HDMap, a combination of LiDAR and cameras scans the area surrounding the vehicle, and this data is analyzed using computer vision to extract road signage, nearby vehicles and lane objects. Autonomous vehicles must always know which lane they are in throughout an established route, including all necessary lane changes. For this we can use LaneNet, a widely used lane-detection model in the world of autonomous vehicles.
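A toy version of the lane-membership question can be sketched without any mapping library (this is an illustration, not the LaneNet API): given lane centerlines stored as polylines in the HDMap, find the lane whose centerline passes closest to the vehicle’s position.

```python
import math

# Sketch: locate which mapped lane the vehicle occupies by finding the
# lane centerline point closest to its current position. Lane IDs and
# coordinates here are invented for illustration.
def nearest_lane(position, lanes):
    """lanes: dict mapping lane_id -> list of (x, y) centerline points."""
    best_lane, best_dist = None, math.inf
    for lane_id, centerline in lanes.items():
        for (x, y) in centerline:
            d = math.hypot(x - position[0], y - position[1])
            if d < best_dist:
                best_lane, best_dist = lane_id, d
    return best_lane

# Two parallel lanes, 3.5 m apart (a typical lane width).
lanes = {"lane_1": [(0.0, 0.0), (0.0, 10.0)], "lane_2": [(3.5, 0.0), (3.5, 10.0)]}
print(nearest_lane((3.2, 5.0), lanes))  # lane_2
```

A production HDMap would use dense, interpolated centerlines and spatial indexing, but the lookup it performs is conceptually this one.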
State and Geolocation Estimator
State estimators coordinate the input from all the sensors in the autonomous vehicle and keep the geolocation of the vehicle within the HDMap up-to-date. The state estimator does this by receiving input and aggregating data from all different parts of the vehicle.
Different situations might favor different sensors. For example, if the vehicle is inside a tunnel the GPS signal might not be reliable and the state estimator might have to rely on other sensors such as LiDAR, RADAR and the tires’ motion to update the geolocation of the vehicle.
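One simple way to express “rely on other sensors” is a confidence-weighted average, where each source’s weight drops when it becomes unreliable. This is a deliberately minimal sketch (real estimators use probabilistic filters), with invented confidence values:

```python
# Sketch: a confidence-weighted position estimate. In a tunnel, the GPS
# confidence drops, so the fused estimate leans on odometry/LiDAR instead.
def fuse_positions(estimates):
    """estimates: list of (position, confidence) pairs."""
    total = sum(conf for _, conf in estimates)
    return sum(pos * conf for pos, conf in estimates) / total

# (GPS reading, GPS confidence), (odometry reading, odometry confidence)
open_road = fuse_positions([(100.0, 0.9), (102.0, 0.5)])   # trusts GPS more
in_tunnel = fuse_positions([(100.0, 0.05), (102.0, 0.5)])  # trusts odometry more
print(round(open_road, 2), round(in_tunnel, 2))  # 100.71 101.82
```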
At the same time, on a highway (or a motorway, in the United Kingdom) a truck might be in front of the vehicle, blocking the LiDAR sensor from perceiving the world ahead. In this situation, our self-driving car would be blind. But with a reliable HDMap and GPS signal, our vehicle can have a very good idea of what lies ahead of it (whether it be the next junction or exit).
Ultimately, a state estimator will receive and combine data from multiple sensors within the autonomous vehicle. Not all sensors send data at the same rate. A LiDAR system can provide many pulsations per millisecond, while GPS takes longer to update. The state estimator unifies values from various inputs.
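A common way (an assumption on my part, the article does not name a method) to unify readings that arrive at different rates is a Kalman-style update: each measurement refines the running estimate whenever it arrives, weighted by how noisy that sensor is. Here is a minimal one-dimensional sketch:

```python
# Minimal 1-D Kalman-style update: fast, precise readings (e.g. LiDAR-derived
# fixes) and slower, noisier ones (e.g. GPS) each refine the same estimate.
def kalman_update(estimate, variance, measurement, meas_variance):
    gain = variance / (variance + meas_variance)          # trust in the new reading
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1.0 - gain) * variance                # uncertainty shrinks
    return new_estimate, new_variance

est, var = 0.0, 1000.0                         # very uncertain prior
est, var = kalman_update(est, var, 50.2, 0.5)  # precise, frequent reading
est, var = kalman_update(est, var, 49.0, 5.0)  # noisier, infrequent reading
print(round(est, 2))  # ~50.07: pulled slightly toward the noisier reading
```

The real problem is multi-dimensional (position, heading, velocity) and usually handled by an extended Kalman filter or similar, but the unification principle is the same.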
Motion Planner
A motion planner is the algorithmic component that acts based on the vehicle’s route; it is in charge of the movement. If we intend to move a self-driving car from point A to point B, the first option might be going forward (or reversing, or turning). The motion planner determines which maneuvers are required for the vehicle to reach its destination. When the state estimator reports that an obstacle obstructs the vehicle’s route, the motion planner calls for an emergency stop. When it’s time for the vehicle to change lanes, the motion planner calls a maneuver for switching lanes.
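The maneuver selection described above can be caricatured as a priority-ordered rule set. The function and maneuver names here are illustrative, not from any real planning library; production planners score many candidate trajectories rather than pick from three rules:

```python
# Toy rule-based sketch of maneuver selection: safety first, then the
# route's requirements, then the default behavior.
def choose_maneuver(obstacle_ahead, lane_change_needed):
    if obstacle_ahead:
        return "emergency_stop"   # safety overrides everything else
    if lane_change_needed:
        return "change_lane"      # required by the planned route
    return "keep_lane"            # default: continue in the current lane

print(choose_maneuver(obstacle_ahead=False, lane_change_needed=True))  # change_lane
```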
* * *
These are the basic aspects to consider when diving into the world of autonomous vehicles. There are many more libraries, algorithms, and vehicle architectures to consider in the development of an autonomous vehicle, but you should now have a basic understanding of what makes a self-driving car drive itself.
This article was originally published on Towards Data Science.