How does an autonomous car see? Google’s GOOG driverless car prototype uses LiDAR, which is a system of lasers that spins on top of the car to create a 3D map of its surroundings. LiDAR is part of the car’s perception system, which determines where and what an object is, along with its speed and direction.
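To make the 3D-mapping idea concrete, here is a minimal sketch of how one raw return from a spinning LiDAR (a measured range, the spin angle, and the beam's tilt) maps to a point in 3D space. This is an idealized geometric illustration, not code from any actual LiDAR vendor; the function name and the simplified spherical-to-Cartesian model are assumptions for illustration.

```python
import math

def lidar_return_to_point(range_m, azimuth_deg, elevation_deg):
    """Convert one idealized LiDAR return into Cartesian coordinates
    relative to the sensor. azimuth = spin angle of the laser head,
    elevation = vertical tilt of the individual beam."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left/right
    z = range_m * math.sin(el)                 # up/down
    return (x, y, z)

# A return 10 m away, straight ahead, from a level beam:
point = lidar_return_to_point(10.0, 0.0, 0.0)  # → (10.0, 0.0, 0.0)
```

Sweeping the azimuth through 360° and repeating for each beam yields the point cloud that the perception system turns into a 3D map of the car's surroundings.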
Although LiDAR is a superior technology to competing camera-based systems, LiDAR products are currently much more expensive than cameras, which impedes their adoption. The chart below shows the current differences between the two systems.
The most important difference between the two is that LiDAR offers more accurate object detection. All standalone camera systems available today require human supervision. While computer vision may be 99% accurate, even a small fraction of a percent of missed detections has enormous consequences. Brad Templeton, a consultant for the Google car and an autonomous technology expert, puts it simply: “Technologies that detect 99 out of 100 objects would be acceptable if one accepts the consequences of failing to detect 1 out of 100 pedestrians or bikers.”1
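A quick calculation shows why 99% per-object accuracy is not good enough at fleet scale. The figures below (1,000 object encounters) are hypothetical and chosen only to illustrate how per-object miss rates compound:

```python
def prob_at_least_one_miss(per_object_accuracy, n_objects):
    """Probability that at least one of n independent object
    detections fails, given a fixed per-object accuracy."""
    return 1.0 - per_object_accuracy ** n_objects

# At 99% per-object accuracy, over a hypothetical 1,000 encounters,
# a miss is nearly guaranteed:
p = prob_at_least_one_miss(0.99, 1000)
print(f"{p:.4f}")  # ≈ 0.99996
```

A single drive can involve thousands of such detections, which is why "99 out of 100" is nowhere near an acceptable standard for an unsupervised system.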
The Velodyne system in Google’s initial prototype costs roughly $75,000 per unit, while Mobileye MLBY sells a camera-based collision avoidance system that costs $850. Tesla’s TSLA Elon Musk has said, “The problem with Google’s current approach is that the sensor system is too expensive… It’s better to have an optical system, basically cameras with software that is able to figure out what’s going on just by looking at things.”
Luckily for automakers, the difference in price between LiDAR and cameras could become negligible. Quanergy promises to release a solid state LiDAR within the next couple of years priced under $100, bringing costs in line with cameras while offering better perceptive ability. Solid state LiDAR systems will also be more reliable, lighter, smaller, and more energy efficient than Velodyne's mechanical system.
Recently Mr. Musk announced that Tesla will release a fully autonomous car in five to six years, whereas previously he had planned to release only a semi-autonomous version. Perhaps the cost decline of LiDAR is what swayed him. Once driverless systems are ready for public use, new lower cost systems will expand the market of potential buyers for autonomous cars, and ultimately improve the economics of shared autonomous taxi networks.
NOTE: LiDAR will never be a standalone solution for autonomous driving. The system is superior at detecting objects, but cameras will most likely be used in conjunction with LiDAR to identify color. A full suite of sensors will likely be required for autonomous vehicles: each sensor type has its own strengths and weaknesses, and real-time object detection and confirmation will be most reliable when more inputs, and more input types, feed into the process.
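The reliability argument for a full sensor suite can be sketched with the same miss-rate arithmetic: if sensors fail independently, an object goes undetected only when every sensor misses it at once. The per-sensor miss rates below are hypothetical placeholders, and real sensor failures are not fully independent (fog can degrade LiDAR and cameras together), so treat this as an upper-bound intuition rather than a real-world estimate:

```python
def fused_miss_prob(per_sensor_miss_rates):
    """Probability that every sensor misses the same object,
    assuming (optimistically) independent failures."""
    p = 1.0
    for miss_rate in per_sensor_miss_rates:
        p *= miss_rate
    return p

# Hypothetical miss rates: LiDAR 1%, camera 5%, radar 10%
p = fused_miss_prob([0.01, 0.05, 0.10])
print(f"{p:.6f}")  # combined miss chance ≈ 0.00005
```

Even with these rough numbers, fusing three imperfect sensors drives the combined miss rate far below what any single sensor achieves, which is the core case for using LiDAR, cameras, and other sensors together.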
1. Brad Templeton’s website was a key resource for this blog: http://www.templetons.com/brad/robocars/cameras-lasers.html
ARK's statements are not an endorsement of any company or a recommendation to buy, sell or hold any security. For a list of all purchases and sales made by ARK for client accounts during the past year that could be considered by the SEC as recommendations, click here. It should not be assumed that recommendations made in the future will be profitable or will equal the performance of the securities in this list. For full disclosures, click here.