
Autonomous automotive: what, where, when? A focus on software and computing

When our grandparents talked to our parents about the future of the car, they described human-piloted flying machines traveling at crazy speeds and uncertain altitudes. Our parents, never seeing the flying-car project materialize, were more measured in their predictions and simply imagined more powerful, faster, safer cars. Today, it is easier to imagine what the future of the car will be: autonomous. The generation that reaches adulthood in 2035 will probably never need to obtain a driver’s license. Two factors have anchored the autonomous car in reality: the rise of the web at the end of the 1990s, and its explosion in the 2000s. The web has allowed faster transmission of knowledge, easier communication, and increased creativity and innovation. The most important breakthrough, though, is the ability to store millions of gigabytes of data.

Data, fuel for AI

For fifteen years now, thanks to cheap and extremely powerful computing, artificial intelligence (AI), and more precisely the field of Machine Learning (which dates from the middle of the 20th century), has developed exponentially. Different Machine Learning techniques have been developed to create algorithms that can learn and improve on their own. Among these techniques are artificial neural networks, the foundation of Deep Learning.

To understand how Deep Learning works, let’s take the concrete example of image recognition shown in Figure 1. Imagine that a neural network is used to recognize photos that contain at least one cat (yes, it’s always a cat). To identify the cats in the photos, the algorithm must be able to distinguish different types of cats and to recognize a cat whatever the angle from which it was photographed. To achieve this, the neural network must be trained. A training set must therefore be compiled: hundreds of thousands of different pictures of cats, mixed with images of objects that are not cats. These images are converted into data and fed to the network. Artificial neurons assign a weight to the different elements, and the final layer of neurons gathers this information to decide whether or not it recognizes a cat. The network then compares this response to the correct answer as given by humans. If the answers match, the network keeps this success in memory and will use it later to recognize cats. If they do not, the network takes note of its error and adjusts the weights placed on the different neurons to correct it. The process is repeated hundreds of thousands of times, until the network is able to recognize a cat in a photo under all circumstances.

This learning technique is called supervised learning. There are about ten different learning methods, but this is the best known and was the first to be used and usable, precisely because the web held enough pictures of cats to train a neural network! The same holds for everything else, including speech recognition.

Deep learning - Courtesy of Yole Développement, 2019
Figure 1
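To make this training loop concrete, here is a minimal sketch of supervised learning in Python with PyTorch (one reasonable framework choice; the article names none). The random tensors stand in for the hundreds of thousands of labeled cat / not-cat photos, and the tiny network and ten iterations are purely illustrative:

```python
# A minimal supervised-learning loop, mirroring the steps described above.
import torch
import torch.nn as nn

# Stand-ins for the training set: 64 "images" of 3x32x32 pixels,
# each labeled 1 (cat) or 0 (not a cat) by a human.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,)).float()

# A tiny convolutional network: intermediate layers weight image
# features; the final layer gathers them into one cat/not-cat score.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 1),
)

loss_fn = nn.BCEWithLogitsLoss()  # compares predictions to the human labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(10):  # repeated hundreds of thousands of times in practice
    optimizer.zero_grad()
    scores = model(images).squeeze(1)  # the network's cat/not-cat guesses
    loss = loss_fn(scores, labels)     # how far off the correct answers?
    loss.backward()                    # propagate the error backwards...
    optimizer.step()                   # ...and adjust the neuron weights
    print(f"step {step}: loss {loss.item():.3f}")
```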

As you have probably noticed, AI is slowly but surely invading more and more markets, and thus the daily lives of all of us. Smartphones, smart homes, smart cities, smart “x”: all of these names mean that, at some point, the product will integrate some kind of AI algorithm. And what about the smart car? The name is the exception: we prefer to call it an ADAS car or a robotic car. But do not miss the point: AI is there too. How? In two ways: autonomy and infotainment. Yole Développement’s report, AI Computing for Automotive, describes the impact of AI on the automotive ecosystem through autonomous cars and infotainment applications and systems.

An autonomous smart home

Autonomous vehicles - Courtesy of Yole Développement, 2019
Figure 2

Let’s summarize a bit. On the autonomy side, as shown in Figure 2, two trends are moving in parallel: on one hand, the classic OEMs adding functionalities (including deep learning algorithms for object recognition) to passenger cars, pushing up advanced driver-assistance system (ADAS) levels; on the other hand, startups and tech giants offering services based on robotic vehicles, i.e. shuttles and robot taxis. These applications and systems have rapidly surrounded themselves with rich, diversified ecosystems, particularly in terms of sensors and computing. In the ADAS ecosystem, we find classic OEMs like GM, Ford, Toyota, BMW, Audi, and Mercedes, alongside new players such as Tesla and Nio. On the robotic vehicle side, tech giants including Google (Waymo), Uber, Yandex, Baidu, and Apple will offer the first robot taxi services in targeted cities this year, surrounded by startups that also offer MaaS (Mobility-as-a-Service). For robotic shuttles, buses, and commercial vehicles, an array of startups such as Navya, EasyMile, and Drive.ai offer transport services for people or goods in closed environments and at low speed; Tier-1s like Continental are also investing in this promising market. As for the traditional automotive market, we can expect the first level 2+ and level 3 ADAS cars (with AI-based autonomy) to arrive this year.

Stop. What are these levels of autonomy? Figure 3 explains them better than words could.

From ADAS to AD - Courtesy of Yole Développement, 2019
Figure 3
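For readers without the figure in front of them, the scale behind these level numbers can also be summed up in a few lines. A minimal sketch, using the commonly cited SAE J3016 wording rather than the figure itself:

```python
# SAE J3016 driving-automation levels, as commonly summarized.
SAE_LEVELS = {
    0: "No automation: the human driver does everything",
    1: "Driver assistance: steering OR speed is assisted (e.g. ACC)",
    2: "Partial automation: steering AND speed are assisted; the driver supervises",
    3: "Conditional automation: the car drives itself, but the driver must take over on request",
    4: "High automation: no driver needed within a defined domain (e.g. a geofenced city)",
    5: "Full automation: no driver needed anywhere, under any conditions",
}

for level, description in SAE_LEVELS.items():
    print(f"Level {level}: {description}")
```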

Autonomous cars need information about their surroundings in order to navigate safely. Numerous sensors can provide this information, including cameras and GNSS receivers, but only three modalities provide direct distance measurements: ultrasonic sensing for short-distance ranging, radar for object detection, and LiDAR for 3D perception. Today, these sensors are used in both passenger vehicles and robotic cars, but with differences: passenger vehicles have limited autonomous capabilities, whereas robotic cars are fully autonomous and rely on sensor redundancy. Due to the high attenuation of sound waves in air, ultrasonic sensors have a range limited to a few meters and are mainly used for parking assistance. Radars are commonly used in passenger vehicles for adaptive cruise control (ACC) and autonomous emergency braking (AEB), and, according to the Radar and Wireless for Automotive Report 2019, the automotive radar market is expected to reach more than $8B in 2024. Robotic cars use either 4D imaging radars, whose large antenna arrays enable angular resolutions below 1°, or tens of tiny ultra-wideband high-resolution radars. However, the current size of 4D imaging radar (larger than 10cm x 10cm) makes it unpopular with passenger-car designers, while the range of ultra-wideband radar is limited to a few tens of meters. LiDAR, priced at several thousand dollars, is unattractive to car manufacturers; however, its high performance, with a 200m range and 0.1° angular resolution, has made it widely adopted by robotic car companies.
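To illustrate the redundancy argument, the toy sketch below encodes the three direct-ranging modalities with the indicative figures quoted above (orders of magnitude, not specifications of any particular product) and asks which of them can still see an object at a given distance:

```python
# Toy model of the three direct distance-measurement modalities.
from dataclasses import dataclass


@dataclass
class RangingSensor:
    name: str
    max_range_m: float  # usable detection range, in meters
    typical_use: str


SENSORS = [
    RangingSensor("ultrasonic", 5.0,
                  "parking assistance (sound attenuates quickly in air)"),
    RangingSensor("radar", 200.0,
                  "ACC and AEB; 4D imaging variants resolve below 1 degree"),
    RangingSensor("lidar", 200.0,
                  "3D perception at ~0.1 degree angular resolution"),
]


def sensors_covering(distance_m: float) -> list[str]:
    """Modalities that can still range an object at this distance --
    a robotic car keeps several of them overlapping for redundancy."""
    return [s.name for s in SENSORS if s.max_range_m >= distance_m]


print(sensors_covering(3.0))    # ['ultrasonic', 'radar', 'lidar']
print(sensors_covering(150.0))  # ['radar', 'lidar']
```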

Processing all this sensor data for analysis requires a huge amount of computing. This segment has also grown enormously around major players like Intel, with its Mobileye products, and NVIDIA, with its Xavier SoC; these chips now include units designed specifically for computing deep learning algorithms. Other solutions offered in dedicated products from Renesas, Xilinx, and Kalray also show much potential.

AI also enters the fray with speech and gesture recognition technologies. Smart-home giants Google and Amazon are now in cars with their well-known speech recognition solutions – “Ok, Google!” and “Alexa”, respectively – and Google goes even further by integrating its Android operating system. In gesture recognition, Sony’s SoftKinetic plays a central role, developing these solutions with OEMs. On the computing side, the players are not much different from those in autonomous driving, because powerful (and power-hungry) solutions adapted to these specific application types must be developed.

The expected revolution

In 2018, only robotic vehicles could claim to carry in-car AI. Driven by computers equivalent to those found in datacenters, but in rather low volumes, the associated computing market reached $156M in 2018. Over the next ten years, with the development of robot taxis and shuttles, this market will remain the main revenue generator for AI in automotive, with $9B in total computing revenue expected in 2028. Is this surprising? A bit, yes, but the impact of volume is often overemphasized compared with the added value of AI computing.

In 2019, the first cars qualified as “ADAS level 3” will hit the road, while AI will enter ADAS level 2 cars, replacing conventional computer-vision algorithms. Yole expects a $63M computing market for ADAS in 2019, reaching almost $3.7B in 2028. In infotainment, AI is already present in high-end BMW, Volvo, and Mercedes models as an option involving relatively low volumes, and embedded in-car computing remains quite inexpensive because the heavy computing is done in the cloud. However, as in the smart-home market, there is a willingness to bring intelligence to the edge, implying the need for more powerful, more expensive computing. Yole therefore foresees a fairly strong increase in infotainment computing revenue, from $18M in 2018 to $768M in 2028.
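To put these projections in perspective, the compound annual growth rates they imply can be computed directly from the quoted figures. The arithmetic below is a quick sketch, not part of Yole’s report:

```python
# Back-of-the-envelope growth rates implied by the revenue figures above.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two revenue points."""
    return (end_value / start_value) ** (1 / years) - 1

segments = {
    # name: (start $M, end $M, number of years)
    "robotic vehicles": (156, 9000, 10),  # 2018 -> 2028
    "ADAS":             (63,  3700, 9),   # 2019 -> 2028
    "infotainment":     (18,  768,  10),  # 2018 -> 2028
}

for name, (start, end, years) in segments.items():
    print(f"{name}: ~{cagr(start, end, years):.0%} per year")
# robotic vehicles: ~50% per year
# ADAS: ~57% per year
# infotainment: ~46% per year
```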

For the full picture, all of these numbers are laid out in Figure 4.

AI computing for automotive - Courtesy of Yole Développement, 2019
Figure 4

Who and where?

The stakes are high: the first company on the road with technology that is mature in terms of safety, autonomy, and service features will inevitably take the bulk of the market. Today, Google’s Waymo has a considerable lead both technologically and at the service level – its first cars are already on the road, and a handful of users are already enjoying its services. On the ADAS side, Audi launches its first level 3 car this year (the 2019 Audi A8), while behind Audi the majority of OEMs are offering level 2+ features and planning to release their high-end level 3 models this year and next. For safety and marketing reasons, some OEMs, including Ford, Volvo, and Toyota, have decided to skip level 3 entirely and focus directly on level 4, with results expected around 2025. Is this right or wrong? That debate deserves its own article. In infotainment, there are only a few players right now: Sony’s SoftKinetic leads the pack in gesture recognition, and Google, with its vast experience, leverages its innovative technologies in the speech recognition segment.

From sensors to services, the ecosystem is diversified, and rich, powerful players are in place (Figure 5). A quick look at the computing side shows two giants leading the race: Intel’s Mobileye for the ADAS market, and NVIDIA for the robotic vehicle market. Their solutions are powerful, and both also offer extensive software stacks dedicated to processing AI and computer-vision algorithms and adapted to the automotive ecosystem. However, because of the automotive world’s slower design cycles, other competitors like Renesas and Xilinx are close behind.

Automotive AI - Value chain and interactions - Courtesy of Yole Développement, 2019
Figure 5

Time to conclude. The autonomous driving revolution is fast-paced and already road-worthy, but it is also full of obstacles, and it will not happen overnight. The stakes are huge, with the prize being access to a market worth tens of billions of dollars. AI and its associated computing and sensing fields will play the crucial role of catalyst for companies wishing to access this market. Now, without rereading the whole article, how many times do you think you have seen Google? Six times, actually, because Google has a hand in everything (Figure 6). And it will surely be one of the winners: first at home, soon first in cars. In the end, is getting a driver’s license, or not, really what matters?

Google - The data winner  - Courtesy of Yole Développement, 2019
Figure 6

About the author:

As a Technology & Market Analyst, Yohann Tschudi, PhD, is a member of the Semiconductor & Software division at Yole Développement (Yole). Yohann works daily with Yole’s analysts to identify, understand, and analyze the role of software within semiconductor products, from machine code to the highest-level algorithms. Market segments analyzed by Yohann include big-data analysis algorithms, deep/machine learning, and genetic algorithms, all stemming from artificial intelligence (AI) technologies.

After his thesis in particle physics at CERN (Geneva, Switzerland), Yohann developed dedicated software for fluid mechanics and thermodynamics applications. He then spent two years at the University of Miami (FL, United States) as a research scientist in the radiation oncology department, where he was involved in automatic cancer detection and characterization projects using AI methods on Magnetic Resonance Imaging (MRI) images. During his research career, Yohann authored and co-authored more than 10 papers.

Yohann has a PhD in High Energy Physics and a master’s degree in Physical Sciences from Claude Bernard University (Lyon, France).

Related webcast:

Artificial intelligence shortens the path to autonomy and brings the home into the car – Speakers: Yole Développement and Xilinx

Slowly but surely, artificial intelligence (AI) is invading more and more markets, and thus each person’s daily movements. At home, AI innovations like “Ok, Google” and “Alexa” are like members of the family, and now these systems are entering automobiles too, which will soon drive you to work fully autonomously. In fact, autonomous driving and infotainment are the two main automotive segments where AI has a huge impact…

Related report:

Artificial Intelligence Computing for Automotive report, Yole Développement, 2019

Artificial Intelligence for automotive: why you should care
