
AI is moving to the edge – but which company will benefit most from its added value?

Artificial Intelligence (AI) is a major trend across multiple applications, disrupting industries in each one. But this raises key questions. One concerns how the added value will be partitioned: will AI hardware or software firms benefit most?

Another is whether the hardware and semiconductor content of AI systems will be based in the cloud, in the system, or at the device level. Today there is a strong and established trend to reduce the amount of calculation done on cloud hardware and do it instead directly on user devices, which is referred to as ‘at the edge’. This trend is driven mainly by cost, but also by latency and data confidentiality. However, bringing calculation to device hardware must also respect important constraints: low power consumption, always-on operation, low latency and high performance.

Meanwhile, Moore’s law is slowing down. To further improve performance, it has become necessary to design hardware specialized for the software that runs on it. This is even truer for AI algorithms, which require billions of operations. While the weights of neurons in a huge network could be calculated with a graphics processing unit (GPU), accelerators, more commonly called neural engines, had to be created to make these calculations possible on portable devices. We moved from the decentralization of computing to centralization, especially with the smartphone explosion, and are now entering the specialization phase, as shown in figure 1. How will this trend unfold? What kind of hardware will it need? What new corresponding markets will emerge? The new report ‘Hardware and Software for AI 2018 – Consumer focus’ from Yole Développement answers these questions.
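
The kind of arithmetic such neural engines accelerate can be sketched in a few lines of Python: a single neuron evaluated with 8-bit integer weights and activations, the reduced-precision format commonly used for on-device inference. This is purely illustrative; the scale factors and values below are invented, not taken from any particular chip.

```python
# Illustrative sketch: one neuron evaluated with 8-bit quantized arithmetic,
# the integer format edge accelerators ("neural engines") typically favour
# over 32-bit floats to save power and silicon area.

def quantize(values, scale):
    """Map float values to int8 range [-128, 127] given a scale factor."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def neuron_int8(inputs, weights, in_scale, w_scale):
    """Integer multiply-accumulate, then rescale the sum back to float."""
    acc = sum(i * w for i, w in zip(inputs, weights))  # integer accumulator
    return acc * in_scale * w_scale

# Hypothetical input activations and trained weights (floats).
x_f = [0.5, -1.0, 0.25]
w_f = [0.8, 0.1, -0.4]
in_scale, w_scale = 0.01, 0.01

x_q = quantize(x_f, in_scale)   # [50, -100, 25]
w_q = quantize(w_f, w_scale)    # [80, 10, -40]

approx = neuron_int8(x_q, w_q, in_scale, w_scale)
exact = sum(i * w for i, w in zip(x_f, w_f))
print(f"int8 result {approx:.3f} vs float result {exact:.3f}")
```

The integer multiply-accumulate loop is the operation repeated billions of times per inference, which is why dedicating silicon to it pays off at the device level.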

The sound processor, the equivalent of the vision processor

Embedding heavy computing demands in the device is not new in itself. In imaging particularly, the obvious constraints of security and bandwidth availability made it necessary to design dedicated hardware. Vision Processors (VPs) first emerged in the automotive sector, through Mobileye’s EyeQ range for Advanced Driver Assistance Systems (ADAS). They were then embedded in smartphone Application Processors (APs) as Vision Processing Units (VPUs). Apple then added a neural engine on top, enabling facial recognition through on-device neural network inference. The VP/VPU market, which initially covered only automotive applications, will serve consumer products through drones and various smart-home systems such as surveillance cameras, future virtual personal assistants (VPAs), appliances and robot companions. The future of AI in imaging is clear, open and promising. Numerous actors are positioned, whether intellectual property companies like ARM, xPeri, Imagination and CEVA, fabless companies like Qualcomm, nVidia and Ambarella, or integrated device manufacturers like Intel.

What about audio? In audio, nothing comparable yet exists for AI, whether embedded in a system-on-chip (SoC) or as a standalone chip. The reasons lie more on the software side: historically, AI research focused first on imaging. Developing a program that is frugal with computing resources yet intelligent enough to be accepted by consumers is more complex in audio. Even today, Siri’s capabilities are not necessarily convincing, although the reasons lie on the network-training side, since Apple does not use private data, rather than with the software itself. Speech recognition embedded in devices is a clear goal today and will soon be achieved. At Yole, we estimate the first sound processor (SP) supporting AI will ship from players like Knowles by the end of 2019 or early 2020. The first Sound Processing Unit (SPU) will appear in 2020, from companies like Qualcomm, Samsung and perhaps Apple.
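
The always-on constraint that shapes such a sound processor can be illustrated with a toy sketch: a cheap per-frame energy gate runs continuously, and only frames loud enough to plausibly contain speech would be handed to a heavier recognition network. The frame length and threshold below are invented for illustration, not taken from any real device.

```python
# Toy sketch of an always-on audio front end: a low-cost energy gate runs
# continuously, and only frames above a threshold would wake the heavier
# speech-recognition network. Threshold and frame size are illustrative.

def frame_energy(frame):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in frame) / len(frame)

def active_frames(samples, frame_len=4, threshold=0.1):
    """Indices of frames loud enough to wake the recognizer."""
    frames = [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]
    return [i for i, f in enumerate(frames) if frame_energy(f) > threshold]

# Synthetic signal: near-silence, a speech-like burst, then near-silence.
signal = [0.01, -0.02, 0.0, 0.01,    # frame 0: near-silence
          0.6, -0.5, 0.7, -0.6,      # frame 1: speech-like burst
          0.02, 0.0, -0.01, 0.01]    # frame 2: near-silence

print(active_frames(signal))  # → [1]
```

The gate itself costs only a multiply and an add per sample, which is why it can stay powered while the expensive neural network sleeps.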

As shown in figure 2, we estimate that this will drive strong growth and have a definite impact on the semiconductor market. Our assumptions bring the hardware market for AI to almost $10 billion in 2020 and more than $23 billion in 2023. However, AI for audio and imaging will rely on different architectures: audio on specialized standalone chips, imaging on more embedded solutions. This difference is mainly due to the smartphone market. While imaging will be embedded in the AP, audio can be either embedded or computed on a dedicated chip, according to smartphone producers’ choices and their respective technological advances.
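
The growth rate implied by those rounded forecast figures is easy to check with a one-line compound-annual-growth-rate calculation:

```python
# Compound annual growth rate implied by the forecast:
# ~$10B in 2020 growing to ~$23B in 2023, i.e. over three years.

def cagr(start, end, years):
    """Constant yearly growth rate taking `start` to `end` in `years` years."""
    return (end / start) ** (1 / years) - 1

print(f"{cagr(10, 23, 3):.1%}")  # roughly 32% per year
```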

Enter the giants

Whether in audio or imaging, there is intense competition between centralized computing and decentralization using a dedicated chip close to the sensor. The latter approach greatly increases the product’s value, which Sony is exploiting. Of course, everything depends on the system and its needs, but value capture will not only happen at this level. Today the very big companies, Google, Apple, Facebook, Baidu and even Xiaomi, integrate hardware design teams to create the architectures for their own chips or units, as shown in figure 3. What is the reason for this? Essentially, they want to control the data chain from the sensor to the final information. Data today is the equivalent of a currency, and being able to process it in the desired way without an intermediary is very valuable. When that processing is embedded on the device it is even more valuable, because it is closer to the user. The better a company knows a user, the better it can sell them what they may or may not need. Critics will ask: are we empowering a set of Big Brothers?


Author:

Yohann Tschudi

As a Technology & Market Analyst, Dr. Yohann Tschudi is a member of the Semiconductor & Software division at Yole Développement (Yole). Yohann works daily with Yole’s analysts to identify, understand and analyze the role of software within semiconductor products, from machine code to the highest-level algorithms. Market segments especially analyzed by Yohann include big data analysis algorithms, deep/machine learning and genetic algorithms, all coming from Artificial Intelligence (AI) technologies.

After his thesis at CERN (Geneva, Switzerland) in particle physics, Yohann developed dedicated software for fluid mechanics and thermodynamics applications. He then served for two years at the University of Miami (FL, United States) as a research scientist in the radiation oncology department, where he was involved in automated cancer detection and characterization projects using AI methods based on Magnetic Resonance Imaging (MRI) images. During his research career, Yohann has authored and co-authored more than 10 relevant papers.

Yohann has a PhD in High Energy Physics and a master’s degree in Physical Sciences from Claude Bernard University (Lyon, France).


Source: Yole Développement


RELATED REPORTS

Hardware and Software for AI 2018 – Consumer focus

How will AI impact the semiconductor market through consumer applications? – Get more


From Image Processing to Deep Learning, Introduction to Hardware and Software

From algorithms included in the image processing pipeline to neural networks running in vision processors, focus on the evolution of hardware in vision systems and how software disrupts this domain – Get more
