
AI’s impact on 3D packaging: heterogeneous integration

An article by Santosh Kumar, Yole Développement, for Chip Scale Review – Artificial intelligence (AI) has been in development for more than fifty years, but it has recently emerged as one of the key drivers of semiconductor growth, fueled by smartphones, personal assistants, social media and smart automotive applications. AI requires a range of computing hardware and high-end memory, and because of its requirements for high bandwidth, low latency and low power consumption, it has created opportunities for the advanced packaging business.

AI technology trends

AI is now widespread and has become an integral part of the technology industry. Whenever a machine mimics human cognitive functions, it can be described as AI. Within the AI field, a distinction is increasingly drawn between different types of machine learning. Machine learning is the subset of AI comprising statistical techniques that enable machines to improve at tasks with experience. The first goal of machine learning is to give a machine the ability to learn without being explicitly programmed; the next is to allow it to assess the data it collects and make predictions. Besides academic research and military programs, there are flagship machine learning applications aimed at consumers. The most important are voice identification and language processing, used in intelligent personal assistants (e.g., Siri, Cortana, Alexa), and image recognition for autonomous driving.

Several algorithmic approaches enable the enhancement and acceleration of machine learning; deep learning is one of them and is attracting growing interest. Deep learning is the subset of machine learning comprising algorithms that allow software to train itself to perform tasks, like speech and image recognition, by exposing multilayered neural networks to vast amounts of data. These new ways of processing heavy data, like video and photos, were made possible by the availability of efficient data computing hardware, such as new high-bandwidth memories, graphics processing units (GPUs), central processing units (CPUs), application-specific integrated circuits (ASICs), and field-programmable gate arrays (FPGAs).
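To make the term concrete, the sketch below shows what a "multilayered neural network" can look like in code. It is purely illustrative and not taken from the article; it uses the PyTorch library, and the layer sizes are arbitrary placeholders.

    import torch.nn as nn

    # A tiny multilayered (deep) neural network for image recognition:
    # several fully connected layers stacked on top of each other.
    model = nn.Sequential(
        nn.Flatten(),          # turn a 28x28-pixel image into a 784-element vector
        nn.Linear(784, 128),   # first hidden layer
        nn.ReLU(),
        nn.Linear(128, 64),    # second hidden layer
        nn.ReLU(),
        nn.Linear(64, 10),     # output layer: one score per class
    )

The "depth" of deep learning simply refers to stacking many such layers; the network's millions of weights are what get adjusted during training.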

Deep learning comprises two phases: training and inference. In AI, training means teaching a machine to recognize objects and sounds. The training phase requires huge computing power and can take a very long time (hours, days, or months) depending on the required precision. Currently, most training is done in the cloud, where the computing capabilities match such operations. Nevertheless, some training can still be done at the edge. An example is the face detection system on a phone, where a one-off training of a couple of seconds is enough to complete the neural network model so that it recognizes the face of the phone's owner. Inference is the act of using the trained neural network with new data, on a device or a server, to identify something; it cannot happen without training. Inference can occur at the edge, using a network that has been simplified, compressed and optimized for runtime performance while giving similar prediction accuracy, or it can occur in the cloud. Systems on chip (SoCs) containing GPUs and a CPU are used to perform this computation at the edge (on a phone, for example). Inference requires less computational capability than training, as the heavy lifting was already done in the cloud…
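The two phases can be sketched in a few lines of code. The example below is a minimal illustration only, again using the PyTorch library; the random tensors stand in for a real labeled dataset and a real input, and none of the names come from the article.

    import torch
    import torch.nn as nn

    # Placeholder model and data (random tensors stand in for real images and labels).
    model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
    images = torch.randn(32, 784)            # a batch of 32 "images"
    labels = torch.randint(0, 10, (32,))     # their ground-truth classes

    # Training phase: repeatedly adjust the weights against labeled data.
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for step in range(100):                  # real training runs far longer
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)    # compare predictions to labels
        loss.backward()                          # compute gradients
        optimizer.step()                         # update the weights

    # Inference phase: apply the trained network to new, unseen data.
    model.eval()
    with torch.no_grad():                    # no gradients needed at inference
        new_image = torch.randn(1, 784)
        predicted_class = model(new_image).argmax(dim=1)

In practice, the training loop runs on powerful cloud hardware, while only the final forward pass (the last few lines) needs to execute on an edge device, which is why inference is so much less demanding than training.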
