
Depth sensing takes machine vision into another dimension

By Anne-Françoise Pelé for EE Times Europe – What can human vision do that computer vision can’t? Humans perceive the world in three dimensions, and depth sensors are key to enabling next-level machine vision and unlocking autonomy.

More and more machines are endowed with the ability to sense, act, and interact with their environment, supported by recent advances in sensing technologies. EE Times Europe scanned the 3D vision landscape to get a clearer picture of the market drivers, the opportunities and challenges for component suppliers, and the technologies emerging to enable higher levels of depth sensitivity.

Going deep

At the module level, the 3D sensing market is currently valued at US$6.8 billion and is forecast to grow at a 15% CAGR to US$15 billion by 2026, according to Yole Développement.

“In the mobile and consumer markets, which are the driving force, there is a temporary hiatus in growth due to the Huawei ban as well as the fact that the Android camp has de facto abandoned 3D sensing,” Pierre Cambou, principal analyst in the photonics and sensing division at Yole Développement, told EE Times Europe. On the other hand, he added, “Apple is accelerating the trend by including LiDAR sensors in iPads and iPhones.”

Also accelerating is the use of 3D sensing in automotive applications, said Cambou. LiDAR sensors and in-cabin 3D cameras are being adopted, and “we are very optimistic about 3D sensing in the automotive market, which should quadruple in the next five years.”

At present, the prevalent 3D imaging technologies are stereo vision, structured light, and time of flight (ToF).

Stereo vision has been very strong in long-range sensing applications beyond 10 meters, such as the obstacle-sensing cameras on consumer drones from companies like DJI and the forward-looking ADAS cameras found in Mercedes, Jaguar, and Subaru models, said Cambou.
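
The principle behind stereo vision is triangulation: a scene point appears horizontally shifted between the left and right images (the disparity), and depth follows from the camera geometry. As a minimal Python sketch of that relationship, with a hypothetical focal length, baseline, and disparity map chosen purely for illustration:

    import numpy as np

    focal_length_px = 1400.0  # focal length in pixels (assumed value)
    baseline_m = 0.12         # spacing between the two cameras, in meters (assumed)

    # Per-pixel horizontal shift between left and right images, in pixels,
    # as a block-matching stage would produce it (synthetic values).
    disparity_px = np.array([[16.0, 8.0],
                             [4.0,  2.0]])

    # Classic stereo triangulation: depth is inversely proportional to disparity.
    depth_m = focal_length_px * baseline_m / disparity_px
    print(depth_m)  # small disparities correspond to large depths

Because depth varies as 1/disparity, precision falls off with distance, which is why long-range automotive stereo rigs pair wide baselines with high-resolution sensors.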

Structured light has been the preferred approach for short-range sensing below 1 meter, most prominently in Apple’s iPhone for front-facing Face ID but also in industrial applications addressed by companies such as Photoneo.
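
Structured-light systems project a known pattern onto the scene and triangulate depth from how the surface deforms it. One widely used variant is three-step phase-shift profilometry, sketched below on synthetic fringe images (the phase map and intensity values are made up for illustration, not taken from any product named above):

    import numpy as np

    # Ground-truth phase map the projector encodes (synthetic example).
    phi_true = np.array([[0.5, 1.0],
                         [1.5, 2.0]])
    A, B = 0.5, 0.4  # pattern offset and modulation amplitude (assumed)

    # Three camera captures of the fringe pattern, phase-shifted by 120 degrees.
    I1 = A + B * np.cos(phi_true - 2 * np.pi / 3)
    I2 = A + B * np.cos(phi_true)
    I3 = A + B * np.cos(phi_true + 2 * np.pi / 3)

    # Standard three-step phase recovery; the recovered phase encodes how
    # the pattern was bent by the surface, from which depth is triangulated
    # via the projector-camera geometry.
    phi = np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
    print(np.allclose(phi, phi_true))  # True, up to 2*pi phase wrapping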

Time-of-flight systems are mainly used at medium ranges and currently come in two flavors, Cambou said. Indirect ToF appeared on the rear side of Android phones from vendors such as Huawei, Samsung, and LG in 2019 and 2020, mainly for photographic purposes. Direct ToF is used by Apple in its most advanced smartphones. “Direct time of flight is the technology being used for LiDARs [for example, by Velodyne, Innoviz, Ibeo, Hesai, and RoboSense], which may eventually use a matrix-shaped sensor on the receiver side,” said Cambou. “It is gaining ground due to the excitement around autonomy.”
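
The distinction between the two flavors comes down to what is measured: direct ToF times the round trip of a light pulse, while indirect ToF infers distance from the phase shift of amplitude-modulated light. A rough sketch of the two conversions (the 20-ns round trip and 100-MHz modulation frequency are assumed examples, not figures from the article):

    from math import pi

    C = 299_792_458.0  # speed of light, m/s

    def depth_direct(round_trip_s: float) -> float:
        """Direct ToF: distance from the measured pulse round-trip time."""
        return C * round_trip_s / 2.0

    def depth_indirect(phase_rad: float, mod_freq_hz: float) -> float:
        """Indirect ToF: distance from the phase shift at the modulation frequency."""
        return C * phase_rad / (4.0 * pi * mod_freq_hz)

    print(depth_direct(20e-9))            # a 20-ns round trip is ~3.0 m
    print(depth_indirect(pi / 2, 100e6))  # ~0.37 m at 100 MHz modulation

Indirect ToF phase wraps every C / (2 * f_mod) meters (1.5 m at 100 MHz), so practical systems combine modulation frequencies to extend range; direct ToF avoids that ambiguity, which helps it scale to the distances LiDAR requires.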
