Can metamaterials revolutionize optical computing?

A new approach to optical computing using metamaterials could result in power-efficient AI inference accelerators for the data center, Neurophos CEO Patrick Bowen told EE Times. Optical computing based on silicon photonics has been commercialized by several AI accelerator startups in recent years, but the technology has yet to take off. “From my perspective, [other companies] were running toward a brick wall with optical compute, and that’s why most of them have either failed or pivoted,” Bowen said. “There’s a lot of disagreement about why they’ve failed or pivoted, but my take is really centered on the scalability of optical processors.” Bowen pointed out that the components used to build optical compute chips are relatively large—a Mach-Zehnder interferometer (MZI) based on a standard foundry process design kit (PDK) might be 200 × 20 microns—which he says severely limits compute density.

“It’s worse than it sounds because compute-in-memory technology is all about the scaling,” he said. “It can really only save on memory accesses if it can fit the entire matrix-matrix multiply inside [the chip]—many silicon photonic compute arrays that ended up getting built were much smaller than that, so even from a compute-in-memory standpoint, they weren’t able to beat their digital competitors.”

Fitting a whole matrix in one chip shrinks the memory-access bottleneck. Smaller compute arrays mean large matrices can't be processed in one go: they must be broken into chunks, with partial results shuttled to and from memory at intermediate stages, so more memory accesses are required.
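To illustrate the overhead described above, here is a minimal sketch (illustrative only; the matrix and array sizes are hypothetical, and real accelerators use more sophisticated tiling schemes) counting the intermediate partial-result write-backs that appear once a matrix no longer fits the compute array:

```python
# Illustrative sketch: how chunking a matrix multiply inflates memory
# traffic. All sizes are hypothetical.

def partial_result_transfers(matrix_dim: int, array_dim: int) -> int:
    """Count partial-result tiles written back to memory when an
    (N x N) x (N x N) multiply runs on an (array_dim x array_dim)
    compute array using simple square tiling."""
    tiles_per_side = -(-matrix_dim // array_dim)  # ceiling division
    # Each output tile accumulates one partial sum per K-tile; every
    # accumulation except the last round-trips through memory.
    output_tiles = tiles_per_side ** 2
    rounds = tiles_per_side
    return output_tiles * (rounds - 1)

# A 4096-wide matrix on an array large enough to hold it whole:
# no intermediate write-backs.
print(partial_result_transfers(4096, 4096))  # 0
# The same matrix on a 256-wide array: 256 output tiles x 15 extra
# accumulation rounds = 3840 partial-result transfers.
print(partial_result_transfers(4096, 256))   # 3840
```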

The inability to scale to large compute arrays, because of the relatively large size of MZIs, also means optical processors can't take full advantage of the energy efficiency or the speed promised by analog computing, Bowen said, as both depend on scale.

Silicon photonics also has issues with even power distribution across big chips, he said.

“If you design some sort of 2D array such that the propagation of the light through it implements a matrix multiplication, there will always be multiple scattering between the modulators representing the matrix elements, and absorption, which is practically even worse,” Bowen said. “The end result will only be approximately a matrix multiplication, which severely limits both the size of the array and the effective bit precision.”

Neurophos is using metamaterials to tackle these issues.

Metamaterials and metasurfaces

Metamaterials, invented at Duke University 20 years ago, are arrays of tiny devices that can interact with incident electromagnetic radiation. Metamaterials have been used to create negative-refractive-index materials (holding the potential for more compact lenses) and even an invisibility cloak (where the metamaterial bends light around the cloaked object without any reflection).

Several startups are already using metamaterials in a variety of applications. Lumotive is using the concept in LiDAR antennas without moving parts; Kymeta has developed a flat-panel satellite communications antenna; Echodyne’s solid-state radar antenna is used in UAVs and anti-drone systems; and Pivotal Commware uses metamaterials for holographic beamforming in 5G communications.

Neurophos is the first startup to apply metamaterials to optical computing and optical modulators, the components used in optical computing chips. The company’s metamaterials-based optical modulators are 8,000× smaller than silicon-photonics–based MZIs based on standard foundry PDKs.

“This means when we are talking about making an in-memory compute array, we are talking about megabytes, not kilobytes,” Bowen said.

Neurophos’s resonator-based modulator is much smaller than the wavelength of light, such that light responds to the device in the same way it would respond to an atom. Optical components like this are referred to as “meta atoms,” with their shape dictating the optical response. Metasurfaces use arrays of meta atoms to make surfaces that appear to have unusual material properties. Neurophos’s metasurface can control both the phase and amplitude of incident light.

While existing technologies like spatial light modulators are well understood, their pixels are several microns across, take several milliseconds to switch and generally control either phase or amplitude, not both.

“Reaching that lambda over two [half-wavelength] sampling point is very important from an optics perspective,” Bowen said. “There are electromagnetic uniqueness theorems that tell you that if you can sample things at lambda over two, you can basically do whatever you would like to with the electromagnetic field.”

Existing designs like micro-ring resonators have a high quality factor (they can achieve a large modulation contrast with a small voltage) but require local heating for calibration, as they are extremely temperature-sensitive.

“Our metamaterials-based approach uses deeply sub-wavelength resonators that are aggregated together, each of which has a relatively low quality factor such that they aren’t temperature-sensitive, but collectively, they provide good dynamic range,” he said.

2.5D packaging

Neurophos plans to package its metasurface die next to its silicon photonics die in a 2.5D package (the two are made on different process technologies; the metasurface die is on standard CMOS, compatible with 28 nm and below, while silicon photonics requires a specialist process). There will also be a third die in the package: a digital ASIC to control the system, including SRAM and a vector processing unit to handle things like non-linear activation functions.

Neurophos has invented a new way of getting data in and out of the metasurface die: An optical projection system projects light into free space from the silicon photonics die, where it is reflected by mirrors onto the surface of the adjacent metasurface die and vice versa.

“Using the third dimension is the best way to get even power distribution across the metasurface array and really the only way to do it that gives a single-scattering-event type of optical interaction,” Bowen said.

This is novel, to say the least.

“It sounds crazy because you’re talking about free space optics, and [it sounds like] you’ll have terrible issues with signal-to-noise ratio and other things,” Bowen said. “But we think we have found really good solutions to all these problems using an optical projection system. … This ends up circumventing the power distribution problem—you can evenly illuminate an entire several-thousand-by-several-thousand [compute element] array.”

Silicon photonics’ weakness is density, while its strength is speed—but metamaterials offer the opposite combination, excelling in density while offering modest speeds. Bowen said the switching speed of the company’s metasurface is on the order of 1 MHz. Neurophos hopes to take advantage of the strengths of both technologies by combining them.

“It’s better to have the metasurface store one of your two matrices, then have the silicon photonics chip provide 1D input vectors … and whip through them at lightning speed to get the entire workload done,” he said. “That way you’re combining the strengths—the speed of silicon photonics with the density of the metasurface.”
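The weight-stationary scheme Bowen describes can be sketched as follows (a simplified illustration, not Neurophos's implementation; the matrix and batch sizes are hypothetical): the slow-switching metasurface holds one matrix fixed, while the fast photonic front end streams input vectors through it, so the stored matrix never has to be reloaded between vectors.

```python
import numpy as np

# Sketch of a weight-stationary optical matrix-vector scheme.
# W stands in for the matrix held (near-)statically on the
# metasurface (~1 MHz updates); `inputs` stands in for the 1D
# vectors streamed in by the silicon photonics chip at much
# higher modulation rates. Sizes are hypothetical.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))          # stored once on the metasurface
inputs = rng.standard_normal((10_000, 512))  # streamed at photonic speed

# Each streamed vector x yields one matrix-vector product W @ x;
# the whole workload reuses the same stored weights.
outputs = inputs @ W.T

assert outputs.shape == (10_000, 512)
assert np.allclose(outputs[0], W @ inputs[0])
```

The design point is the division of labor: the dense but slow element stores the weights, and the sparse but fast element provides the throughput.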

Neurophos is a spinout from Duke University and metamaterials incubator Metacept. The company has raised a seed round of $7.2 million to develop its metamaterials-based optical AI accelerator chip and has joined the Silicon Catalyst incubator program. Neurophos’s product will be a data center inference accelerator (8-bit precision), targeting applications including LLMs.

The company plans to have test chips by summer, Bowen noted.