Two requirements are being addressed: an 800-gigabit dense wavelength division multiplexing (DWDM) interface with an 80-120km span for data centre interconnect, and an unamplified single-channel fixed-wavelength 2-10km coherent link for campuses.
The need for 800 gigabit
“When we hit that 90 per cent mark on 400ZR, we had people stand up and say: ‘We are ready to start 800ZR’,” says Karl Gass, optical vice-chair of the OIF’s physical link layer working group.
But completing the work has taken time. “The first 90 per cent of a project takes about half the time and the last 10 per cent takes the other half,” says Gass.
So only in mid-2020 did the OIF’s attention turn to the new standard, starting with determining the use cases.
“For some time there has been a subset of folks that felt this was the next logical step after 400ZR and I think 800 gigabit in general has been building momentum,” says Tad Hofmeister, technical lead for optical networking technologies at Google and OIF vice president. “That has helped reach the critical mass to take this formal next step.”
Recent developments for 800 gigabit include the maturing of 800-gigabit pluggable multi-source agreements (MSAs), the emergence of 25.6-terabit Ethernet switch chips, and network processing silicon using 100-gigabit electrical interfaces.
The IEEE has also started work on the next Ethernet standard after 400 Gigabit Ethernet (GbE).
With 400ZR, it took time to develop the required coherent digital signal processors (DSPs) and the optical components that could operate at the required symbol rate, says Hofmeister, so it is the right time to start the 800-gigabit coherent work.
The OIF’s 400ZR specification is known for its 80-120km DWDM interface but it also specified an unamplified single-channel fixed-wavelength 2-10km link.
“One reason there wasn’t nearly as much attention paid to that application was that, at 400 gigabit, there were direct-detect solutions that go to 10km: LR8 and now LR4,” says Hofmeister.
For 800 gigabit, however, it is unclear what reach a direct-detect solution could achieve, hence the interest in pursuing a coherent solution, says Hofmeister.
The two 800-gigabit applications are independent but the goal is to make the two designs as common as possible in terms of the components, DSPs and modules.
“The 2-10km application is going to be more cost-sensitive so there may be opportunities to pare down the specs,” says Hofmeister.
A tunable laser is not needed for the 2-10km link, significantly reducing the module cost.
“In that case, somebody may choose to develop a modulator with a fixed laser that only meets that 2-10km application,” says Hofmeister. “Yet internally it may have the same DSP and the same transmitter optical subassembly and optical receiver as the DWDM variant.”
As for demand for each of the applications, it is too early to say, notes Hofmeister.
The OIF says its latest work will be similar to what was done for 400ZR in that the OIF will not specify the modules to be used.
400ZR uses the QSFP-DD and OSFP pluggable form factors while 800-gigabit coherent will use the OSFP and QSFP-DD800 modules.
The client-side rates supported will be 8×100-gigabit, 2×400-gigabit and 800-gigabit while the optical output will be a single 800-gigabit wavelength.
400ZR uses a 64 gigabaud (GBd) symbol rate and 16-ary quadrature amplitude modulation (16-QAM). The 800-gigabit coherent interfaces won’t necessarily double the symbol rate used for 400ZR.
Instead, the symbol rate may reside between 64GBd and 128GBd which will determine the modulation scheme used. The choice will depend on the state of the component technologies when the decision is made.
“This will be one of the early steps of the OIF discussion,” says Tom Williams, vice president of marketing at Acacia Communications. “My personal opinion is that doubling the baud rate is likely because it would be simpler from a link budget perspective.”
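The trade-off under discussion can be sketched with a back-of-the-envelope calculation: the raw line rate is the symbol rate times the bits per symbol times two polarisations, minus whatever fraction is spent on FEC and framing. The 15 per cent overhead and the candidate operating points below are illustrative assumptions, not OIF-specified parameters.

```python
# Sketch of the symbol-rate / modulation trade-off for an 800-gigabit
# coherent line, assuming dual-polarisation transmission and an
# illustrative 15 per cent FEC-plus-framing overhead.

POLARISATIONS = 2
OVERHEAD = 0.15  # assumed overhead fraction, for illustration only


def raw_line_rate_gbps(symbol_rate_gbd: float, bits_per_symbol: int) -> float:
    """Raw bit rate carried by the optical channel, in Gb/s."""
    return symbol_rate_gbd * bits_per_symbol * POLARISATIONS


def net_rate_gbps(symbol_rate_gbd: float, bits_per_symbol: int) -> float:
    """Payload rate remaining after the assumed overhead."""
    return raw_line_rate_gbps(symbol_rate_gbd, bits_per_symbol) * (1 - OVERHEAD)


# Candidate operating points between 64GBd and 128GBd:
for baud, bits, label in [(64, 4, "16-QAM (the 400ZR point)"),
                          (96, 6, "64-QAM"),
                          (128, 4, "16-QAM")]:
    print(f"{baud}GBd {label}: raw {raw_line_rate_gbps(baud, bits):.0f} Gb/s, "
          f"net ~{net_rate_gbps(baud, bits):.0f} Gb/s")
```

The calculation shows why doubling the symbol rate is attractive: 128GBd with 16-QAM comfortably clears 800 Gb/s net without moving to a higher-order, less noise-tolerant constellation.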
The forward error correction (FEC) scheme also needs to be determined. The coding gain required depends on the symbol rate and modulation scheme used: the higher the symbol rate for a given data rate, the lower-order the modulation and the less powerful the FEC needs to be for a given reach.
Also, the more complex the FEC scheme, the higher the latency it introduces.
“On latency, it’s not as simple as higher gain means higher latency; the class of algorithm chosen can have a bigger effect,” says Acacia’s Williams. “Of course, the 800ZR application is very power-sensitive as well, so these decisions need to be discussed and worked out in the OIF.”
Gass says the DSP power consumption is one of the concerns with a higher-gain FEC.
The 2-10km 800-gigabit campus link will also require FEC but not as high-gain as the data centre interconnect interface.
“Most likely a higher-gain FEC will be needed than what Ethernet includes, even for the 2-10km application,” says Hofmeister. “If a different scheme is used, we could reduce latency and power consumption for the 2-10km application. For the shorter distance, the latency of the FEC has a larger impact as there is less latency on the fibre path.”
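Hofmeister's point about short links can be quantified: light in fibre propagates at roughly 5 microseconds per kilometre, so on a 2km campus link even a few microseconds of FEC decoding delay is a large fraction of the total latency. The 3µs FEC figure below is an assumption for illustration, not a measured value.

```python
# Why FEC latency matters more on short links: fibre propagation delay
# is roughly 5 microseconds per kilometre (refractive index ~1.5), so
# a fixed FEC decoding delay is a much larger share of the total on a
# 2km campus link than on a 120km data centre interconnect.

FIBRE_DELAY_US_PER_KM = 5.0  # approximate one-way delay in standard fibre

ASSUMED_FEC_LATENCY_US = 3.0  # illustrative assumption, not a measured value


def fibre_latency_us(km: float) -> float:
    """One-way propagation delay over the fibre, in microseconds."""
    return km * FIBRE_DELAY_US_PER_KM


for km in (2, 10, 120):
    fibre = fibre_latency_us(km)
    share = ASSUMED_FEC_LATENCY_US / (fibre + ASSUMED_FEC_LATENCY_US)
    print(f"{km:>3}km link: fibre {fibre:.0f}µs, FEC share ~{share:.0%}")
```

Under these assumptions the FEC accounts for around a fifth of the total latency at 2km but well under one per cent at 120km, which is why a lighter FEC is attractive for the campus application.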
The symbol rate chosen also affects channel spacing.
“For 400ZR, the original effort used 100GHz channels, but there is active work in IEEE and OIF to support 75GHz channels,” says Williams. “Most people are assuming that 800ZR will utilise 150GHz channels.”
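The link between symbol rate and channel spacing follows from the signal's spectral width, which is roughly the symbol rate times one plus the pulse-shaping roll-off factor. The 0.05 roll-off used below is an assumed figure for illustration, not an OIF value.

```python
# Rough spectral-width check relating symbol rate to DWDM grid spacing,
# assuming a raised-cosine pulse shape with an illustrative roll-off.

ROLLOFF = 0.05  # assumed pulse-shaping roll-off factor


def occupied_bandwidth_ghz(symbol_rate_gbd: float) -> float:
    """Approximate occupied optical bandwidth in GHz."""
    return symbol_rate_gbd * (1 + ROLLOFF)


def fits_grid(symbol_rate_gbd: float, grid_ghz: float) -> bool:
    """True if the signal fits within the given channel spacing."""
    return occupied_bandwidth_ghz(symbol_rate_gbd) <= grid_ghz


print(fits_grid(64, 75))    # 400ZR's ~64GBd signal in a 75GHz channel
print(fits_grid(128, 150))  # a doubled symbol rate in a 150GHz channel
print(fits_grid(128, 100))  # too wide for a 100GHz channel
```

This is why a doubled symbol rate points toward 150GHz channels: a ~128GBd signal occupies roughly 134GHz and cannot fit the 100GHz grid used by the original 400ZR effort.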
The OIF has not given a date as to when the 800-gigabit interfaces will be completed.
It took over three years to complete the 400ZR specification work, which suggests completion in late 2023 at the earliest.
But Gass says OIF members now have more experience, including with issues such as interoperability.
“We have a maintenance effort for 400ZR to add the 75GHz grid spacing, but are also updating performance parameters that weren’t normative in the original 400ZR release,” says Gass.