LightTrends Newsletter


SC22 conference was full of contradictions

December 2022

A summary of the Supercomputing 2022 (SC22) event held in Dallas, Texas, November 13-18th, 2022.

High-performance computing may finally have its exascale machines. But increasing processing performance is now a huge challenge.

The last 14 years have seen a 1,000x increase in flops/s, but only a further 10x is expected this decade; the most powerful supercomputers will execute fewer than 10 exaflops by 2030.
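A quick back-of-envelope calculation makes the slowdown concrete (a sketch based on the figures above; the exact window boundaries are assumptions):

```python
# Compare annualized growth rates implied by the cited figures.
# Assumptions: 1,000x over the 14 years to 2022, ~10x over the 8 years to 2030.
past_annual = 1000 ** (1 / 14)   # annual growth factor, roughly 2008-2022
future_annual = 10 ** (1 / 8)    # annual growth factor, roughly 2022-2030

print(f"Historical growth: {past_annual:.2f}x per year")
print(f"Expected growth:   {future_annual:.2f}x per year")
```

In other words, the industry is going from roughly 1.6x per year to roughly 1.3x per year, compounded over a decade.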

Yet in the two weeks before Supercomputing 22, AMD and Intel announced their latest processor designs. These central processing units (CPUs) and graphics processing units (GPUs) are remarkable feats of system-in-package engineering, and they form the compute nodes that are linked at scale to build exascale systems.

Whereas the ratio of floating-point operations (flops) to data words moved was 1:1 in computers 30-40 years ago, it is now 100-200 flops per data word. Processing performance is now massively out of step with data movement.
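The consequence of this imbalance can be sketched with a simple roofline-style model: a kernel's attainable throughput is capped either by the processor's peak or by how fast data can be fed to it. All numbers below are illustrative assumptions, not figures from this note:

```python
def attainable_flops(peak_flops, bandwidth_words, flops_per_word):
    """Attainable throughput is the lower of the compute cap and the
    data-movement cap (a minimal roofline-style model)."""
    return min(peak_flops, bandwidth_words * flops_per_word)

peak = 100e12       # assumed peak: 100 Tflop/s
bandwidth = 1e12    # assumed memory system: 1e12 words/s

# A kernel doing 1 flop per word moved (the old 1:1 balance) is memory-bound,
# reaching only 1% of peak on this assumed machine:
print(attainable_flops(peak, bandwidth, 1))
# Only at ~100 flops per word moved does the kernel reach full peak:
print(attainable_flops(peak, bandwidth, 100))
```

On such a machine, any workload that cannot perform on the order of 100 flops per data word leaves most of the silicon idle, which is precisely the bottleneck that optical input-output is meant to relieve.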

Perhaps the biggest contradiction at the conference was the low profile given to photonics (the exhibition floor was a somewhat different story) despite computing’s marked slowdown.

Silicon photonics has long been touted as a key technology to overcome the input-output bottleneck of compute elements and enable new architectures for high-performance computing and in the data center. Yet leading high-performance computing vendors - HPE and the like - continue to use CPU/GPU nodes in various topologies connected with copper and active optical cables.

LightCounting will address this topic in a new edition of its “High Speed Cables, Embedded and Co-Packaged Optics” report, to be released on December 21st.

Silicon photonics continues to progress but is yet to be adopted for high-performance computing and server architectures. That said, SC22 hosted two silicon photonics firsts:

  • Professor Keren Bergman of Columbia University reported a working 5Tbit/s transmitter optical chiplet implemented using 80 channels and 3D packaging. The accompanying receiver chip is working and is being tested in the lab.
  • Ayar Labs demonstrated its 2Tbit/s TeraPHY chiplet in an end-to-end link, sending and receiving data.

During the panel discussion on high-performance computing and silicon photonics, Intel's Fabrizio Petrini addressed head-on why optics had such a low profile at the show. "The reality is there is a lot of skepticism about this technology. The adoption is not going to happen anytime soon," he said.

System designers will not see the implications until they embrace the technology, he said. But factors are aligning for change, and a transition point in how systems are built is approaching; the implications for system design and disaggregation are enormous.

Optical switching is another technology that has been on the fringes of the market for decades. LightCounting reported on Google’s announcement in the summer that it had been using photonic circuit switching in its data centers for several years.

At SC22, a start-up, Drut Technologies, demonstrated its interface card working with a photonic switch at the top of a SuperMicro server rack. The system allows the server’s CPUs to dynamically configure the resources they need (memory, GPUs), tailored to the workload.

“What we are building is a networking bypass, a secondary fabric,” said William Koss, CEO of Drut Technologies.

Drut uses a third party’s low-loss MEMS-based non-blocking optical switch that can be as large as 384x384 ports. The start-up has developed an input-output card with an FPGA that supports Drut’s fabric control software, signaling, and 4 QSFP optical modules for up to 4x100Gbit/s interfaces. Drut’s card supports PCIe over an optical interface.

The full version of this research note is available to subscribers at: https://www.lightcounting.com/login

Ready to connect with LightCounting?

Enabling effective decision-making based on a unique combination of quantitative and qualitative analysis.
Reach us at info@lightcounting.com

Contact Us