IBM promotes an Open Memory Interface standard for server processors 

With its new serial Open Memory Interface (OMI) standard, IBM calls for a rethinking of how memory subsystems are structured in high-performance processors. The idea itself is not entirely new: there have been earlier attempts to replace the traditional parallel memory interface with a serial one. Take, for instance, the FB-DIMM standard, which did not last long due to the high power consumption and heat dissipation of the buffer chip on each memory module. A similar scheme is currently used by IBM's POWER8 and POWER9 SU processors.

The memory controller in these chips is arranged differently from that of a typical Intel Xeon or AMD EPYC: it contains no PHY. Instead, a dedicated Centaur memory buffer chip, connected to the processor over a 28.8 GB/s serial link, talks to the DIMMs directly.

POWER9 chips have eight such controllers. This provides a gain in bandwidth (230 GB/s combined) and saves die area, reducing the cost per unit of capacity. The Centaur buffer chips add about 10 nanoseconds of latency, which is not critical for memory reads and writes and is partially offset by the L4 cache.
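As a quick sanity check, the quoted combined figure follows directly from the per-link speed and the number of controllers:

```python
# Back-of-the-envelope check of the figures above: eight Centaur links,
# each running at 28.8 GB/s, yield the quoted combined bandwidth.
CENTAUR_LINK_GBPS = 28.8   # GB/s per serial link (from the article)
NUM_CONTROLLERS = 8        # memory controllers on a POWER9 SU chip

combined = CENTAUR_LINK_GBPS * NUM_CONTROLLERS
print(f"Combined bandwidth: {combined:.1f} GB/s")  # 230.4 GB/s, i.e. ~230 GB/s
```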

Unlike Centaur, IBM's recent development, OMI, uses an open protocol similar to the memory-semantics subset of the OpenCAPI 3.1 standard and relies on the 25 Gb/s BlueLink bus that already drives NVLink and OpenCAPI in current POWER chips.

There are other changes in terms of implementation as well. OMI is much more compact and lightweight, which makes the chip more area-efficient and yields a clear bandwidth-per-mm² advantage: serial access reduces the pin count from roughly 300 to 75, so bandwidth density increases dramatically. In this scheme the CPU sends only simple load and store address requests, while ordering requirements, memory organization, conflicts, and electrical standards are all abstracted away. In addition, compared to Centaur's latency figures, the new OMI controller adds less than 4 nanoseconds over a standard integrated LRDIMM DDR controller. It can also include an additional cache for better capacity and performance.
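The bandwidth-density argument can be made concrete with the pin counts given above. Assuming, for the sake of illustration, equal bandwidth over both interfaces, cutting the pins from roughly 300 to 75 quadruples bandwidth per pin:

```python
# Rough illustration of the bandwidth-per-mm² argument. Pin counts are the
# article's figures; the equal-bandwidth assumption is a simplification for
# illustration only.
DDR_PINS = 300   # approximate pin count of a parallel DDR interface
OMI_PINS = 75    # pin count of the serial OMI interface

density_gain = DDR_PINS / OMI_PINS
print(f"Bandwidth density gain at equal bandwidth: {density_gain:.0f}x")  # 4x
```

In practice the serial link also runs at a higher per-pin data rate, so the real-world density gain is larger still.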

Beyond reducing the pin count, this design allows almost any memory type to be used behind the buffer, including DDR, GDDR, or NVDIMMs. The main goal at present is DDR5 support. The OMI interface is unified, and the new slots are by default compatible with any module that meets the standard.

The OMI buffer chip can be located on the system board or on the memory module itself, with the latter variant being the core of the new standard: 84-pin Differential Dual-Inline Memory Modules (DDIMMs) with capacities ranging from 16 GiB to 256 GiB. The new modules will conform to the DDR4 and draft JEDEC DDR5 standards.

The POWER9 AIO has eight OMI channels with a data rate of 25 GT/s, allowing a peak theoretical memory bandwidth of 650 GB/s. The new chip also introduces an upgraded Nvidia NVLink interface as well as support for OpenCAPI 4.0.

The new IBM POWER9 processors with advanced I/O capabilities will begin shipping next year. The package will also include an OMI-to-DDR4 buffer chip with a peak throughput of 410 GB/s, noticeably below what the processor itself can handle. This leaves headroom for future upgrades of POWER9 AIO systems, presumably by replacing the memory modules with more capable ones.
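The headroom mentioned above is easy to quantify from the article's figures:

```python
# The DDR4 buffer's 410 GB/s peak vs. the processor's 650 GB/s leaves room
# for future, faster memory modules (both figures from the article).
CPU_PEAK_GBPS = 650     # POWER9 AIO peak theoretical memory bandwidth
BUFFER_PEAK_GBPS = 410  # OMI-to-DDR4 buffer chip peak throughput

headroom = CPU_PEAK_GBPS - BUFFER_PEAK_GBPS
print(f"Unused headroom: {headroom} GB/s "
      f"({headroom / CPU_PEAK_GBPS:.0%} of the processor's peak)")  # 240 GB/s (37%)
```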

The forthcoming POWER10 processor is expected only in 2021, by which time the OMI DIMM standard should have become the mainstream one for multiprocessor systems. In addition, IBM is currently preparing new OpenCAPI versions that are not tied to the POWER architecture, which will open the way to OMI for other vendors as well. With POWER10, IBM will boost the BlueLink ports to 32 Gb/s and 50 Gb/s, allowing them to compete with PCIe 5.0, which POWER10 will also support.
