SK Hynix has announced its first DDR5 chips and released pictures of modules built with them, signaling the beginning of the end for DDR4, the longest-lived generation of DDR memory. The next generation of DDR memory has been a hot topic in recent months as manufacturers showcase a wide variety of tests ahead of a full product launch, and platforms that support DDR5 are expected to arrive soon. DDR5 modules are expected to appear first on the enterprise side, and then make their way to the mainstream market.
DDR5 is the next stage of platform memory for the majority of mainstream computing platforms. According to the specifications released in July 2020, the modules bring the main voltage down from 1.2 V to 1.1 V, increase the maximum silicon die density by a factor of four, double the maximum data rate, double the burst length, and double the number of bank groups. Given the specifications provided by JEDEC, a module with a capacity of 128 GB running at DDR5-6400 is something that could be expected. RDIMMs and LRDIMMs should be able to go much higher, power permitting.
| | DDR5 | DDR4 |
|---|---|---|
| Max Die Density | 64 Gbit | 16 Gbit |
| Max UDIMM Size | 128 GB | 32 GB |
| Max Data Rate | 6.4 Gbps | 3.2 Gbps |
| Channels per Module | 2 | 1 |
| Total Width (Non-ECC) | 64-bit (2×32-bit) | 64-bit |
| Banks (Groups × Banks/Group) | 32 (8×4) | 16 (4×4) |
The DDR specifications have generally evolved along four axes since their inception. Capacity is the obvious one, but memory bandwidth also plays a key role in the performance scaling of common multi-core workloads on the large core-count servers we are seeing. The other two are power and latency, both key metrics for performance. One of the major changes in this generation is how the system sees the memory: rather than presenting a single 64-bit data channel per module, DDR5 presents two 32-bit data channels per module. The burst length has doubled, meaning that each 32-bit channel still delivers 64 bytes per operation, but can do so in a more interleaved fashion. That means the standard 'two 64-bit channel' DDR4 system morphs into a 'quad 32-bit channel' DDR5 arrangement; each memory stick still provides 64 bits in total, but in a more controllable way. A key element in increasing peak bandwidth is doubling the data rate, along with a finer-grained bank refresh feature that allows asynchronous operations on the memory while it is in use, reducing latency.
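The channel split described above can be sanity-checked with a quick sketch (function and parameter names here are my own, values taken from the JEDEC figures quoted in this article): halving the channel width while doubling the burst length leaves the bytes delivered per burst unchanged, so each DDR5 sub-channel still moves a full 64-byte cache line.

```python
# Back-of-the-envelope check: one DDR4 64-bit channel at BL8 and one
# DDR5 32-bit sub-channel at BL16 both deliver 64 bytes per burst.

def bytes_per_burst(channel_width_bits: int, burst_length: int) -> int:
    """Bytes transferred by a single burst on one data channel."""
    return channel_width_bits // 8 * burst_length

ddr4 = bytes_per_burst(channel_width_bits=64, burst_length=8)   # DDR4: BL8
ddr5 = bytes_per_burst(channel_width_bits=32, burst_length=16)  # DDR5: BL16

print(ddr4, ddr5)  # both 64 bytes -> one cache line per burst
```

This is why the change is transparent to software: the granularity of a memory operation stays at a cache line, while the two independent sub-channels give the controller more scheduling flexibility.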
Another major change is in voltage regulation. Traditionally, voltage regulation for memory modules was handled by the motherboard, but with DDR5 it moves onto the module itself. DDR4 already had per-chip Vdroop control, but in DDR5 power control and management are tighter and better overall. This also puts power management in the hands of the module vendor rather than the motherboard manufacturer, allowing the module maker to size the regulation for faster memory.
SK Hynix announced on October 6th that it is ready to start shipping DDR5 ECC memory to module manufacturers – specifically 16 gigabit dies built on its 1Ynm process that support DDR5-4800 to DDR5-5600 at 1.1 volts. With the right packaging technology (such as 3D TSV), SK Hynix says that partners can build 256 GB LRDIMMs. Additional binning of the chips for speeds beyond what JEDEC specifies will have to be done by the module manufacturers themselves. SK Hynix also appears to have its own modules, specifically 32 GB and 64 GB RDIMMs at DDR5-4800, and has previously promised to deliver memory up to DDR5-8400.
The JEDEC specification defines three different modes for DDR5-4800:
- DDR5-4800A: 34-34-34
- DDR5-4800B: 40-40-40
- DDR5-4800C: 42-42-42
It is unclear which of these SK Hynix is using. The module is labeled '4800E', but that appears to be just part of the module naming, as the JEDEC specification does not go beyond a CL value of 42 for DDR5-4800.
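The three JEDEC bins can be put into perspective by converting their CAS latencies into absolute time. A small sketch (my own helper, assuming the usual DDR convention that the memory clock runs at half the transfer rate):

```python
# Convert a JEDEC CAS latency (in clock cycles) to nanoseconds.
# DDR transfers twice per clock, so clock period (ns) = 2000 / data rate (MT/s).

def cas_latency_ns(cl_cycles: int, data_rate_mts: int) -> float:
    return cl_cycles * 2000 / data_rate_mts

for name, cl in [("DDR5-4800A", 34), ("DDR5-4800B", 40), ("DDR5-4800C", 42)]:
    print(f"{name}: CL{cl} = {cas_latency_ns(cl, 4800):.2f} ns")
```

For example, the B bin works out to roughly 16.7 ns, in the same ballpark as typical JEDEC DDR4-3200 bins, which is why absolute first-word latency is not expected to change dramatically between generations.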
Each DDR5-4800 module offers a theoretical peak of 38.4 GB/s of bandwidth, though some memory manufacturers quote effective results in the 32 GB/s range. That is still considerably higher than the 25.6 GB/s per channel of DDR4-3200. Other memory manufacturers have already announced that they have been sampling DDR5 with customers since the beginning of the year.
Intel has committed to enabling DDR5 on its Sapphire Rapids Xeon platforms, due for initial launch in late 2021/2022. AMD was not mentioned in the announcement, and neither were any Arm partners. Traditionally we expect a cost crossover between old and new technology when they reach equal market share; however, the additional voltage regulation circuitry that DDR5 requires is likely to drive up module costs. Beefier overclocked modules, with more robust voltage regulation, will cost more, while modules using standard JEDEC voltage regulation will cost somewhat less. On the other hand, since voltage regulation moves off the motherboard, DDR5 motherboards are likely to be cheaper. SK Hynix expects DDR5 to make up 10% of the global market in 2022, increasing to 43% in 2024.