Nvidia eyes SK Hynix’s next-gen HBM3E memory for future AI & HPC GPUs

TechSpot is celebrating its 25th anniversary. TechSpot means tech analysis and advice you can trust.

What just happened? Nvidia has reportedly requested a sample of next-generation High Bandwidth Memory (HBM3E DRAM) from South Korean chipmaker SK Hynix, aiming to evaluate its impact on GPU performance. If everything proceeds as expected, the company plans to use the new technology in its future GPUs for AI and HPC.

The report from Taiwan's DigiTimes comes almost a month after SK Hynix announced that its 5th-generation 10nm process, 1bnm, has completed validation and is ready to power next-gen DDR5 and HBM3E products. The company also revealed that Intel has already certified DDR5 products built on the 1bnm node for its Xeon Scalable platform. Back in January, it received Intel certification for its 4th-gen 10nm (1anm) DDR5 server DRAM.

SK Hynix also announced that in test runs, DDR5 products built on the 1bnm node ran at 4.8 Gbps; the maximum DDR5 speed stipulated in the JEDEC standards is 8.8 Gbps. According to a report by Tom's Hardware, HBM3E memory will significantly raise the data transfer rate from the current 6.40 GT/s to 8.0 GT/s, thereby increasing the per-stack bandwidth from 819.2 GB/s to 1 TB/s. However, it is not yet clear whether the new technology will be compatible with existing HBM3 controllers and interfaces.
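For context, those per-stack figures follow directly from the transfer rate and the stack's bus width. The short Python sketch below reproduces the arithmetic; it assumes HBM3's standard 1024-bit interface per stack, which is a JEDEC figure rather than something stated in the report.

```python
# Back-of-the-envelope per-stack bandwidth: transfer rate (GT/s) x bus width (bits) / 8 bits per byte.
# Assumes the standard 1024-bit HBM3 stack interface; the GT/s values come from the report above.
BUS_WIDTH_BITS = 1024

def per_stack_bandwidth_gb_s(transfer_rate_gt_s: float) -> float:
    """Return per-stack bandwidth in GB/s for a given transfer rate in GT/s."""
    return transfer_rate_gt_s * BUS_WIDTH_BITS / 8

print(per_stack_bandwidth_gb_s(6.4))  # HBM3:  819.2 GB/s
print(per_stack_bandwidth_gb_s(8.0))  # HBM3E: 1024.0 GB/s, i.e. roughly 1 TB/s
```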

SK Hynix is preparing HBM3E product samples for the second half of this year, with the goal of reaching mass production by the first half of 2024. The company already has a partnership with Nvidia, supplying its HBM3 memory for the latter's H100 Hopper compute GPU. With a 50 percent share of the global market, SK Hynix is currently the HBM market leader ahead of Samsung, which holds a 40 percent share. If Nvidia approves its latest technology, it will help the company extend its lead over Samsung in the HBM sector.

It remains unclear which of Nvidia's compute GPUs will make use of SK Hynix's HBM3E memory if the deal goes through. However, speculation suggests it could be integrated into team green's next-gen products, expected to launch in 2024. There is also speculation that Nvidia may drop HBM3 entirely in favor of HBM3E for its Hopper-Next GPUs, but this has yet to be officially announced by either party. Meanwhile, Samsung is also said to be working on its own Snowbolt HBM3P memory offering up to 5 TB/s of bandwidth per stack, so it will be interesting to see how that compares to SK Hynix's HBM3E in real-world performance.
