Nvidia has reportedly cancelled its first-generation SOCAMM (System-on-Chip Attached Memory Module) rollout and shifted development focus to a successor known as SOCAMM2, according to Korean outlet ETNews, citing unnamed industry sources. Those sources claim that SOCAMM1 was halted after technical setbacks and that SOCAMM2 sample testing is now underway with all three major memory vendors.
The abandonment of SOCAMM1, if accurate, resets what was expected to be a fast-tracked rollout of modular LPDDR-based memory in Nvidia’s data center stack. SOCAMM has been positioned as a new class of high-bandwidth, low-power memory for AI servers, delivering similar benefits to HBM but at a lower cost.
Nvidia itself has already listed SOCAMM in product documentation: its GB300 NVL72 spec sheet confirms support for up to 18TB of LPDDR5X-based SOCAMM with up to 14.3 TB/s of bandwidth.
Back in March, Micron announced that it was the “first and only memory company” shipping SOCAMM products for AI servers in the data center. Samsung and SK hynix, by contrast, said in conference calls that they were preparing for mass production in Q3 2025. If SOCAMM1 has truly been shelved as the unnamed sources claim, the timing could give Samsung and SK hynix a second shot at closing the gap with Micron.
ETNews suggests that SOCAMM2 will boost data rates from 8,533 MT/s to 9,600 MT/s and may support LPDDR6, though no vendor has confirmed this. Industry reports from earlier this summer forecast 600,000 to 800,000 SOCAMM units shipping this year, suggesting that Nvidia was serious about deploying SOCAMM1 at scale. Whether those plans have been paused or are simply evolving remains unclear.