AI workloads generate and consume data at unprecedented scale. Training runs ingest petabytes; inference pipelines need fast retrieval; and the storage and memory stack in between determines how quickly data moves from disk to DRAM to GPU. This theme tracks the companies that sell memory chips, storage systems, and the interface IP that ties them together.
Why it matters
- HBM is the AI memory bottleneck. High-bandwidth memory (HBM) is the scarce resource in every AI accelerator today, and suppliers with HBM capacity have exceptional pricing power for the first time in a decade.
- Storage demand correlates with compute demand. More training runs, more checkpoints, more inference logs — every additional GPU deployed creates proportional storage demand downstream.
- Memory interface complexity is rising. DDR5, CXL, and HBM all require more sophisticated controller and PHY IP, creating toll-booth economics for interface suppliers like Rambus.
Roster
- MU — Micron Technology — DRAM and NAND manufacturer with growing HBM production. One of three companies in the world that can make HBM at scale.
- SNDK — Sandisk — NAND flash memory and SSD storage, spun off from Western Digital, including enterprise drives for data center workloads.
- PSTG — Pure Storage — all-flash storage arrays and software-defined storage for enterprise and AI data pipelines.
- NTAP — NetApp — enterprise hybrid-cloud storage, data management, and AI-ready infrastructure solutions.
- RMBS — Rambus — DDR5 memory interface chips, high-speed silicon IP, and security technology. Toll-booth on memory bandwidth.
What to watch
- HBM pricing and allocation — MU's HBM revenue growth and margin trajectory are the leading indicators for memory profitability.
- DRAM cycle inflection — whether conventional DRAM pricing recovery extends beyond HBM to the broader P&L for MU and peers.
- Enterprise SSD attach rates — each AI server deploys far more SSD capacity than a traditional server.
- Pure Storage subscription mix — recurring revenue share signals durable growth vs. hardware lumpiness.
- DDR5/CXL adoption — RMBS interface chip revenue growth validates the upgrade-cycle thesis.