Who Breaks First?

 

An Elasticity Lens for 2026–2028 

 

The semiconductor memory market is once again in an upcycle, but it doesn’t look like the familiar boom-and-bust pattern veterans expect. Prices for DRAM and NAND have surged on tight wafers, capital discipline, and the gravitational pull of AI infrastructure.

 

Unlike prior cycles, price escalation in DRAM and NAND no longer spreads uniformly across end markets. What we are witnessing is a structurally asymmetric supercycle in which memory’s share of the bill of materials (BOM) and an application’s reliance on capacity and bandwidth now determine who absorbs price shocks and who blinks first. In other words, elasticity has become an application-level variable, not a commodity-level constant.

 

By early 2026, DRAM pricing had climbed approximately 80 percent quarter-on-quarter, while NAND and storage pricing rose by roughly 50 percent. These moves were fueled by supply constraints, cautious capex from suppliers, and sustained demand from AI accelerators and data-centric workloads. But the “rising tide” has not lifted all boats equally. The divergence across segments exposes the limits of traditional commodity analysis and makes a strong case for a BOM-centric elasticity framework to forecast behavior through 2028.

 

 

From Commodity Lens to BOM-Centric Elasticity

 

The core of the framework is straightforward: quantify the memory share of system BOM, gauge performance sensitivity to memory capacity or bandwidth, and assess the room to modify specs without breaking the product’s value proposition or qualification envelope.

 

These three axes sort applications into low-, medium-, and high-elasticity tiers—each with distinct pricing tolerance, redesign timelines, and cancellation risks.  
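The three-axis sort described above can be expressed as a toy scoring function. Everything numeric here is an illustrative assumption for demonstration, not a figure from the analysis: the thresholds, the scoring formula, and the per-segment axis values are invented to show how the tiering logic could work in principle.

```python
# Illustrative sketch of the three-axis elasticity tiering.
# All thresholds, weights, and segment values are assumptions,
# not data from the article.

from dataclasses import dataclass


@dataclass
class Segment:
    name: str
    memory_bom_share: float   # memory's share of system BOM (0..1)
    perf_sensitivity: float   # performance sensitivity to capacity/bandwidth (0..1)
    redesign_latitude: float  # room to cut specs without breaking the product (0..1)


def elasticity_tier(seg: Segment) -> str:
    """Classify a segment as low-, medium-, or high-elasticity.

    High performance coupling plus little spec latitude implies low
    elasticity (the buyer absorbs price shocks); the reverse implies
    high elasticity (the buyer de-contents, delays, or cancels).
    """
    # More redesign latitude and weaker performance coupling -> more elastic.
    score = seg.redesign_latitude * (1.0 - seg.perf_sensitivity)
    if score < 0.15:
        return "low"
    if score < 0.45:
        return "medium"
    return "high"


ai_server = Segment("AI server", 0.45, 0.95, 0.10)
automotive = Segment("Automotive controller", 0.20, 0.50, 0.50)
set_top = Segment("Set-top box", 0.25, 0.15, 0.80)

for seg in (ai_server, automotive, set_top):
    print(f"{seg.name} -> {elasticity_tier(seg)}-elasticity")
```

The product form of the score captures the intuition that latitude only matters if cutting memory does not destroy performance; a segment with high latitude but extreme bandwidth coupling still scores as inelastic.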

 

Low Elasticity: AI Infrastructure and Servers

AI and enterprise servers, along with select high-end platforms like advanced medical imaging, sit at the inelastic end. Here, memory is architecturally inseparable from performance and monetization: high-bandwidth memory (HBM) stacks and large DDR5 footprints directly dictate throughput, latency, and accelerator utilization.

 

Even when memory exceeds 40–50 percent of the BOM, cutting capacity undermines platform economics more than it saves cost. Typical 2026 AI nodes deploy between 192 GB and 288 GB of HBM per system, with additional DDR5 and 20–30 TB of NVMe, pushing memory content into five-digit dollars per system. Yet elasticity remains low because any reduction directly degrades accelerator utilization and total cost of ownership. Through 2026–2028, availability rather than price is expected to remain the dominant constraint.

 

Medium Elasticity: Industrial, Automotive, and Telecom

Industrial automation, automotive domain controllers, and telecom RAN compute live in the middle. Memory is important but not singularly defining. These markets are governed by long qualification cycles, safety cases, and reliability regimes.

 

These constraints limit rapid redesign but leave room for measured, gradual adaptation.

 

Typical configurations range from 32 GB to 64 GB of DDR4 or DDR5 memory paired with moderate storage capacities. Under continued price pressure, OEMs pursue capacity right-sizing, staggered deployments, and selective platform delays rather than immediate cancellation.

 

 

High Elasticity: Consumer and Cost-Driven Electronics

Consumer platforms such as TVs, set-top boxes, and home gateways treat memory as a cost line. Memory accounts for a meaningful share of the BOM, yet it delivers little differentiation in return.

 

Typical configurations include 1–2 GB of DRAM and 8–32 GB of NAND or eMMC storage. Even modest memory price increases trigger immediate de-contenting, launch delays, or program cancellations. These are the segments that break first when memory inflation exceeds perceived end-user value.

 

 

What the Elasticity Lens Changes in Practice

Under a moderate additional price shock of approximately +20 percent, low-elasticity segments are expected to continue absorbing cost increases, while medium-elasticity segments slow deployments and high-elasticity segments reduce memory content. Under a more severe +40 percent scenario, even medium-elasticity platforms face material program delays, while consumer platforms experience pronounced volume contraction.
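As a back-of-the-envelope illustration of these scenarios: if only the memory portion of the BOM inflates, the system-level cost increase is simply the memory BOM share multiplied by the price shock. The per-tier BOM shares below are assumed for illustration and are not figures from the analysis.

```python
# Toy pass-through arithmetic for the +20% / +40% shock scenarios.
# The per-tier memory BOM shares are illustrative assumptions.

def bom_cost_increase(memory_share: float, price_shock: float) -> float:
    """System-level cost increase when only the memory share of the BOM inflates."""
    return memory_share * price_shock


# Assumed memory BOM shares per elasticity tier (illustration only).
tiers = {
    "low (AI server)": 0.45,
    "medium (automotive)": 0.20,
    "high (set-top box)": 0.25,
}

for shock in (0.20, 0.40):
    for tier, share in tiers.items():
        delta = bom_cost_increase(share, shock)
        print(f"{tier}: +{shock:.0%} memory shock -> +{delta:.1%} system cost")
```

The point the arithmetic makes is that a given memory shock lands very differently depending on BOM share: an assumed 45 percent share turns a +20 percent memory shock into roughly +9 percent system cost, while a 20 percent share yields only +4 percent, which the elasticity lens then maps onto absorb, slow, or de-content responses.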

 

Pricing strategy, allocation, and customer engagement now require segment-specific calibration. Suppliers benefit from prioritizing low-elasticity demand to stabilize revenue while reducing exposure to high-elasticity customers. OEMs use the same lens to determine when to pre-buy, right-size, or redesign systems.

 

The Takeaway: Elasticity Will Decide the Winners

This supercycle is not a uniform tide. It is an elasticity-segmented market where memory’s BOM share, performance coupling, and redesign latitude determine who pays, who adapts, and who pauses. BOM-centric analysis is now the most predictive compass for both design and commercial decisions, and will remain so through 2028.

 

Author: Nikolaos Florous, Global Director of Product Marketing, Memphis Electronic

 

This article was originally published at EE Times.