
Micron sees strong yield, high speeds on GDDR5X, but will anyone use it?

Micron's GDDR5X is reportedly yielding well and hitting excellent frequencies -- but will that be enough to win it adoption from AMD, Nvidia, or third parties?
By Joel Hruska

Last fall, Micron debuted GDDR5X as an evolutionary approach to next-generation GPU memory. Today, Micron is announcing that its GDDR5X pilot program is already up and running, with mass production expected by the summer. Yields are reportedly excellent, with bandwidth already hitting as much as 13Gbps on early hardware. Mainstream GDDR5, in contrast, topped out around 7Gbps. Unofficially, it was possible to hit around 8Gbps, but neither AMD nor Nvidia ever shipped a card with memory clocked that fast.
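Those per-pin data rates translate into aggregate card bandwidth once you account for bus width. As a rough sketch (the 256-bit bus below is a hypothetical mid-to-high-end configuration, not a figure from Micron; only the per-pin rates come from the article):

```python
# Aggregate memory bandwidth from per-pin data rate and bus width.
# 256-bit is an assumed, illustrative bus width; the 7 and 13 Gbps
# per-pin rates are the GDDR5 and early-GDDR5X figures cited above.

def aggregate_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Total bandwidth in GB/s: per-pin Gbps * number of pins / 8 bits per byte."""
    return per_pin_gbps * bus_width_bits / 8

gddr5 = aggregate_bandwidth_gbs(7.0, 256)    # 224.0 GB/s
gddr5x = aggregate_bandwidth_gbs(13.0, 256)  # 416.0 GB/s
print(f"GDDR5  @ 7 Gbps, 256-bit bus:  {gddr5:.0f} GB/s")
print(f"GDDR5X @ 13 Gbps, 256-bit bus: {gddr5x:.0f} GB/s")
```

On the same bus width, the jump from 7Gbps to 13Gbps per pin nearly doubles total bandwidth, which is why the per-pin figure matters so much.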

We spoke with Kris Kido, director of Micron's global memory graphics business, about the GDDR5X ramp and why Micron is pushing forward with the new standard. According to Kido, Micron's goal is to offer choice and flexibility in the market, with GDDR5X and HBM ultimately complementing each other.

Note: Micron has yet to announce that it's producing HBM, unlike both Samsung and SK Hynix. The company announced a major new initiative with Intel last year, 3D XPoint, and a third generation of its Hybrid Memory Cube (HMC) technology. Like HBM, HMC offers substantially improved memory bandwidth, but the two approaches are not compatible.

Kido believes that GDDR5X's strongest market would be at the top of the midrange / bottom of the high-end space. That's roughly analogous to where AMD's R9 390X or Nvidia's GTX 970 currently sit (between $300 and $380), though those are our price points, not Micron's. One of the advantages of GDDR5X is that it mandates a lower operating voltage of 1.35V (GDDR5 supported both 1.35V and 1.5V, but high-speed GDDR5 always used 1.5V). GDDR5X's theoretical efficiency is higher as well; Micron told us that GDDR5X uses roughly 70% as much energy as GDDR5 per bit of information in typical use-cases.

There are two caveats to this, however. First, the ratings we have are only for memory power, not the total power consumed by the memory controller and the entire VRAM subsystem. If GDDR5X's higher bandwidth requires a power-hungry memory controller, the RAM's improved efficiency rating could evaporate. Second, there's the fact that RAM power consumption is typically given in picojoules per bit. GDDR5X improves efficiency significantly compared to GDDR5, but it also moves far more bits per clock cycle.
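That second caveat can be made concrete with a little arithmetic. Total DRAM power is energy-per-bit times bits-per-second, so a part that is 30% more efficient per bit but moves almost twice as many bits can still draw more power overall. A minimal sketch, using only the figures above (per-bit energy normalized to GDDR5, since Micron quoted a ratio rather than absolute picojoule values):

```python
# Relative DRAM power: power = (energy per bit) * (bits per second).
# The 70% per-bit figure and the 7 / 13 Gbps per-pin rates come from
# the article; absolute pJ/bit is left symbolic (GDDR5 normalized to 1.0).

GDDR5_ENERGY = 1.0    # normalized energy per bit
GDDR5X_ENERGY = 0.70  # ~70% of GDDR5 per bit, per Micron
GDDR5_RATE = 7.0      # Gbps per pin
GDDR5X_RATE = 13.0    # Gbps per pin, early GDDR5X silicon

relative_power = (GDDR5X_ENERGY * GDDR5X_RATE) / (GDDR5_ENERGY * GDDR5_RATE)
print(f"GDDR5X relative DRAM power at full data rate: {relative_power:.2f}x")
# -> 1.30x: the per-bit savings are outrun by the extra bits moved.
```

In other words, GDDR5X run flat-out could plausibly consume more total memory power than GDDR5, even though each bit it moves is cheaper.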

[Slide: Nvidia comparison of memory power consumption and bandwidth]

This slide, from an Nvidia presentation, shows maximum power consumption on the left, bandwidth as a line across the graph, and maximum theoretical bandwidth on the right. Higher is better for bandwidth; lower is better for power consumption. GDDR5X may deliver significant power efficiencies, but HBM and HBM2 slash I/O power dramatically.

AMD and Nvidia both balance bus width against clock rate to optimize the power consumption of the memory subsystem, but neither company is saying whether it intends to use GDDR5X in its 14/16nm products. We know AMD will launch some mobile GPUs with GDDR5 this cycle, and we expect Nvidia will as well -- the two manufacturers may use different strategies for specific parts, but there's going to be a point at which HBM/HBM2 doesn't make sense and GDDR5 is the better option. This is particularly true on lower-end parts with smaller memory buffers and less need for huge bandwidth.

HBM and GDDR5X are sufficiently different that there's probably no practical way for either company to support both standards on the same silicon. AMD and Nvidia typically design a high-end SKU that they sell into 1-2 lower market segments by disabling parts of the chip. In order to use GDDR5X in the target market, the GPU vendors would need to design separate parts with a different memory controller.

We're not going to declare the standard dead; Micron's early yields and high bandwidths are impressive. Right now, however, the industry appears set to adopt HBM2 as the premier interface, with GDDR5 filling in at the midrange and below.
