It feels like we’ve been talking about high-bandwidth memory (or HBM) for ages now, and there’s a good reason for that. HBM is going to change the face of desktop and laptop GPU computing – and AMD is desperately eager to be first out of the gate with its new Fiji series of cards. But just what is HBM? And, more importantly, how is it going to affect you?
The road to HBM is long and winding, but it essentially comes down to this. DDR memory has been living off tricks from the late ’90s: running the I/O bus faster than the memory core and prefetching more data per cycle to boost transfer rates. Even the new DDR4 RAM that Intel will support with Skylake is dependent on this approach – and unsurprisingly the expected performance gains are rather minimal.
DDR in general is about to hit a memory wall, where the power required to eke out minimal speed gains starts to starve the computational hardware that actually puts that bandwidth to use. This is already evident in GDDR5, the memory found on current GPUs. With texture sizes increasing, especially as we near 4K resolutions, the aim is to widen the memory bus and drive clock speeds sky-high. But since GDDR5 is, again, built on that same old DDR technology, doing so draws power away from what your GPU could otherwise spend on actual computing – making further advances mostly moot.
HBM changes this entirely, primarily because of how it’s packaged alongside the GPU. In a traditional setup, DRAM chips are placed side by side around the processor, similar to the way Intel mounts its Crystalwell eDRAM next to the CPU for Iris Pro graphics. HBM instead stacks these chips on top of one another on a silicon interposer right beside the GPU die, which achieves two things: it shrinks the overall footprint of the memory, and it cuts power consumption because signals travel far shorter distances. Lower memory power, obviously, means far more juice left for actual computing on the die itself.
What this means for you is interesting as well. Memory buses can now be thousands of bits wide, rather than hundreds. In fact, the first card AMD is prepping with HBM will sport a 1024-bit interface per memory stack, good for 128GB/sec of bandwidth each. With four stacks – a 4096-bit bus in total – and a 1Gbps effective data rate per pin, that translates to a total memory bandwidth of 512GB/sec: the kind of number that will allow far faster streaming of massive data packages in future games.
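If you want to sanity-check those figures, the arithmetic is simple: multiply the per-stack bus width by the effective per-pin data rate, divide by eight to convert bits to bytes, and scale by the number of stacks. A quick sketch, using the first-generation HBM figures quoted above:

```python
# Back-of-the-envelope bandwidth check for first-generation HBM.
BUS_WIDTH_BITS = 1024      # interface width per memory stack
EFFECTIVE_RATE_GBPS = 1    # effective data rate per pin (500MHz, double data rate)
STACKS = 4                 # stacks on the first HBM cards

# bits/sec per stack -> bytes/sec per stack
per_stack_gbs = BUS_WIDTH_BITS * EFFECTIVE_RATE_GBPS / 8
total_gbs = per_stack_gbs * STACKS

print(per_stack_gbs)  # 128.0 GB/s per stack
print(total_gbs)      # 512.0 GB/s total
```

Compare that with a typical GDDR5 card of the era: a 256-bit bus at 7Gbps per pin works out to 224GB/sec, less than half of what four HBM stacks deliver at a fraction of the clock speed.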
It also means AMD cards will be more power efficient, with the memory supply voltage dropping from GDDR5’s 1.5V to 1.3V. The architecture also allows for much smaller form factors – which means Fiji might actually take up far less space in your chassis than you’d expect. The only downside is that AMD’s very first implementation of HBM, which should debut in the next few months, will be restricted to 4GB of memory – which doesn’t really take full advantage of the technology it’s going to be paired with.
Nvidia is expected to debut its own HBM implementation with its Pascal cards next year, and I suspect AMD will follow with much larger-capacity cards at around the same time. That’s when we’re really going to see some single-card, 4K-capable computing – although this first look at HBM on the desktop is something to get genuinely excited about. It’s also something AMD really needs to nail, given the market crisis it currently finds itself in.
Last Updated: May 20, 2015