The Game Developers Conference is still in full swing in San Francisco, and it’s already given us a peek at what the future of games might look like. If you’ve ever delved into computer-generated graphics processing for films, you might have come across the term Raytracing. It’s the technique that allows films to look the way they do – a computationally heavy process that calculates lighting and shadows the way the human eye perceives them. It’s why CGI-heavy films need farms of processing cards to render even small scenes, and why it’s been the holy grail of games for years now.
At GDC, Microsoft announced that it would be adding an API to DirectX 12 that lets developers start building GPU-accelerated Raytracing into games. That doesn’t mean it’s right around the corner, but it already has some companies excited. Shortly afterwards, Alan Wake developer Remedy showcased a demo of what it was able to produce using the technology, but nothing encapsulates it better than this recent Star Wars demo from Epic Games, running in Unreal Engine. It’s a short scene rendered at 24 FPS, but it’s all computer generated in real time. It’s staggering.
So how does it work? Currently, games depend on a method called Rasterization. It’s been that way for years, and you can think of it like a painting. First the artist fills the canvas with the background, then layers it with details like houses and figures that overlap it. The background that is overlapped is no longer visible, but it still exists, affecting the colour of the paint above it. Games fudge this by telling the GPU not to worry about what can’t be seen. If it’s outside the view of the player, or obscured by something closer, don’t render it. That saves processing and allows games to look as good as they can, but it severely limits effects like lighting.
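That “keep only what’s visible” step can be sketched in a few lines. This is a toy version of the depth test a GPU runs for every pixel – all names here are illustrative, not a real graphics API:

```python
# A toy z-buffer, the digital version of the painter's canvas.
# Each "fragment" is a candidate colour for a pixel at some depth;
# only the nearest one survives, and everything hidden behind it
# is simply thrown away.

def rasterize(fragments, width, height):
    """fragments: list of (x, y, depth, colour) tuples."""
    INF = float("inf")
    depth_buffer = [[INF] * width for _ in range(height)]
    frame_buffer = [[None] * width for _ in range(height)]
    for x, y, depth, colour in fragments:
        # Depth test: anything farther than what's already drawn
        # at this pixel is discarded -- it "doesn't exist".
        if depth < depth_buffer[y][x]:
            depth_buffer[y][x] = depth
            frame_buffer[y][x] = colour
    return frame_buffer

# A red balloon at depth 5 sits behind a grey wall at depth 2:
frame = rasterize([(0, 0, 5.0, "red"), (0, 0, 2.0, "grey")], 1, 1)
print(frame[0][0])  # → grey: the balloon's colour is gone entirely
```

Note what’s lost: once the balloon’s fragment fails the depth test, nothing downstream can ask what colour it was – which is exactly the problem with reflections described next.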
Although there are ways around it, games still struggle to properly convey lighting coming from outside the player’s view. Say there’s a red balloon just outside the culling window. Light that bounces off that balloon should still affect a reflection that the player can see. But because the object “technically” doesn’t exist, that’s not possible. Now imagine instead a system where the player sends out “beams” of light that reflect off game objects and return back to them – the opposite direction to how real vision works, where light travels from sources to the eye. Raytracing is exactly that.
Raytracing creates its own “beams” of sorts that bounce off objects within a scene to calculate correct reflections and refractions on a per-pixel basis. This “light”, so to speak, then returns to the player camera – your eyes – and is used to display the scene correctly, in the same way your brain receives light bounced off surfaces and objects and composes it into your vision.
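The per-pixel idea can be shown with a minimal ray cast: one ray per pixel, fired from the camera into the scene, asking what it hits. A real engine then bounces those rays around to gather reflections and refractions; this sketch (with illustrative names, not any engine’s API) stops at the first hit:

```python
import math

def hit_sphere(origin, direction, centre, radius):
    """Distance along the ray to the nearest hit, or None if it misses."""
    oc = [origin[i] - centre[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * a * c  # discriminant of the ray/sphere quadratic
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def render(width, height):
    """One ray per pixel, from a camera at the origin looking down -z."""
    sphere_centre, sphere_radius = (0.0, 0.0, -3.0), 1.0
    image = []
    for y in range(height):
        row = ""
        for x in range(width):
            # Map the pixel to a point on a virtual screen one unit away.
            u = (x + 0.5) / width * 2 - 1
            v = 1 - (y + 0.5) / height * 2
            hit = hit_sphere((0.0, 0.0, 0.0), (u, v, -1.0),
                             sphere_centre, sphere_radius)
            row += "#" if hit is not None else "."
        image.append(row)
    return image

for line in render(16, 8):
    print(line)  # a crude ASCII view of the sphere from the camera
```

The expense the article describes comes from what this sketch leaves out: at each hit point, real Raytracing spawns more rays towards lights and reflective surfaces, multiplying the work per pixel.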
This allows for much more detailed reflections and lighting, but it’s extremely expensive. The demo above needed four Nvidia DGX-1 GPUs, which puts the hardware at over $150 000. But with Microsoft adding the API to DirectX 12 and cutting a lot of the overhead, you could see this cropping up in games eventually. Eventually.
Last Updated: March 22, 2018