Sunday 17 July 2016

Doom: The Secret of 60 FPS

Digital Foundry did an in-depth technical analysis of one of the best games of this year (maybe the decade) – DOOM (2016). During their research they found that id Software had once again worked a miracle: a game with incredible visuals running at a smooth 60 FPS. The journalists talked with id Software's programming team to find out how they actually did it. You can find the full interview at Eurogamer; here are just a few fragments from the article.


The answers are provided by Tiago Sousa – Lead Rendering Programmer at id Software. Tiago is a well-known rendering magician, having spent almost all of his professional life at Crytek working on CryEngine. He also worked at True Dimensions Entertainment – his own company – back in 1999, where he developed his own game engine.

About idTech 6


From the start, one of our goals for the idTech 6 renderer was to have a performant and as unified a design as possible, to allow lighting, shadowing and details to work seamlessly across different surface types; while keeping in mind scalability and things like consoles, MSAA/good image quality and MGPU [multi-GPU] scalability.


The current renderer is a hybrid forward and deferred renderer. With such a design we try to get the best from both worlds: the simplicity of a forward renderer and the flexibility of deferred to be able to approximate certain techniques efficiently. Another goal from the start was to improve iteration times for the art team, and things like disk space consumption. We wanted to move away from the stamping approach of idTech 5 – essentially how detail was applied to textures. In the past, it relied on pre-baking texture results into the mega-texture and so on – on this iteration we've translated this process into a real-time GPU approach, with no draw calls added.
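To make the contrast with stamping concrete, here is a minimal CPU-side sketch (not id's actual code) of what runtime detail compositing means: a detail layer is blended over the base texel at shade time, the way a pixel shader would do it per pixel, instead of the result being baked into the mega-texture offline. The blend mode and mask weight are illustrative assumptions.

```cpp
// CPU reference of per-texel detail layering: instead of pre-baking ("stamping")
// detail into a megatexture offline, composite layers at shade time.
// Hypothetical layer/blend setup for illustration only.
#include <algorithm>
#include <cstdio>

struct Texel { float r, g, b; };

Texel lerp(Texel a, Texel b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// One detail layer blended over the base by a mask weight, as a shader would
// do per pixel; no intermediate texture is ever written to disk.
Texel compositeDetail(Texel base, Texel detail, float mask) {
    return lerp(base, detail, std::clamp(mask, 0.0f, 1.0f));
}

int main() {
    Texel base{0.5f, 0.4f, 0.3f}, rust{0.3f, 0.15f, 0.05f};
    Texel out = compositeDetail(base, rust, 0.7f);
    std::printf("%.2f %.2f %.2f\n", out.r, out.g, out.b);
}
```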


As for parameterising all the input data for feeding the GPU, “Clustered Deferred and Forward Shading” from Ola Olsson et al and its derivative “Practical Clustered Shading” from Emil Persson caught my eye early on during the research phase due to its relative simplicity and elegance, so we expanded from that research. All volume data required for shading the world is essentially fed via a camera-frustum-shaped voxel structure, where all such volumes are registered. It allows for a fairly generous number of lights, image-based light volumes, decals, and so on.
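For a rough idea of what registering volumes into a frustum-shaped voxel structure looks like, here is a small CPU sketch in the spirit of clustered shading. The grid dimensions, the exponential depth slicing and the conservative per-slice registration are assumptions for illustration; a real implementation builds flat per-cluster lists on the GPU and intersects each volume's bounds against the cluster AABBs.

```cpp
// Minimal CPU sketch of clustered light registration, in the spirit of
// Olsson et al. / Persson. Grid dims and the depth slicing are assumptions.
#include <cmath>
#include <cstdio>
#include <vector>

constexpr int GX = 16, GY = 8, GZ = 24;     // cluster grid over the view frustum
constexpr float ZNEAR = 0.1f, ZFAR = 1000.0f;

struct Light { float x, y, z, radius; };    // view-space position + radius

// Exponential depth slicing: cluster z index from view-space depth.
int zSlice(float viewZ) {
    if (viewZ < ZNEAR) viewZ = ZNEAR;
    float t = std::log(viewZ / ZNEAR) / std::log(ZFAR / ZNEAR);
    int s = (int)(t * GZ);
    return s < 0 ? 0 : (s >= GZ ? GZ - 1 : s);
}

int main() {
    // One light list per cluster; on the GPU this would be flat arrays
    // (offset + count per cluster) built by a compute pass.
    std::vector<std::vector<int>> clusterLights(GX * GY * GZ);
    std::vector<Light> lights = { {0.f, 0.f, 5.f, 2.f}, {3.f, 1.f, 50.f, 10.f} };

    for (int i = 0; i < (int)lights.size(); ++i) {
        // Conservative: register the light in every z slice its sphere spans.
        int z0 = zSlice(lights[i].z - lights[i].radius);
        int z1 = zSlice(lights[i].z + lights[i].radius);
        // (x/y extents omitted; a real build tests sphere vs. cluster AABB)
        for (int z = z0; z <= z1; ++z)
            for (int y = 0; y < GY; ++y)
                for (int x = 0; x < GX; ++x)
                    clusterLights[(z * GY + y) * GX + x].push_back(i);
    }
    int c = (zSlice(5.f) * GY) * GX;  // cluster at light 0's depth slice
    std::printf("lights registered there: %zu\n", clusterLights[c].size());
}
```

At shade time, the pixel shader computes its cluster index from screen position and depth, then iterates only that cluster's list – which is what keeps a generous number of lights and decals affordable.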

Asset Creation With New Rendering Setup

One of our big goals was to transition idTech 6 to a physically plausible rendering model. This started with moving the entire team from LDR/linear-agnostic rendering to high dynamic range and linear-correct rendering; after this step we introduced the team to physically-based shading.


This was a fairly big adjustment, particularly for the art team, as they had to get used to things like tone-mapping, image exposure, linear correctness, physically plausible texture parameterisation, asset creation in a consistent manner, and so on. Even for the engineering team this was a big transition; getting everyone up and running and understanding all of the relevant nuances – e.g. transitioning all inputs to linear-correct space, HDR lightmaps, no magic multipliers and such – all required for consistent and high-quality rendering.
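As a toy illustration of the linear-correct HDR flow the team had to adopt, the sketch below decodes sRGB inputs to linear, shades in HDR, applies exposure and a tone-map, and encodes back for display. The Reinhard operator and the exposure value are placeholders; the interview does not specify DOOM's actual tonemapper.

```cpp
// Tiny sketch of a linear-correct HDR pipeline: decode sRGB inputs to
// linear, light in HDR, then expose/tone-map/encode for display.
#include <cmath>
#include <cstdio>

float srgbToLinear(float c) {
    return c <= 0.04045f ? c / 12.92f
                         : std::pow((c + 0.055f) / 1.055f, 2.4f);
}
float linearToSrgb(float c) {
    return c <= 0.0031308f ? c * 12.92f
                           : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

int main() {
    float albedo = srgbToLinear(0.5f);    // texture inputs decoded to linear
    float lightIntensity = 4.0f;          // HDR: values may exceed 1.0
    float hdr = albedo * lightIntensity;  // shading happens in linear space

    float exposure = 0.5f;                             // illustrative value
    float exposed = hdr * exposure;
    float toneMapped = exposed / (1.0f + exposed);     // Reinhard (placeholder)
    std::printf("display value: %.3f\n", linearToSrgb(toneMapped));
}
```

The "no magic multipliers" point falls out of this structure: once every input is in a physically meaningful linear space, exposure and tone-mapping are the only places where the image is globally rescaled.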

Physically-Based Shading

Our lighting approach is a mix of real-time approximations and pre-computed components. For indirect lighting, idTech 6 uses pre-baked indirect lighting for static geometry, mixed with an irradiance-volume approximation for dynamics. For the indirect specular bounce we used an image-based lighting approach.
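A minimal sketch of the irradiance-volume idea for dynamics: lighting baked into a 3D grid of probes, trilinearly interpolated at a moving object's position at runtime. Storing a single scalar per probe is a simplification – real implementations store a directional basis such as spherical harmonics or an ambient cube – and the grid mapping here is assumed.

```cpp
// Irradiance volume sketch: baked probes on a 3D grid, trilinearly
// interpolated for dynamic objects. One scalar per probe for brevity.
#include <cstdio>

constexpr int N = 4;          // probes per axis (assumed grid size)
float irradiance[N][N][N];    // baked offline from the static lighting

// Positions are assumed to lie inside the grid, in grid-space units.
float sampleVolume(float px, float py, float pz) {
    int x = (int)px, y = (int)py, z = (int)pz;
    float fx = px - x, fy = py - y, fz = pz - z;
    auto at = [](int i, int j, int k) {
        i = i < N - 1 ? i : N - 1;   // clamp the +1 neighbours at the border
        j = j < N - 1 ? j : N - 1;
        k = k < N - 1 ? k : N - 1;
        return irradiance[i][j][k];
    };
    auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
    float c00 = lerp(at(x, y,     z    ), at(x + 1, y,     z    ), fx);
    float c10 = lerp(at(x, y + 1, z    ), at(x + 1, y + 1, z    ), fx);
    float c01 = lerp(at(x, y,     z + 1), at(x + 1, y,     z + 1), fx);
    float c11 = lerp(at(x, y + 1, z + 1), at(x + 1, y + 1, z + 1), fx);
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
}

int main() {
    for (int i = 0; i < N; ++i)          // fake "bake" for the demo
        for (int j = 0; j < N; ++j)
            for (int k = 0; k < N; ++k)
                irradiance[i][j][k] = 0.1f * (i + j + k);
    std::printf("irradiance at (1.5, 2.0, 0.5): %.3f\n",
                sampleVolume(1.5f, 2.0f, 0.5f));
}
```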


The real-time components use a state-of-the-art analytical lighting model for the direct lighting together with shading anti-aliasing, mixed with real-time directional occlusion and a reflections approximation. Skin subsurface scattering is actually approximated via texture lookups and baked translucency data. It's fairly efficient – particularly compared to the usual costly screen-space approximations.
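The texture-lookup approach to skin can be pictured with the sketch below, in the spirit of pre-integrated skin shading (Penner-style) – whether this is exactly id's formulation is not stated in the interview. A 2D lookup table indexed by N·L and surface curvature replaces a screen-space blur pass, which is what makes it cheap; the LUT contents here are a fake placeholder bake.

```cpp
// Lookup-based skin shading sketch: diffuse falloff comes from a 2D LUT
// indexed by N.L and curvature instead of blurring in screen space.
// The LUT "bake" below is a placeholder, not a real scattering profile.
#include <algorithm>
#include <cstdio>

constexpr int LUT = 32;
float skinLut[LUT][LUT];   // [NdotL][curvature] -> scattered diffuse term

void bakeFakeLut() {
    // Placeholder: the falloff simply widens with curvature.
    for (int i = 0; i < LUT; ++i)
        for (int j = 0; j < LUT; ++j) {
            float ndl   = i / float(LUT - 1) * 2.0f - 1.0f;  // [-1, 1]
            float curve = j / float(LUT - 1);                // [0, 1]
            skinLut[i][j] = std::clamp(ndl + 0.3f * curve, 0.0f, 1.0f);
        }
}

float shadeSkin(float ndotl, float curvature) {
    int i = (int)(std::clamp(ndotl * 0.5f + 0.5f, 0.0f, 1.0f) * (LUT - 1));
    int j = (int)(std::clamp(curvature, 0.0f, 1.0f) * (LUT - 1));
    return skinLut[i][j];   // one texture fetch replaces a costly blur pass
}

int main() {
    bakeFakeLut();
    std::printf("skin diffuse at N.L=-0.1, curvature=0.8: %.3f\n",
                shadeSkin(-0.1f, 0.8f));
}
```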

Our biggest achievement here is how well it performs and its consistency across different surface types, though we're always looking for ways to improve even further.

Optimisation


I like to keep things simple. Usually I tackle things from a minimalistic – both data and code – and algorithmic perspective, while taking into account target hardware and a grain of futurology. E.g. does it make sense to process this entire amount of data, or can we just process a subset? Is this the minimal data set? If the solution is a bit on the rocket-science/insane side, what can we do to make it as simple as possible? How would it run on the slower platforms, and how well would it scale? And so on. And of course the usual profile-guided micro-optimisations.

There's a bunch of other info in the original article, covering things like resolution scaling, motion blur, rendering modes, the usage of Vulkan/DX12 (you should definitely start using them) and optimisation on consoles. It's a technical read, but it has a lot of insights on the creation of high-quality rendering in games. Go to Eurogamer now and check it out!


Source: eurogamer.net
