PBR Rendering Rewrite

A quieter space for design discussion of long-term projects
sturnclaw
Posts: 79
Joined: Tue Sep 18, 2018 12:20 am

PBR Rendering Rewrite

Post by sturnclaw »

After doing a significant amount of evaluation and reading, I think I'm ready to lay down some specifics on the planned rendering rewrite. I've identified the new workflow, figured out how to implement it, and I think I've got a fairly good grip on how I want to implement the higher-level C++ API for interacting with the renderer.

Just a note before I go any further: this is going to be a long project, potentially spanning multiple months to implement. I'd like to incorporate Fluffy's terrain texturing rewrite as well as nozmajner's decal workflow refactor, which will introduce some necessary slowdowns and dependency-waiting. However, I am confident that I can actually implement the planned features on my own, so there should be no significant danger of this falling by the wayside or going unfinished for two years. (cough)

We've talked for quite a while about moving Pioneer to a physically-based rendering workflow, or 'PBR'. Various resources and methods have been proposed, as well as "nice-to-have" features like a Vulkan or DX11 backend, etc. Recently, I've come across a set of comprehensive resources describing in great detail the implementation of a PBR rendering workflow, as well as these (res 1, res 2) lovely resources on the anatomy of a (comparatively) lightweight clustered-forward renderer that we can implement without significantly altering our hardware compatibility.

Armed with these resources, I have a much better understanding of how to architect and structure the new renderer, as well as a comprehensive understanding of our current needs, in part due to my work on the scenegraph/ code while optimizing startup time. I'll be outlining what we will (and won't) be implementing in the first iteration of the new renderer.

We WILL be implementing:
- A clustered-forward rendering pipeline, capable of handling a large number of lights per frame (no more over-reliance on NavLights!).
- A Metallic-Roughness PBR authoring workflow, requiring a minimum number of textures per material.
- A unified weathering system for ships and other geometry, based on nozmajner's excellent prototype work.
- A mesh-decal based detailing system for ships, with LOD transitions and normal-contribution blending.
- Point, Spot, and Directional lights, in high enough quantities to allow lighting contributions from planetary reflections and multiple stars per system.
- HDR lighting and tonemapping.
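To give a flavor of the HDR + tonemapping item, here's a minimal sketch of a Reinhard-style operator mapping HDR color into displayable range. All names here (`tonemapReinhard`, the fixed 2.2 gamma) are illustrative, not the planned renderer API; in practice this work happens in a shader.

```cpp
#include <cmath>

// Illustrative sketch only: Reinhard tonemapping with gamma correction.
// HDR input values can be arbitrarily large; output lands in [0, 1).
struct Color { float r, g, b; };

inline float reinhard(float c) { return c / (1.0f + c); }

inline Color tonemapReinhard(const Color& hdr, float exposure = 1.0f)
{
    const float kGamma = 2.2f; // assumed display gamma, hypothetical constant
    Color out;
    out.r = std::pow(reinhard(hdr.r * exposure), 1.0f / kGamma);
    out.g = std::pow(reinhard(hdr.g * exposure), 1.0f / kGamma);
    out.b = std::pow(reinhard(hdr.b * exposure), 1.0f / kGamma);
    return out;
}
```

The point of the curve is that very bright inputs (engine glow, stars) compress smoothly toward 1.0 instead of clipping.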

We MIGHT be implementing (may get bumped to second pass):
- A frame-graph based representation for efficient and intelligent rendering.
- Support for transparent meshes.
- A bloom postprocessing filter
- Screen-Space Ambient Occlusion post-processing support.
- glTF asset import and storage. This will mean replacing SGM with glTF + collision cache files.

We WON'T be implementing (on the first pass):
- Shadows beyond a simple ray-sphere ring / planet shadow
- Additional anti-aliasing modes (TAA etc.).
- Deferred (projector) decals.
- Light Probes (realtime or precomputed)
- Screen-Space Reflection post-processing support.

FAQ:

Will this mean we can't support older devices any more?
- I'm not sure yet. Anything supporting OpenGL 3.3 / 4.0 will likely be fine, as I'm intentionally keeping the renderer as simple as possible. Everything in the WILL and MIGHT categories is achievable under OpenGL 3, though it might be more efficient if we were to kick the requirement up to 4.0.

Will this impact performance significantly?
- Pessimistically, yes. A PBR shader is a much more complex beast than our current simple shading pipeline, but honestly I don't think we're anywhere near GPU-bound currently, even on the lowest-spec hardware we support. My estimation is that we'll get a full 60FPS on cards >= GTX 760, and drop to 30FPS on Skylake-and-earlier integrated GPUs.
- Additionally, I intend to define a number of configurable parameters to control the performance impact.

Why clustered-forward? Why not a deferred shading pipeline?
- Performance. A deferred pipeline takes a significant amount of bandwidth, more than an integrated GPU can comfortably provide. A clustered-forward solution is designed to mitigate performance costs as much as possible while not compromising on quality, and (as DOOM and others have proved) can deliver very high quality results on median hardware.
- Also simplicity. A forward pipeline is miles less complex to implement than a deferred pipeline, and will just as adequately satisfy all our requirements for Pioneer.

Are you implementing Vulkan support?
- No. This is already challenging enough on OpenGL, no need to throw another graphics API in the ring. When we ditch support for OpenGL 3 and non-Vulkan-capable devices, I'd be more than happy to port the whole shebang to Vulkan (preferably using a boilerplate-avoidance middleware), but until then, I'm not maintaining two different backends with their own sets of bugs.

Why aren't you doing shadows on the first pass?
- Shadows are hard, man. More to the point, shadowing requires a tricky culling implementation to avoid massive performance overheads, and requires a complex change-tracking system if we want more than 4 shadow-casting lights per frame.

Anything else?
- Please hit me with any questions you might have. Also words of encouragement. Those are pretty nice too.
impaktor
Posts: 993
Joined: Fri Dec 20, 2013 9:54 am
Location: Tellus
Contact:

Re: PBR Rendering Rewrite

Post by impaktor »

sturnclaw wrote:Just a note before I go any further: this is going to be a long project, potentially spanning multiple months to implement.
Haha! "months" is not long. The "move from OldUI/C++" has been going on for 5+ years. If this will be measured in "months", then that would be most amazing!
sturnclaw wrote:However, I am confident that I can actually implement the planned features on my own, so there should be no significant danger of this falling by the wayside or going unfinished for two years. (cough)
That's very good. However, historically, what usually happens is that the one person working on "large project X" tires or real life gets in the way. Sort of what happened with robn and the OldUI->NewUI/Lua migration.

Although I don't know anything about rendering, I applaud your efforts and the time you put into it, and am looking forward to seeing the progress.

So the reason for doing this overhaul to begin with is PBR, decals, and better textures?

Does "no shadows" mean the eclipse in Hades system will be broken?

I think, in a space game like Pioneer, shadows (from planets, moons, space stations, ships) are more noticeable/important than ship textures & decals, which you only see on rare occasions, like when you buy your ship, I guess? (I hope I'm not stepping on anyone's toes now)

Also, the biggest performance hit now is rendering cities, so optimizations to anything involving buildings and such would be a benefit. Something to keep in mind.

Anyway, these were some thoughts of mine, but fluffy is the one who has to give input on this, as this is his area of expertise.
Keep up the good work!
sturnclaw

Re: PBR Rendering Rewrite

Post by sturnclaw »

impaktor wrote: Mon Jun 24, 2019 11:17 am
Does "no shadows" mean the eclipse in Hades system will be broken?

I think, in a space game like Pioneer, shadows (from planets, moons, space stations, ships) are more noticeable/important than ship textures & decals, which you only see on rare occasions, like when you buy your ship, I guess? (I hope I'm not stepping on anyone's toes now)
For the first iteration, I'll be implementing a simple ray-sphere-test planetary shadow for directional lights, similar to what we have now for planetary rings and whatnot. It's not "true" shadowing, but it's cheap and already implemented. Again, you can see my reasoning for not tackling shadows on the first pass in the post above.
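Roughly, the kind of ray-sphere occlusion test I'm describing looks like this. This is a sketch, not the actual implementation; all types and names are illustrative. For a directional light, we ask whether the ray from the shaded point toward the star passes within the occluding body's radius.

```cpp
// Hypothetical sketch of a cheap eclipse test: does a sphere (planet/moon)
// block the light arriving at a point from a given direction?
struct Vec3 { double x, y, z; };

inline Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// point: position being shaded; lightDir: unit vector toward the star;
// center/radius: the occluding body.
bool occludedByBody(Vec3 point, Vec3 lightDir, Vec3 center, double radius)
{
    Vec3 toCenter = sub(center, point);
    double t = dot(toCenter, lightDir);      // closest approach along the ray
    if (t < 0.0) return false;               // occluder is behind the point
    double d2 = dot(toCenter, toCenter) - t * t; // squared miss distance
    return d2 < radius * radius;             // inside the hard-shadow cylinder
}
```

A real version would soften the edge for penumbra, but the shape of the test is the same.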
FluffyFreak
Posts: 1342
Joined: Tue Jul 02, 2013 1:49 pm
Location: Beeston, Nottinghamshire, GB
Contact:

Re: PBR Rendering Rewrite

Post by FluffyFreak »

We WILL be implementing:
- A clustered-forward rendering pipeline, capable of handling a large number of lights per frame (no more over-reliance on NavLights!).
- A Metallic-Roughness PBR authoring workflow, requiring a minimum number of textures per material.
- A unified weathering system for ships and other geometry, based on nozmajner's excellent prototype work.
- A mesh-decal based detailing system for ships, with LOD transitions and normal-contribution blending.
- Point, Spot, and Directional lights, in high enough quantities to allow lighting contributions from planetary reflections and multiple stars per system
- HDR lighting and Tonemapping
  • Clustered forward is cool. You might need to be aware that we do some crazy things with scale and position for planets/stars because we don't have infinite z-depth (despite using a logarithmic z-buffer), which can be seen here and here; that might affect z-partitioning schemes.
  • Sounds cool
  • Yay!
  • Also yay! This decal system is also based on Noz's exploration work?
  • More light types are good; we do already support up to 4 stars, by the way.
  • HDR is going to be interesting.
We MIGHT be implementing (may get bumped to second pass):
- Support for transparent meshes.
- glTF asset import and storage. This will mean replacing SGM with glTF + collision cache files.
  • We do have some transparent rendering already (halos, planet/gas giant rings, debris, navlights, etc.), so this might be required. Or do you mean for ships/stations/models etc.?
  • I don't see how glTF is a replacement for SGM. For the Assimp lib perhaps, but SGM is supposed to be lower-level, processed data in our engine's custom format for faster loading, whereas glTF is a JSON format for program interchange like COLLADA, OBJ, etc.
My overall questions would be:
  • Can we apply this in stages? I.e. instead of one giant PR at some point in the future can we begin transitioning the existing renderer in stages?
  • Will there be a post-processing system?
sturnclaw

Re: PBR Rendering Rewrite

Post by sturnclaw »

FluffyFreak wrote: Thu Jun 27, 2019 9:33 pm
  • Clustered forward is cool. You might need to be aware that we do some crazy things with scale and position for planets/stars because we don't have infinite z-depth (despite using a logarithmic z-buffer), which can be seen here and here; that might affect z-partitioning schemes.
  • Sounds cool
  • Yay!
  • Also yay! This decal system is also based on Noz's exploration work?
  • More light types are good; we do already support up to 4 stars, by the way.
  • HDR is going to be interesting.
Yeah, I'm aware of the fun stuff we do with logarithmic-Z. There's a very simple workaround though... planets beyond about 10k are almost guaranteed to never receive any lighting contributions from anything but full-scene directional lights. As such, we can render them in a separate pass without the usual clustered-forward approach, as there will only be a limited number of light-casting stars in a scene.

Additionally, most of the time a planet doesn't change perceptibly between frames unless you're really close, due to the distances involved. We could render background planets to a texture and only update it every so often. This isn't quite as useful except in situations where e.g. multiple moons are visible from a planet, but it might be something worth looking into.

...Also clustered-forward uses exponential depth slicing anyways, so this should *save* us a couple exp2/log2 calls.
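For reference, by exponential depth slicing I mean the standard clustered-shading mapping from view-space depth to cluster slice index, where slice boundaries grow geometrically with distance. A sketch (the near/far/slice-count values in the comments are illustrative, and this is the textbook formula, not our code):

```cpp
#include <cmath>

// Map a view-space depth z in [zNear, zFar] to an exponential slice index
// in [0, numSlices). Nearby depths get thin slices, distant ones get thick
// slices, matching where lighting precision actually matters.
int depthToSlice(float z, float zNear, float zFar, int numSlices)
{
    float scale = numSlices / std::log(zFar / zNear);
    int slice = static_cast<int>(std::log(z / zNear) * scale);
    if (slice < 0) slice = 0;                       // clamp below near plane
    if (slice >= numSlices) slice = numSlices - 1;  // clamp at far plane
    return slice;
}
```

In the shader this is the per-fragment lookup that picks which cluster's light list to walk.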
  • We do have some transparent rendering already (halos, planet/gas giant rings, debris, navlights, etc.), so this might be required. Or do you mean for ships/stations/models etc.?
  • I don't see how glTF is a replacement for SGM. For the Assimp lib perhaps, but SGM is supposed to be lower-level, processed data in our engine's custom format for faster loading, whereas glTF is a JSON format for program interchange like COLLADA, OBJ, etc.
Yeah, I'm talking about transparency for cockpits, windows, etc.: things that need reflections and proper depth sorting, as opposed to the planetary rendering path.

Regarding glTF, it does almost everything we do now. It stores scene hierarchy information, stores binary vertex data in the exact same format as SGM, and we can, with an extension, store arbitrary data like compiled colliders in it as well. Additionally, it allows us to decompress and process only the scene hierarchy information, so we can asynchronously load models when they are needed, cutting back on loading times and VRAM.

Overall, it's a well-defined format with numerous loaders/savers, it supports assigning materials and textures directly in the model file (so no more .model text file), and it's easier to debug than our current SGM files.

We can take this in stages, but there's almost no downside to using it.
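To illustrate why the hierarchy/buffer split matters for async loading, here's a toy sketch. These are hand-rolled structs, not the actual glTF schema or any loader's API: the point is that nodes reference mesh data by index, so the scene tree can be walked (and decisions made) before any binary buffer is touched.

```cpp
#include <string>
#include <vector>

// Hypothetical, simplified stand-in for a glTF-style node: a name, an
// optional mesh reference, and child indices into a flat node array.
struct GltfNode {
    std::string name;
    int meshIndex = -1;        // -1 means a pure transform/group node
    std::vector<int> children; // indices into the node array
};

// Count how many nodes in a subtree actually reference mesh data, i.e. how
// many binary buffer loads a lazy loader would eventually have to issue.
int countMeshNodes(const std::vector<GltfNode>& nodes, int root)
{
    const GltfNode& n = nodes[root];
    int count = (n.meshIndex >= 0) ? 1 : 0;
    for (int child : n.children)
        count += countMeshNodes(nodes, child);
    return count;
}
```

A loader built this way can parse the JSON hierarchy up front, then stream the heavy vertex buffers on a worker thread only when a model is actually needed.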
My overall questions would be:
  • Can we apply this in stages? I.e. instead of one giant PR at some point in the future can we begin transitioning the existing renderer in stages?
  • Will there be a post-processing system?
Yes, we'll be taking this in stages. For the first stage, I'm setting up a simple, forward-compatible rendering abstraction (that we can plug a Vulkan backend into later) and getting ImGui rendering working. The next stage will be to scaffold the clustered-forward renderer and plug a simple test case into it to make sure it works.

I'm not going to be upgrading the existing renderer, for two reasons. The first is that it's fairly married to the old way of doing things and I'd rather make a clean break and build the renderer around a Vulkan-compatible framegraph structure, and the second is that I'd like to be able to re-use the work I'm doing (and the new code I'm writing) for other, non-GPL-compatible projects of my own.

I had a chat with basically everyone on IRC about code licensing, and the general solution I went with is to separate the parts of this project that don't involve existing code into an MIT-licensed module. I don't mean any ill by this, but this is, under any approach, a fairly large undertaking involving writing a bunch of new code, and I can't justify the time investment if the result is GPL-encumbered.

Also yes, there will be a post-processing system. Not sure of the details yet, but I'll work them out as I get closer.
impaktor

Re: PBR Rendering Rewrite

Post by impaktor »

Didn't we say that GPL allows the original author to re-use their code under other licenses? I.e. there's no problem with you writing code under GPL for Pioneer, and then re-using that code in your own non-free project, under some other license?
FluffyFreak

Re: PBR Rendering Rewrite

Post by FluffyFreak »

Dual Licensing is possible, with caveats:
https://softwareengineering.stackexchan ... ce-license
https://en.wikipedia.org/wiki/Multi-licensing

I think the problem might arise where it goes into Pioneer, one of us adds to it (GPL), and then you'd want to use that code in your non-GPL projects... that wouldn't be allowed, would it? I'm not sure, we'd need a lawyer!
sturnclaw

Re: PBR Rendering Rewrite

Post by sturnclaw »

FluffyFreak wrote: Sun Jul 07, 2019 3:42 pm Dual Licensing is possible, with caveats:
https://softwareengineering.stackexchan ... ce-license
https://en.wikipedia.org/wiki/Multi-licensing

I think the problem might arise where it goes into Pioneer, one of us adds to it (GPL), and then you'd want to use that code in your non-GPL projects... that wouldn't be allowed, would it? I'm not sure, we'd need a lawyer!
Yeah, that's specifically why I want to separate the code in question into its own module (src/core/ or even contrib/trinity/) and license that module solely under MIT. Any contributions to that module specifically will fall under MIT, while contributions to the rest of the codebase, even if they reference that code, will fall under GPL.

Anyways, progress report. I didn't get much chance this week to work on the new architecture due to some real-life concerns, but I got the command buffer API scaffolded out. This API closely mimics a modern (e.g. Vulkan) approach to rendering commands, and under the hood will assemble a graph of rendering operations that will be used to minimize OpenGL state changes and can be introspected for profiling and debugging. Or that's the plan, at least.

An extra upside to this approach is that we can record commands on any thread we want, store them as long as we want, and then later send the command buffer to the rendering thread for execution. Now... SDL's limitations mean that the rendering thread probably has to be the main thread, which isn't so great. I'll look into ways to work around that later.