The UI has a concept of layers, which are full-screen (or full context-area, but that's a technicality) overlaid widgets. All input goes to the top layer only, but the back layers all still receive update and draw calls every frame. It's only used very sparingly right now, for the bulletin board forms. They look like a dialog box, but they're actually a 3x3 grid container with content in the centre cell, placed on top of a semi-transparent black background to produce the "greyed out" effect.
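To make that concrete, here's a minimal sketch of the idea (hypothetical names, not the actual widget code): every layer gets update and draw calls each frame, but input stops at the top of the stack.

```cpp
#include <memory>
#include <vector>

struct Event;   // stand-in for whatever the UI's event type is

class Widget {
public:
    virtual ~Widget() {}
    virtual void Update() = 0;
    virtual void Draw() = 0;
    virtual bool HandleEvent(const Event &e) = 0;   // returns true if the event was consumed
};

class Context {
public:
    void PushLayer(std::unique_ptr<Widget> w) { m_layers.push_back(std::move(w)); }
    void PopLayer() { m_layers.pop_back(); }

    // every layer gets updated and drawn, back to front
    void Update() { for (auto &l : m_layers) l->Update(); }
    void Draw()   { for (auto &l : m_layers) l->Draw(); }

    // but only the top of the stack ever sees input (current behaviour)
    bool DispatchEvent(const Event &e) {
        return m_layers.empty() ? false : m_layers.back()->HandleEvent(e);
    }

private:
    std::vector<std::unique_ptr<Widget>> m_layers;
};
```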
I'm intending to layer the entire display as follows, from back to front (see the sketch after the list):
- World (ie raw camera)
- World effects (speed lines)
- Cockpit (including instruments)
- HUD #1 (projected elements eg targets)
- HUD #2 (flat elements eg scanner, contact list)
- Menus, station screens, etc
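Setting that up would be nothing more than pushing the layers in order. This is only a sketch - the layer widgets here are hypothetical Widget subclasses, not real classes - but it shows how cheap the arrangement is, and how reordering (see the bikeshed below) is just swapping two lines.

```cpp
// Illustrative only: assumes each of these is a Widget subclass like the ones
// in the sketch above.
void SetupFlightView(Context &context)
{
    context.PushLayer(std::make_unique<WorldCameraWidget>());   // raw camera view of the world
    context.PushLayer(std::make_unique<WorldEffectsWidget>());  // speed lines etc
    context.PushLayer(std::make_unique<CockpitWidget>());       // cockpit model + instruments
    context.PushLayer(std::make_unique<ProjectedHudWidget>());  // HUD #1: projected targets etc
    context.PushLayer(std::make_unique<FlatHudWidget>());       // HUD #2: scanner, contact list
    // menus, station screens etc get pushed on top of all this as needed
}
```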
There will need to be some shared state between them, most notably the camera orientation & position for drawing the 3D elements, and the camera frustum for projections. I think these can be held in a separate shared structure, calculated once right at the start of the frame.
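One possible shape for that structure, with illustrative names (and assuming the existing vector3d/matrix4x4d math types):

```cpp
// Calculated once at the start of the frame, then handed to every layer that
// needs it.
struct ViewState {
    vector3d   camPos;       // camera position, for drawing 3D elements
    matrix4x4d camOrient;    // camera orientation
    matrix4x4d projection;   // frustum/projection, for mapping world positions to screen
};

// HUD #1 would project through something like this rather than poking at the
// Camera directly (hypothetical helper, shown as a declaration only).
bool ProjectToScreen(const ViewState &vs, const vector3d &worldPos, vector3d &screenPosOut);
```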
Keeping them separate makes it really easy to produce different effects too. We can destroy the HUD independently. We can just use the camera widget to implement missile cams, ground surveillance, drones, whatever - these are all standard UI widgets and so can be embedded anywhere. It also allows/forces some good code structure, like properly separating out the physics and lighting stuff that the cockpit will need.
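As a purely hypothetical example of that embedding (none of these names are real; assume BoxContainer and WorldCameraWidget are ordinary Widget subclasses), a missile cam could be as simple as:

```cpp
// The camera view is just another widget, so a missile cam is that widget
// pointed at a different body and dropped into any ordinary container.
std::unique_ptr<Widget> MakeMissileCamPanel(Body *missile)
{
    auto cam = std::make_unique<WorldCameraWidget>();
    cam->SetViewTarget(missile);                    // follow the missile instead of the player
    auto panel = std::make_unique<BoxContainer>();  // any ordinary container widget
    panel->Add(std::move(cam));
    return panel;                                   // embed it in a station screen, a HUD layer, wherever
}
```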
There's a change required for input here - it needs to be possible for a layer to pass an event down to a lower layer, rather than the current situation where only the top layer can ever receive events. Once we have that then any unhandled event on HUD #2 (like a background click) gets passed to HUD #1, where maybe a ship was clicked or something. Or to the cockpit instruments from there.
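A sketch of what that might look like, replacing the top-layer-only DispatchEvent in the first sketch: walk the stack from the top down and let each layer decline the event.

```cpp
bool Context::DispatchEvent(const Event &e)
{
    // top-down: HUD #2 sees it first, then HUD #1, then the cockpit, and so on
    for (auto it = m_layers.rbegin(); it != m_layers.rend(); ++it)
        if ((*it)->HandleEvent(e))
            return true;   // a layer claimed the event; lower layers never see it
    return false;          // fell all the way through (a flight-control "catch-all" could live here)
}
```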
There's still some stuff up in the air around shortcut keys and "catch-all" event handling for flight control. jpab and I have talked about these a lot in the last year and I think we're pretty close to knowing what we need to do. I'm not going to dwell on it further here, at least not yet - input is a fair way off in this stuff anyway.
Slight aside, possible bikeshed argument: I'm not sure if projected elements should be behind or in front of the cockpit. The argument for in front is that you want to see eg the targeted ship even if it's behind the cockpit (and it's probably projected onto your helmet/eyeball anyway). Against is the fact that you don't want crap obscuring your cockpit when you're trying to click something. But the beauty of this arrangement is that we'll be able to swap the order just by swapping lines of Lua around. So we'll be able to experiment.
I started some experiments today with a UI camera widget, see robn/new-ui-worldview. But I think I won't go much further until I've pulled a good amount of the state out of the core Camera class so I can build on top of it.
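One way that split could look (a rough sketch only, not necessarily what the branch will end up doing) is pulling the "where is the camera and what can it see" state into its own little object, so the core Camera and the UI camera widget can both build on top of it:

```cpp
// Rough sketch only; names and shape are illustrative.
class CameraContext {
public:
    void SetTransform(const matrix4x4d &orient, const vector3d &pos) { m_orient = orient; m_pos = pos; }
    void SetFrustum(double fovY, double aspect, double zNear, double zFar) {
        m_fovY = fovY; m_aspect = aspect; m_zNear = zNear; m_zFar = zFar;
    }
    const matrix4x4d &GetOrient() const { return m_orient; }
    const vector3d &GetPosition() const { return m_pos; }

private:
    matrix4x4d m_orient;
    vector3d m_pos;
    double m_fovY = 60.0, m_aspect = 1.0, m_zNear = 1.0, m_zFar = 10000.0;
};
```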
That's all I've got for now!