On the other hand, most programs use standard widgets: text fields, list views, buttons, which would be relatively easy to draw on a different output device. but even here, optical customisations would get lost. on a TUI the layout would change because of the coarse grid, and even because of the line lengths of monospace vs. proportional text
So, there has to be an element of graceful degradation. there are widgets (ui-elements, entities, whatever we call them in the end) that translate semantically but lose their styling, and there are widgets that don't translate at all... i think this would look a lot like css in the end (maybe simpler and without the cruft...). which wouldn't be the worst thing, considering websites scale from mobile to desktop and even to text browsers and screen readers
to me it ticks some very interesting boxes: It seems to leverage decent existing libraries (e.g. webrender for rendering, yoga for layout, etc), provides a declarative way to design the UI and separate state from rendering, and it has support for multiple backends.
very early goings of course, but I’d definitely recommend you take a look at what’s going on there.
i didn't know RSX. it looks interesting
(btw, sorry for being so unresponsive the last few days after starting this gitter... i'm finalizing a thesis at the moment, it should get better in a few days ;) )
honestly the idea of multiple frontends feels like a distraction
we should probably have a state management paradigm that makes it straightforward to use the same app state across frontends, but that's probably the extent of what we should try to do
I'm starting to wonder if there are two GUI libs
one of them is a very universal look-and-feel for all platforms. It's able to handle multiple resolutions easily, as well as different form factors
the other tries as much as possible to use the native look-and-feel. It has different frontends that can support a fairly universal programming model, though no doubt the users do have to do a bit more work. We could probably help testing these apps
yes, fair enough regarding the multiple frontends. having a one-size-fits-all approach to UI development always leads to tradeoffs and you may get more of them once you try to support multiple paradigms. having said that, tools like QML are handling embedded, mobile and desktop use-cases and they’re fairly successful in that regard.
doesn’t mean you need to design all of them upfront, certainly.
When i was doing Qt for a job i was stuck with 4.xx, so i haven't actually used QML
re state management: for me this has always been the main pain point when using UI libraries. for instance, if you want to have a custom TreeView in Qt you need to do a custom implementation of QAbstractTreeModel, and you’re basically asked to implement methods like “give me the Nth child of this specific item”, “what’s the tooltip of this item”, “what is the parent of this item”. additionally you need to emit the right messages at the right time, so that all UI components understand when an item has been inserted at a specific position. this is surprisingly tricky to get right, and Qt even ships a “ModelTester” class that you can use to unit test your custom model. I wrote a custom TreeModel at one point that worked in a “React”-style approach using diffs between previous and current state, and would emit the appropriate events based on that, but this was of course relatively slow for large trees.
so for a good state management crate you’d probably have to start with persistent data structures (like trees), that understand the concept of “previous state” and “current state”.
if you have that working, undo/redo also flows naturally from that, without having to write tons of custom undo/redo command implementations.
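a minimal sketch of what i mean, with made-up names (`Node` and `History` are hypothetical, not a real crate). the point is that because the tree is persistent (structure shared via `Rc`), keeping every "previous state" around is cheap, and undo/redo is just moving a cursor over snapshots instead of writing command objects:

```rust
use std::rc::Rc;

// Hypothetical persistent tree node: updates build a new root that
// shares unchanged subtrees with older snapshots via Rc.
struct Node {
    label: String,
    children: Vec<Rc<Node>>,
}

// Undo/redo falls out of keeping the snapshots: no command pattern needed.
struct History {
    states: Vec<Rc<Node>>, // each entry is a full (shared) snapshot
    cursor: usize,
}

impl History {
    fn new(initial: Rc<Node>) -> Self {
        History { states: vec![initial], cursor: 0 }
    }

    fn current(&self) -> &Rc<Node> {
        &self.states[self.cursor]
    }

    // Committing after an undo truncates the redo branch, like most editors.
    fn commit(&mut self, next: Rc<Node>) {
        self.states.truncate(self.cursor + 1);
        self.states.push(next);
        self.cursor += 1;
    }

    fn undo(&mut self) {
        self.cursor = self.cursor.saturating_sub(1);
    }

    fn redo(&mut self) {
        if self.cursor + 1 < self.states.len() {
            self.cursor += 1;
        }
    }
}
```

diffing "previous state" vs "current state" then becomes comparing two snapshots, which is exactly what the React-style TreeModel above needed.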
I'd rather not tie anyone to a specific data structure either
that'll just lead to the same problem as qt's abstract models, but instead of herding events you're just herding mutations
one thing I've been considering is something like QAbstractTreeModel but where it doesn't use events to update the UI
and instead it just unconditionally repaints everything visible whenever anything has changed
which gives a react/vue-like flow but without any diffing
the event loop might look something like this:
1. receive OS event(s)
2. run capture+bubble triggering based on cursor and focus, filtering OS events into more semantic events (but not changing any state)
3. pass the semantic events off to "components" and actually update state in response (this includes both UI state like scrolling/text selection/etc and user state like text/radio selection/etc)
4. re-run styling, layout, and paint, then go back to waiting on OS events
with that default probably-doing-too-much flow you could imagine compartmentalizing things in step 3 so that you know which sub-sections of the UI need layout/etc
but that could be approached as an optimization, rather than something the user has to get right or risk dropping stuff on the floor
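roughly, in made-up Rust (`RawEvent`, `SemanticEvent`, `AppState` and `pump` are all invented names, just to show the filter → update → unconditional-repaint flow, not any real library's API):

```rust
// Hypothetical OS-level events; a real app would get these from winit etc.
enum RawEvent {
    MouseDown { x: i32, y: i32 },
    Key(char),
}

// Hypothetical semantic events produced by capture/bubble filtering.
enum SemanticEvent {
    Click { target: usize },
    Text(char),
}

#[derive(Default)]
struct AppState {
    clicks: u32,   // user state
    text: String,  // user state
    repaints: u32, // stand-in for the paint pass
}

// Step 2: filter raw events into semantic ones. A real version would
// hit-test against the widget tree; crucially, this mutates no state.
fn to_semantic(ev: &RawEvent) -> SemanticEvent {
    match ev {
        RawEvent::MouseDown { .. } => SemanticEvent::Click { target: 0 },
        RawEvent::Key(c) => SemanticEvent::Text(*c),
    }
}

// Step 3: components update state in response to semantic events.
fn update(state: &mut AppState, ev: SemanticEvent) {
    match ev {
        SemanticEvent::Click { .. } => state.clicks += 1,
        SemanticEvent::Text(c) => state.text.push(c),
    }
}

// One turn of the loop: filter, update, then repaint unconditionally.
// No diffing, no fine-grained change events to herd.
fn pump(state: &mut AppState, batch: Vec<RawEvent>) {
    for raw in &batch {
        update(state, to_semantic(raw));
    }
    state.repaints += 1; // step 4: restyle/layout/paint everything visible
}
```

the "repaints" counter is where compartmentalization could slot in later, as an optimization, without changing what the user writes.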
the key API data structure there, I think, would be a kind of "template" that looks a lot like the DOM but doesn't contain any content, only handles into the state which could then be stored any way the user likes
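something like this, very hand-wavy (`Template` and `StateHandle` are invented names): the tree describes structure only, and every piece of content is a handle the user resolves against storage they own:

```rust
// Hypothetical DOM-like template: shape without content.
enum Template {
    Label { text: StateHandle },
    Column(Vec<Template>),
}

// An opaque handle into user-owned state; here just an index,
// but it could be a key into any storage the user likes.
#[derive(Clone, Copy)]
struct StateHandle(usize);

// "Painting" resolves handles against the user's storage at the last
// moment; here the storage is a plain slice of strings and the output
// is a string, standing in for a real render pass.
fn render(t: &Template, strings: &[String], out: &mut String) {
    match t {
        Template::Label { text } => {
            out.push_str(&strings[text.0]);
            out.push('\n');
        }
        Template::Column(children) => {
            for c in children {
                render(c, strings, out);
            }
        }
    }
}
```

since the template never copies the content, the unconditional-repaint loop above can just re-run `render` against whatever the state currently is.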
@rpjohnst like a scene graph in a video game?
but scene graphs often include the actual data as well
i was thinking more of how it's done with entity component system thingies
the rendering approach is basically the same
yeah, modern game architecture should be pretty applicable to GUIs, at least partly
Gerald E Butler
@madmalik Yeah, sorry, haven't been around. The "software" was Progress 4GL RAD development. It's a fairly sophisticated 4GL database language that's been around since the '80s and ran on everything from DOS up to large Unix boxes and mainframes. It was kind of ahead of its time. It's still around, but Open Source, Java, C#, etc. have really cut into its market.