omg, I hadn't realised what OrbTk was. When I first saw the name I thought it was a tcl/tk thing (guess that dates me)
so - re: native components, I've been having this itching idea that we could really leverage the Rust trait system. It'd be awesome if there was a way to "mixin" native widgets where you needed them. Maybe there's a default UI, but if you need the native text input area, you could get an optional component for your platform and plug it into your system
the thing I'm worried about there is that native controls severely restrict what you can do with them
because they all have their own way of rendering and getting data in and out
and occasionally they even care who owns the event loop :scream:
well, in the ideal world, the impl of the widget trait for whatever native wrapper you have would translate between the two systems i guess
would probably need some way to inject events from the widget back into the rust framework's event system
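roughly what i'm imagining, as a toy sketch (all names here are made up, not a real API): the framework-drawn widget and the native wrapper implement the same trait, and the native one pushes translated events back into the framework's queue via a channel

```rust
// Events our hypothetical framework understands.
#[derive(Debug, Clone, PartialEq)]
enum UiEvent {
    TextChanged(String),
    Clicked,
}

// The trait every widget implements, native or not.
trait Widget {
    fn paint(&self) -> String; // stand-in for real draw commands
    fn handle(&mut self, ev: UiEvent);
}

// Default, framework-drawn text input.
#[derive(Default)]
struct TextInput { text: String }

impl Widget for TextInput {
    fn paint(&self) -> String { format!("[{}]", self.text) }
    fn handle(&mut self, ev: UiEvent) {
        if let UiEvent::TextChanged(t) = ev { self.text = t; }
    }
}

// A native wrapper implements the same trait, translating platform
// callbacks into UiEvents injected back into the framework's loop.
struct NativeTextInput {
    text: String,
    outbox: std::sync::mpsc::Sender<UiEvent>, // event injection channel
}

impl Widget for NativeTextInput {
    fn paint(&self) -> String {
        // the OS paints this one; nothing for us to draw
        String::new()
    }
    fn handle(&mut self, ev: UiEvent) {
        if let UiEvent::TextChanged(t) = ev {
            self.text = t.clone();
            // forward to the framework's event system
            let _ = self.outbox.send(UiEvent::TextChanged(t));
        }
    }
}
```

the nice part is that callers only ever see `dyn Widget`, so swapping the native version in per-platform stays an implementation detail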
Gerald E Butler
I had the following thoughts on this recently. Wouldn't it be nice to have a UI toolkit that seamlessly supported both a character-based UI (curses-like, TUI/CUI) and a graphical UI (GUI), where the TUI/CUI would work across OSes for console/terminal applications (with the caveat that sufficiently powerful console capabilities are available) and the GUI could target Vulkan/OpenGL/framebuffer/OS-specific render APIs all equally (within reason) well.
I have some experience with working with proprietary software that had this capability and there is a lot to be said for such a capability.
It sounds preposterous at first, but, if you've ever worked with something like this, you can see it is feasible.
@gbutler69 agreed. I was just playing around with Xi, which had frontends for both text and different GUIs
and it'd be nice to not have to pick, if you happened to be just logging into a remote system
it'd be extremely cool... @gbutler69 can you name that proprietary piece of software that does that?
@madmalik - yeah, I'm starting to wonder if the declarative approach is the right one
you seemingly have less control over the layout, but perhaps the layout can come from a platform-native styling?
vulkan isn't cross-platform, but we have things like the gfx crate (https://github.com/gfx-rs/gfx) that we could potentially use since we're doing it in Rust
some thoughts on @gbutler69's idea: the goal itself is just extremely cool. in combination with good ui state handling this would allow switching seamlessly between front-ends. For example, using the same text editor (and even the same running instance) on the terminal and in the gui would be just awesome
But there are also some programs where this cannot work. for example an image manipulation program or an audio workstation where one is editing waveforms. or simply a pdf viewer...
On the other hand, most programs use standard widgets: text fields, list views, buttons, which would be relatively easy to draw on a different output device. but even here optical customisations would fall by the wayside. on a TUI the layout would change because of the coarse grid, and even because of the line lengths of monospace vs. proportional text
So, there has to be an element of graceful degradation. there are widgets (ui-elements, entities, whatever we call them in the end) that translate semantically but lose their styling, and there are widgets that don't translate at all... i think this would look a lot like css in the end (maybe simpler and without the cruft...). which wouldn't be the worst thing, considering websites scale from mobile to desktop and even to text browsers and screen readers
to me it ticks some very interesting boxes: It seems to leverage decent existing libraries (e.g. webrender for rendering, yoga for layout, etc), provides a declarative way to design the UI and separate state from rendering, and it has support for multiple backends.
very early goings of course, but I’d definitely recommend you take a look at what’s going on there.
i didn't know RSX. it looks interesting
(btw, sorry for being so unresponsive the last few days after starting this gitter... i'm finalizing a thesis at the moment, it should get better in a few days ;) )
honestly the idea of multiple frontends feels like a distraction
we should probably have a state management paradigm that makes it straightforward to use the same app state across frontends, but that's probably the extent of what we should try to do
I'm starting to wonder if there should be two GUI libs
one of them is a very universal look-and-feel for all platforms. It's able to handle multiple resolutions easily, as well as different form factors
the other tries as much as possible to use the native look-and-feel. It has different frontends that can support a fairly universal programming model, though no doubt the users do have to do a bit more work. We could probably help testing these apps
yes, fair enough regarding the multiple frontends. having a one-size-fits-all approach to UI development always leads to tradeoffs and you may get more of them once you try to support multiple paradigms. having said that, tools like QML are handling embedded, mobile and desktop use-cases and they’re fairly successful in that regard.
doesn’t mean you need to design all of them upfront, certainly.
When i was doing Qt for a job i was stuck with 4.x, so i haven't actually used QML
re state management: for me this has always been the main pain point when using UI libraries. for instance, if you want to have a custom TreeView in Qt you need to do a custom implementation of QAbstractTreeModel, and you’re basically asked to implement methods like “give me the Nth child of this specific item”, “what’s the tooltip of this item”, “what is the parent of this item”. additionally you need to emit the right messages at the right time, so that all UI components understand when an item has been inserted at a specific position. this is surprisingly tricky to get right, and Qt even ships a “ModelTester” class that you can use to unit test your custom model. I wrote a custom TreeModel at one point that worked in a “React”-style approach using diffs between previous and current state, and would emit the appropriate events based on that, but this was of course relatively slow for large trees.
so for a good state management crate you’d probably have to start with persistent data structures (like trees), that understand the concept of “previous state” and “current state”.
if you have that working, undo/redo also flows naturally from that, without having to write tons of custom undo/redo command implementations.
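a minimal sketch of what i mean, with my own made-up names: state kept as cheaply-clonable snapshots (here just an `Rc`; a real persistent tree would share unchanged nodes too), so "previous" vs "current" is just two handles, and undo/redo is a stack of those handles instead of custom command objects

```rust
use std::rc::Rc;

// App state as an immutable snapshot; cloning is cheap because the
// payload is behind an Rc.
#[derive(Clone, PartialEq, Debug)]
struct Doc {
    lines: Rc<Vec<String>>,
}

struct History {
    undo: Vec<Doc>,
    redo: Vec<Doc>,
    current: Doc,
}

impl History {
    fn new(doc: Doc) -> Self {
        History { undo: vec![], redo: vec![], current: doc }
    }
    // Every edit produces a new snapshot; the old one goes on the
    // undo stack, and any redo history is invalidated.
    fn commit(&mut self, next: Doc) {
        self.undo.push(self.current.clone());
        self.redo.clear();
        self.current = next;
    }
    fn undo(&mut self) {
        if let Some(prev) = self.undo.pop() {
            self.redo.push(std::mem::replace(&mut self.current, prev));
        }
    }
    fn redo(&mut self) {
        if let Some(next) = self.redo.pop() {
            self.undo.push(std::mem::replace(&mut self.current, next));
        }
    }
}
```

and a diff between `history.undo.last()` and `history.current` is exactly the "previous vs current state" a React-style model needs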
I'd rather not tie anyone to a specific data structure either
that'll just lead to the same problem as qt's abstract models, but instead of herding events you're just herding mutations
one thing I've been considering is something like QAbstractTreeModel but where it doesn't use events to update the UI
and instead it just unconditionally repaints everything visible whenever anything has changed
which gives a react/vue-like flow but without any diffing
the event loop might look something like this:
1. receive OS event(s)
2. run capture+bubble triggering based on cursor and focus, filtering OS events to more semantic events (but not changing any state)
3. pass the semantic events off to "components" and actually update state in response (this includes both UI state like scrolling/text selection/etc and user state like text/radio selection/etc)
4. re-run styling, layout, and paint, then go back to waiting on OS events
with that default probably-doing-too-much flow you could imagine compartmentalizing things in step 3 so that you know which sub-sections of the UI need layout/etc
but that could be approached as an optimization, rather than something the user has to get right or risk dropping stuff on the floor
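the whole loop might look roughly like this (a toy sketch, all names made up, with trivial stand-ins for the real steps):

```rust
// Raw OS-level events (step 1 receives a batch of these).
#[derive(Debug)]
enum OsEvent { MouseDown { x: i32, y: i32 }, Key(char) }

// Semantic events after capture/bubble filtering (step 2).
#[derive(Debug)]
enum SemanticEvent { Click { target: usize }, TextInput(char) }

struct App {
    focused: usize,
    clicks: Vec<usize>,
    text: String,
}

impl App {
    // step 2: filter OS events into semantic events; &self only,
    // so no state changes here.
    fn to_semantic(&self, ev: &OsEvent) -> Option<SemanticEvent> {
        match ev {
            OsEvent::MouseDown { .. } => Some(SemanticEvent::Click { target: self.focused }),
            OsEvent::Key(c) => Some(SemanticEvent::TextInput(*c)),
        }
    }
    // step 3: components actually mutate state.
    fn update(&mut self, ev: SemanticEvent) {
        match ev {
            SemanticEvent::Click { target } => self.clicks.push(target),
            SemanticEvent::TextInput(c) => self.text.push(c),
        }
    }
    // step 4: unconditionally repaint everything visible, no diffing.
    fn paint(&self) -> String { format!("text: {:?}", self.text) }
}

// One turn of the loop: events in, full repaint out.
fn frame(app: &mut App, os_events: Vec<OsEvent>) -> String {
    let semantic: Vec<_> = os_events.iter().filter_map(|e| app.to_semantic(e)).collect();
    for ev in semantic {
        app.update(ev);
    }
    app.paint() // then back to waiting on OS events
}
```

the strict read-only/mutate split between step 2 and step 3 is what keeps the filtering side-effect free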
the key API data structure there, I think, would be a kind of "template" that looks a lot like the DOM but doesn't contain any content, only handles into the state which could then be stored any way the user likes
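a rough sketch of what such a template could look like (everything here is hypothetical naming): a DOM-ish tree whose leaves hold opaque handles into user state, plus a trait the user implements over whatever store they like

```rust
// Handle into whatever store the user picked; just an index here.
#[derive(Copy, Clone, Debug)]
struct StateKey(usize);

// DOM-like shape, but no content: only handles into state.
enum Template {
    Column(Vec<Template>),
    Label(StateKey),                              // "paint the string at this key"
    Button { label: StateKey, on_click: StateKey },
}

// The framework only needs a way to resolve handles at paint time;
// how state is stored stays entirely up to the user.
trait StateStore {
    fn get(&self, key: StateKey) -> &str;
}

struct VecStore(Vec<String>);

impl StateStore for VecStore {
    fn get(&self, key: StateKey) -> &str { &self.0[key.0] }
}

// Paint walks the template, pulling content through the handles.
fn paint(t: &Template, s: &dyn StateStore) -> String {
    match t {
        Template::Column(children) =>
            children.iter().map(|c| paint(c, s)).collect::<Vec<_>>().join("\n"),
        Template::Label(k) => s.get(*k).to_string(),
        Template::Button { label, .. } => format!("[{}]", s.get(*label)),
    }
}
```

the template itself never owns content, so the same tree can be repainted unconditionally against whatever the current state happens to be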