Andrew
@walpolea
I could have a sort of kitchen sink project that is separate from mfm.rocks
Luke Wilson
@l2wilson94
That makes sense! How easy/hard do you think it will be to move around that code? Any potential hurdles in the way?
Andrew
@walpolea
I think the biggest things will be to migrate the elements directory out of the core directory, and then once I actually try it out do my best to mitigate it being a pain, which I'm hoping just means adding some helpers to core that make using it easier. Also possible something to refactor around how the renderer (outside of core) will interface with running events on the grid. Right now that's pretty tightly coupled.
Andrew
@walpolea
Decided I will start slow with the refactor, I'm just going to fully pull elements directory out of core and see how I like that. And also try to separate the Pixi.js-based MFMRenderer out from the main website.
Andrew
@walpolea
@DaveAckley I'm curious about how input would work on MFM... I feel like there has been some care in maintaining the notion that the Tile shouldn't be a major dependency of the grid's function, but with regard to input it seems like the tile has to be the primary actor here in being a bridge from outside world to the grid. I can't imagine what else there could be, but curious if you can.
my thought process is > well if not the Tile, maybe a site > but that's just semantics really, a site handling input is technically a 1x1 Tile > so maybe it's fine that we acknowledge the Tile more.
at that point I'm also wondering if you think Atoms should be requesting to care about input events, or if the tile should be broadcasting them to all sites all the time, or if there is some other set of rules that make sense around when and where input gets fed to the grid.
Dennis Lucero
@striderssoftware
Hello, for me I ended up with the tile as the entry from "outside" (input) to the element. Going from the element to "outside," I added a call on the EventWindow that is invoked from the Ulam element class in Behavior. Check out this big comment right at that point:
// TODO VDT - Should an Element ever get Audio Input directly i.e. - EventWindow::GetAudio(), I dont know.
//            maybe - the Element could get a Notice of an Audio Event occurance on the Containing Tile.
//            BUT, it should be up to the Element currently being processed if it cares about audio input.
//            maybe - Whatever the thoughts were for processing changes in LIGHT should apply to Audio
so I have basically the same questions :)
Andrew
@walpolea

been thinking about some options for handling input (likely leaving many out):

  • The Tile gets an input event and randomly (or specifically) seeds it into the grid

    • this could be as an element on the main layer, or in the base layer, or in a separate designated input/output layer
      • if it were in the IO layer, I could imagine it forkbombing across that layer until something picks it up (where it then super-friendly-forkbomb deletes all instances of itself)
  • The Tile could be a bit more godly and broadcast an event to all Sites within itself; it could even spread the word tile-to-tile as an intertile input mechanism.

  • Elements on the MFM could specifically subscribe/listen for events on the Tile they are in through something (but what is it?)

    • the tile could run some designated behavior/reaction function immediately (bad?)
    • Or just set a flag of some sort on the Atom/Site to be picked up in the next event window
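The flag-based options can be made concrete with a small TypeScript sketch (TypeScript since mfm.rocks is browser-based; `Tile`, `Site`, `Atom`, and `Listener` here are hypothetical stand-ins, not the actual mfm.rocks classes): the Tile broadcasts by setting a flag on each Site, and an atom that cares consumes it in its next event window.

```typescript
// Hypothetical sketch: the Tile bridges the outside world to the grid by
// setting a flag on every Site; an atom that cares consumes the flag when
// it next gets an event window. Named InputMsg to avoid the DOM InputEvent.
type InputMsg = { kind: string; x?: number; y?: number };

interface Atom {
  behave(site: Site): void; // called when this atom wins an event window
}

class Site {
  pendingInput?: InputMsg; // the flag the Tile broadcasts
  atom?: Atom;
}

class Tile {
  sites: Site[] = [];
  constructor(n: number) {
    for (let i = 0; i < n; i++) this.sites.push(new Site());
  }
  // an input event arrives from outside; broadcast it to every site
  broadcast(msg: InputMsg): void {
    for (const s of this.sites) s.pendingInput = msg;
  }
}

// an atom that "answers the phone": consumes the flag; others just ignore it
class Listener implements Atom {
  heard: InputMsg[] = [];
  behave(site: Site): void {
    if (site.pendingInput) {
      this.heard.push(site.pendingInput);
      site.pendingInput = undefined; // consume so it fires once per site
    }
  }
}
```

Broadcasting to every site is wasteful but dead simple, and the flag-then-consume shape keeps the Tile out of element behavior entirely.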
Dave Ackley
@DaveAckley
@walpolea Well, there isn't a lot of design as yet. What there is is on a per-site basis. For example, there's getTouch() in the SiteUtils.ulam standard library, which reports on recent hover/click/touch activity near the center of the event window.
Since the T2s have a touch screen, resolving input spatially to the site level is certainly possible.
Keyboard input is challenging since there isn't one..
One could imagine regions of the screen containing atoms that act as dedicated buttons. When they sense a touch in their area, they start creating InputEvent type US atoms which stream towards wherever that information is desired.
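A toy TypeScript version of that dedicated-button idea, assuming a simple (x, y) site grid; `ButtonAtom` and `stepToward` are invented names for illustration, not ulam or T2 APIs:

```typescript
type Pos = { x: number; y: number };

// move one cell along each axis toward the target (the "streaming" step an
// emitted input atom would take each time it gets an event window)
function stepToward(p: Pos, target: Pos): Pos {
  return {
    x: p.x + Math.sign(target.x - p.x),
    y: p.y + Math.sign(target.y - p.y),
  };
}

// an atom anchored in a screen region that acts as a dedicated button
class ButtonAtom {
  constructor(
    public region: { x0: number; y0: number; x1: number; y1: number },
    public target: Pos, // where emitted input atoms should stream to
  ) {}

  // on a touch inside the region, seed an input atom at the touch point
  senseTouch(touch: Pos): Pos | null {
    const { x0, y0, x1, y1 } = this.region;
    const hit =
      touch.x >= x0 && touch.x <= x1 && touch.y >= y0 && touch.y <= y1;
    return hit ? { ...touch } : null;
  }
}
```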
Dave Ackley
@DaveAckley
Doing any kind of 'real time' interaction would be challenging given the current who-knows-how-low milliAER T2 speeds, but in principle or in simulation one could explore such approaches.
Andrew
@walpolea
yeah, I guess I was thinking about non-direct spatial input like a button on the tile (off the screen), or a keyboard or other sensor, but touch makes it a lot easier, and I feel like touch hints at the idea that input events should maybe have some spatial binding. Though for ease I sort of lean toward the idea that non-spatial input on a tile could easily be broadcast to all sites for any atom to pick up, sort of like a ringing phone waiting to be answered by an atom, through the site... I guess the main thing I was looking for is if it's cool to acknowledge the tile itself, and I think the answer is a more firm yes than I had thought before.
Dave Ackley
@DaveAckley
Well, I'm not sure that's my answer :). Or at least not my only answer. There's one touch screen per tile, but touches easily stay per site.
Partly it's just about what level of abstraction/concreteness are we at, in any particular design decision.
If a game is going to be designed for one tile, for good and all, then in the end it doesn't make sense not to take advantage of the tile level, explicitly letting go of scalability at that point.
If whatever it is is supposed to be movable, growable, etc-able, then the forces are different.
Andrew
@walpolea
right, as a wormhole from world to world, it seems like breaking with scalability is feeling OK; do we need wormholes to scale? Even so, you could scale across the tile level and allow tiles to pass along incoming inputs from one tile to many others, like you're doing with the messaging and file sharing stuff.
Dave Ackley
@DaveAckley
I was imagining what if touches on the dungeon grid screen could cause a cloud of 'flow particles' to release from the point of the touch, heading towards the 'center' of the grid (whether that's one tile, N tiles, or some growable movable configurable thing.) Then beins that passed near the flow particles would be bent in the direction of the flow.
So one could steer beins in a clockwise direction by like touching the upper left or lower right of the dungeon, causing opposing flow in opposite directions top and bottom.
At first I was hesitant to talk about the 'plumbing' of the tiles, from CDM to P2P and so on, because I didn't want people thinking that any of that stuff was meant to be architectural at the main programming level.
Andrew
@walpolea
oh yeah, I can totally see that... the touch site could send out Directors in all directions that when they encounter a Directable atom, that atom is directed in the direction toward the touch point
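Both variants (flow particles bending beins toward the center, Directors bending Directables toward the touch point) come down to the same vector nudge. A minimal sketch, with `flowDirection` and `bend` as made-up names:

```typescript
type Vec = { x: number; y: number };

// each flow particle carries a unit direction from its release point (the
// touch) toward the grid "center"
function flowDirection(release: Vec, center: Vec): Vec {
  const dx = center.x - release.x;
  const dy = center.y - release.y;
  const len = Math.hypot(dx, dy) || 1; // avoid division by zero at the center
  return { x: dx / len, y: dy / len };
}

// a mover passing near a flow particle gets its heading blended toward the
// flow, then renormalized
function bend(heading: Vec, flow: Vec, strength = 0.5): Vec {
  const bx = heading.x + strength * flow.x;
  const by = heading.y + strength * flow.y;
  const len = Math.hypot(bx, by) || 1;
  return { x: bx / len, y: by / len };
}
```

Opposing touches at the top and bottom then produce opposing flows, which is what gives the clockwise steering.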
Dave Ackley
@DaveAckley
Kept trying to emphasize being for debugging purposes and such.
Right, like that. So still spatially coupled, and in a pretty natural way.
But since the plumbing is taking so long for me to do.. couldn't really not talk about it..
Andrew
@walpolea
so the input layer could still be something totally different then... I'm imagining a hardware component that even translates keypresses into physical touches on the screen, or maybe some yet-to-be-defined site-to-world interface that, sure, the Tile has to implement, but really it's just the messenger; something outside of the Tile should be defining (in some way) where to put the event into the grid.
Dave Ackley
@DaveAckley
I'm not sure where the keypresses actually take place in the context of a grid, but yeah, coupling whatever passes for input events into the sites somehow, and then programming from there, is how I see it.
Andrew
@walpolea
the world where the event occurs should decide where to enter the other world (as much as it knows about doing that)... just thinking in the sense of output, that seems like it would hold true going the other way, where the grid is looking to output some information (not via the screen), it should choose which site would be doing the outputting.
But now I'm coming back to the idea of primary output from the grid being a visual screen; it could be just the same in reverse... input could be visual: the ocular layer that maps a camera from our world to grid sites.
sort of like Close Encounters of the Third Kind... we could learn to speak grid to the grid
Dave Ackley
@DaveAckley
I can see that. Like imagining a grid interfacing with a bunch of motors and sensors that make up a robot or system that the grid is controlling -- the grid output to drive a motor has to be somewhere, and a sensor feeding signals to the grid has to be somewhere, and the computation works around those 'edge' conditions.
I think of that 'sensorimotor homunculus' picture/map of the human brain; imagine something similar along edges or portions of a grid.
Andrew
@walpolea
that makes sense... and I guess just as real-world events choose sites, sites choose something real-world to output to, establishing the connection between site/region X and Motor A.
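That site-to-actuator binding could be as small as a lookup table. This registry sketch just makes the "site X drives Motor A" idea concrete; `bindActuator`, `emitOutput`, and "MotorA" are all hypothetical:

```typescript
type SiteId = string; // "x,y" coordinate key, e.g. "3,7"

const actuators = new Map<SiteId, string>();

// the grid-design side declares which site drives which real-world actuator
function bindActuator(x: number, y: number, motor: string): void {
  actuators.set(`${x},${y}`, motor);
}

// only bound sites drive anything in the outside world
function emitOutput(x: number, y: number, value: number): string | null {
  const motor = actuators.get(`${x},${y}`);
  return motor ? `${motor} <- ${value}` : null;
}
```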
Andrew
@walpolea
that ocular layer still feels like a pretty clean concept though... the grid just needs to act, and on our end, at any scale available to us, we can interpret... in reverse, we can act (in a manner of speaking grid) and the grid can interpret
Dave Ackley
@DaveAckley
Yeah, I feel like the screens are like octopus skin or whatever, that can change appearance dramatically for whatever purpose the octopus wants.
Dennis Lucero
@striderssoftware
I was thinking that, at least at the element end, they would get a virtual function (like getColor), say handleEvent(Event e), and it would be up to an individual element to respond or not. And only when they have the EW.
Dave Ackley
@DaveAckley
Adding a method to UrSelf.ulam is like the highest bar to clear in the whole ULAM code base, but that would provide it to every element. Another approach would be to have some kind of more narrow base class, like ULAM/share/ulam/stdlib/EventAware.ulam or whatever, that ulam classes could choose to inherit from, and override methods therein.
Although with Event already associated with the EventWindow, perhaps some other name -- Signal? -- would be better for I/O related stuff.
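The narrow opt-in base class could look something like this TypeScript sketch; `EventAware`, `Signal`, `onSignal`, and `AudioLogger` are invented here, mirroring the naming being floated rather than any real stdlib class:

```typescript
type Signal = { kind: "touch" | "audio" | "key"; payload?: unknown };

abstract class EventAware {
  // default: ignore everything; subclasses override only what they care about
  onSignal(_sig: Signal): boolean {
    return false; // false = not handled
  }
}

// an element that opts in and only cares about audio
class AudioLogger extends EventAware {
  log: Signal[] = [];
  override onSignal(sig: Signal): boolean {
    if (sig.kind !== "audio") return false;
    this.log.push(sig);
    return true;
  }
}
```

Elements that never inherit from it pay nothing, which fits the concern about touching UrSelf directly.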
Andrew
@walpolea
I have Quarks/Multiple Inheritance working! Not ready to merge into the main branch, but you can preview Director N/S/E/W and Fly and Mosquito (which are both QDirectional elements) here:
https://tinyurl.com/y6qypg6o
Andrew
@walpolea
The directors push QDirectionals inward, essentially trapping them with their opposing forces.
Orb2.gif
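A 1-D caricature of that containment, with `contain` standing in for the Directors' push and `walk` for a diffusing QDirectional (names mirror the demo; the mechanics are drastically simplified):

```typescript
// a director at each end of the corridor pushes an escapee back inward
function contain(pos: number, lo: number, hi: number): number {
  if (pos <= lo) return pos + 1; // west/south director pushes inward
  if (pos >= hi) return pos - 1; // east/north director pushes inward
  return pos; // free in the interior
}

// random diffusion plus the directors' correction each step
function walk(
  pos: number,
  steps: number,
  rand: () => number,
  lo = 0,
  hi = 10,
): number {
  for (let i = 0; i < steps; i++) {
    pos += rand() < 0.5 ? -1 : 1;
    pos = contain(pos, lo, hi);
  }
  return pos;
}
```

However the particle diffuses, the opposing pushes keep it between lo and hi, which is the trap in the gif.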
Dave Ackley
@DaveAckley
Cool! Looks like magnetic containment for fusion!
Andrew
@walpolea
yeah!
feeling powerful with Quarks
Dave Ackley
@DaveAckley
Sweet.