    Vaughn Vernon
    @VaughnVernon
    @shafqatevo As @kbastani indicates
    we currently don't have a k8s operator in the works. We can bump the priority if you would like to engage us for this. Let me know what you would like to do.
    vaughn at kalele dot io
    shafqatevo
    @shafqatevo
    Hi @kbastani and thanks Vaughn, we're at least 6-9 months away from production-grade deployment. So nothing is urgent. My first step is to fully understand vlingo. Been following since its start but never really dived deep. Is there any webinar or talk providing a good overview and/or explaining vlingo in good detail? The vlingo big picture is not immediately clear to newcomers.
    shafqatevo
    @shafqatevo
    We're looking at xoom (after first briefly looking at Micronaut, which we had never used before).
    shafqatevo
    @shafqatevo
    I think if there's a dedicated page in the docs comparing the pros and cons of Akka vs vlingo/actors, that would be great. I've already read the differences and advantages alluded to on this page: https://docs.vlingo.io/vlingo-actors
    shafqatevo
    @shafqatevo
    For context, our team comes from a Spring / Firebase background for backends, but we've been reorienting ourselves towards DDD-CQRS-ES-Actors for more than a year now. We're also experimentally exploring a few design ideas in this space for a framework/platform on top of Akka (assuming clients will prefer Akka for its 10+ years of maturity, plus Akka Streams and Alpakka).
    (BTW, Gitter "Delete" message option doesn't have any confirmation and is easily clicked instead of Edit!)
    shafqatevo
    @shafqatevo
    Another issue I would like to ask about: in general, what's the impact of garbage collection pauses on vlingo components/processes that fall in the critical round-trip path of the end-user experience? Pronghorn, an actor framework by the creators of Micronaut, has a strategic focus on eliminating GC pauses as much as possible (https://oci-pronghorn.gitbook.io/pronghorn/chapter-0-what-is-pronghorn/home). "Pronghorn is almost completely garbage-free. This is partially accomplished by using static memory allocations. There’s no need to release memory and no garbage collector slowing down your application."
    Vaughn Vernon
    @VaughnVernon
    @shafqatevo
    (0) http://vlingo.io
    (1) http://docs.vlingo.io
    (2) vlingo/xoom is our Boot.
    (3) There are few GCs in a request round-trip because actors are either new or cached. We don't use static memory for that, but it's an option. Depending on how you maintain an actor's state you should probably expect some GC, because requests cause replacement of various attributes. I prefer immutable whole state replacement.
    (a) vlingo/http: near-zero GC, except for the ever-present request-response strings. ByteBuffers are pooled.
    (b) vlingo/lattice: near-zero GC, except for domain object mutations.
    (c) vlingo/symbio: near-zero GC, except for temporary (young-generation) serializations.
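    For what it's worth, here is a minimal sketch of the "immutable whole state replacement" style mentioned above, assuming a simplified actor shape; the class and method names are illustrative and not the vlingo/actors API:

```java
// Minimal sketch of immutable whole state replacement inside an actor.
// The class and message-handling shape are simplified illustrations,
// not the actual vlingo/actors API.
final class AccountActorSketch {
  // Whole state held as a single immutable value object.
  private AccountState state = AccountState.empty();

  // Reacting to a message replaces the whole state with a new instance
  // instead of mutating fields in place; the old instance becomes
  // short-lived (young-generation) garbage.
  void deposit(final long amount) {
    this.state = state.withBalance(state.balance() + amount);
  }

  record AccountState(long balance) {
    static AccountState empty() { return new AccountState(0L); }
    AccountState withBalance(final long newBalance) { return new AccountState(newBalance); }
  }
}
```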
    @shafqatevo I was unaware of Pronghorn, but it appears to be dead. No changes in 9 months, skeleton docs, and the github link on OCI's website is broken. https://github.com/objectcomputing/Pronghorn
    shafqatevo
    @shafqatevo
    That's very good information, @VaughnVernon! Thanks!
    They had put serious effort into Pronghorn. I think they may have shifted priorities and might resurrect it later. The 2020s will be the decade of actors.
    What's the per-actor-instance overhead in vlingo? In Akka it is about 300 bytes. In Erlang, per-process overhead is a little over 600 bytes.
    shafqatevo
    @shafqatevo
    I guess such overhead is reflected in the maximum number of actor instances possible in identical VMs, as benchmarked in the paper I had shared earlier.
    Vaughn Vernon
    @VaughnVernon
    By default a vlingo/actors actor is less than 300 bytes (~278 bytes). If the actor has a name, add roughly the length of the name in characters, times two, because the name is held both in the address and in the definition of the actor. However, some/many strings will not have 2-byte characters, but 1 byte each, so even a name of 32 characters would most likely add only 64 bytes. That puts the size of an actor with a very large name at about 342 bytes. If an actor has no name then the name is set to null, which occupies no extra space.

    I want to point out that the above does not account for Java byte/word alignment in memory. Since our actors do not use unmanaged memory we are subject to how Java decides to align variables in memory. This could add another several bytes, but probably not many. I'd estimate that this could push the default size above 300 bytes, possibly to 320 bytes or so. Adding a large name string could push it to 384 bytes, or just short of 400 bytes.

    We have not made any attempt to improve this. I have felt that my use of variables has been somewhat wasteful, but I don't want to try to tune the memory usage unless I know that it will make a big difference. There are several things we can do to improve that if we decide to.
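    A back-of-the-envelope calculation using only the rough figures quoted above (estimates from the discussion, not measured values):

```java
// Back-of-the-envelope per-actor size, using only the rough figures quoted
// above (estimates from the discussion, not measured values).
public class ActorSizeEstimate {
  public static void main(String[] args) {
    int base = 278;                  // default, unnamed actor
    int nameLength = 32;             // example: an unusually long actor name
    int nameCost = nameLength * 2;   // name held in both address and definition, ~1 byte per char
    int estimate = base + nameCost;  // ~342 bytes
    int withPadding = estimate + 40; // rough allowance for JVM field alignment/padding
    System.out.printf("named actor ~%d bytes, with padding ~%d bytes%n", estimate, withPadding);
  }
}
```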

    shafqatevo
    @shafqatevo
    Thanks, Vaughn! That ballpark should be adequate for most apps.
    Any thoughts on leveraging Project Loom Fibers?
    There are already experimental actor libs using fibers.
    Vaughn Vernon
    @VaughnVernon
    I have already committed to implementing FiberMailbox. This would be experimental until fibers are production worthy. I was told by a Loom committer that performance would not be good for some time, so I haven't bothered going to the trouble. Plus I don't want unnecessary dependencies.
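    A rough sketch of the FiberMailbox idea, assuming Java virtual threads (standard since Java 21); the mailbox shape below is invented for illustration and is not the vlingo/actors Mailbox interface:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Rough sketch of the FiberMailbox idea: deliver each actor message on a
// virtual thread (Project Loom, standard since Java 21). The mailbox shape
// below is invented for illustration and is not the vlingo/actors interface.
final class VirtualThreadMailboxSketch implements AutoCloseable {
  private final ExecutorService deliverer = Executors.newVirtualThreadPerTaskExecutor();

  // Messages are modeled as plain Runnables for brevity; a real mailbox would
  // enqueue them and preserve per-actor ordering before delivery.
  void send(final Runnable message) {
    deliverer.execute(message);
  }

  @Override
  public void close() {
    deliverer.close();   // waits for submitted deliveries to finish (Java 19+)
  }
}
```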
    shafqatevo
    @shafqatevo
    Great!!
    Vaughn Vernon
    @VaughnVernon

    @/all Today we released v1.1.0 GA of the vlingo/PLATFORM. This release includes vlingo/streams, our implementation of Reactive Streams. The docs are available now as are the binary artifacts. This is a growing component and will be enhanced with new features, specifically around types of Source and Sink implementations, as well as typical functional sugar.

    https://docs.vlingo.io/vlingo-streams

    Jakub Zalas
    @jakzal
    Congrats!
    shafqatevo
    @shafqatevo
    That's great!
    shafqatevo
    @shafqatevo

    @VaughnVernon couple more questions:

    1. How will RSocket support be incorporated with streams (and wire)? The use case I'm looking at is creating RSocket channels end to end between actors residing in remote client-side and server-side nodes. The client side may use another RSocket library on any supported platform: client-side actor > RSocket Reactive Streams > server-side actor, thereby allowing actor-to-actor messaging between different client and server actor libraries through standard Reactive Streams.

    2. Is thread starvation a possibility in vlingo/actors, as it is in Akka, if an actor blocks a thread due to some computational task or use of a blocking library?

    shafqatevo
    @shafqatevo
    3. Is there any built-in mechanism to deal with out-of-order messages between actors?
    4. Any mechanism like the stash concept of Akka?
    Vaughn Vernon
    @VaughnVernon
    @shafqatevo
    1. We are adding streaming features on an ongoing basis. Stay tuned.
    2. Yes, any long-running task or blocking of threads obviously takes threads away from fair usage. This is an application design flaw, not an actor model problem.
    3. The order of messages between two actors is guaranteed; ordering is undefined when two actors both send messages to a third. I have some recorded talks on modeling uncertainty.
    4. It's called stowage, and it is used by lattice entities; use it with caution.
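    A conceptual sketch of stowage, i.e. deferring messages until the actor is ready and then replaying them in arrival order; the names are hypothetical and this is not the vlingo/lattice API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual sketch of stowage: defer messages that arrive before the actor is
// ready, then replay them in arrival order once it is. Class and method names
// are hypothetical; this is not the vlingo/lattice API.
final class StowingEntitySketch {
  private final Deque<Runnable> stowed = new ArrayDeque<>();
  private boolean recovered = false;  // e.g. still rehydrating state from the journal

  void onMessage(final Runnable reaction) {
    if (!recovered) {
      stowed.addLast(reaction);       // stow until recovery completes
      return;
    }
    reaction.run();
  }

  void onRecovered() {
    recovered = true;
    while (!stowed.isEmpty()) {
      stowed.pollFirst().run();       // dispatch stowed messages in order
    }
  }
}
```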
    shafqatevo
    @shafqatevo
    Thanks @VaughnVernon. I asked #2 because Pronghorn claimed to address this problem (https://oci-pronghorn.gitbook.io/pronghorn/chapter-0-what-is-pronghorn/home#why-pronghorn) vs Akka, probably via stage SLAs / rates / marking stages as HEAVY_COMPUTE / ISOLATE, etc. (https://oci-pronghorn.gitbook.io/pronghorn/chapter-3-stages/notas). Also, fair scheduling is a key advantage of Erlang processes, so I assumed it is possible for an actor framework to have such schedulers.
    Which class in vlingo does the actual actor execution scheduling onto threads? Is it the Scheduler class?
    shafqatevo
    @shafqatevo
    (couldn't determine from docs or code)
    Vaughn Vernon
    @VaughnVernon
    @shafqatevo The current approach leaves the dispatching to the Mailbox implementation. The concept used by the queueMailbox is a Dispatcher based on a thread pool executor. You set the maximum number of threads or provide a factor (e.g. 0.5 or 1.5) to multiply by the total number of processors (hyper-threads) to determine the pool size. There are a few different dispatcher algorithms that could be used and we will add these based on user need (not nice-to-have suggestions that are never used). The arrayQueueMailbox is an MPSC ring buffer with a single dedicated Java thread, meant for very high throughput, generally 18-20 million messages per second. You choose the ring size. Note this uses pre-allocated message elements, and thus can be memory intensive, but it also greatly reduces GC overhead under high throughput.
    You will find configuration examples here: vlingo-actors/src/test/resources/vlingo-actors.properties
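    The pool-sizing rule described above can be illustrated with plain JDK code; the dispatcher and mailbox classes themselves are vlingo internals, so this only sketches the arithmetic:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Plain-JDK illustration of the pool-sizing rule described above:
// pool size = factor x available logical processors (hyper-threads).
public class DispatcherPoolSizing {
  public static void main(String[] args) {
    double factor = 1.5;  // e.g. 0.5 or 1.5, as in the configuration
    int logicalProcessors = Runtime.getRuntime().availableProcessors();
    int poolSize = Math.max(1, (int) (logicalProcessors * factor));
    ExecutorService dispatcherPool = Executors.newFixedThreadPool(poolSize);
    System.out.println("dispatcher pool size = " + poolSize);
    dispatcherPool.shutdown();
  }
}
```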
    Vaughn Vernon
    @VaughnVernon
    BTW, I don't know where the Pronghorn author gets off saying that Akka is not non-blocking, because it is, just as vlingo-actors is non-blocking. Yet there is no magic available with physical threads. If you are using a physical CPU thread, which you are whenever your code is executing, and you make a blocking I/O call to use disk or network, that physical thread is blocked. Java threads are backed by physical CPU threads. Just because you can create thousands of Java threads doesn't increase the number of physical CPU threads. Every Java thread is temporarily assigned a physical thread as available. If you block in any way, the physical thread is blocked and so is the Java thread currently associated with it. No other Java thread grabs the physical thread while the Java thread sits idle waiting for the I/O request to return. The chart you see in the link is misleading at best.
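    A plain-JDK illustration of that point: a blocking call holds its pool thread, so work queued behind it simply waits (nothing vlingo-specific here):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Plain-JDK illustration: a blocking call holds its pool thread, so work
// queued behind it simply waits. Nothing vlingo-specific here.
public class BlockingStallsPool {
  public static void main(String[] args) {
    ExecutorService pool = Executors.newFixedThreadPool(1); // tiny pool makes the stall obvious
    pool.execute(() -> {
      try {
        Thread.sleep(2_000);  // stand-in for blocking I/O (disk, network, JDBC, ...)
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    long queuedAt = System.nanoTime();
    pool.execute(() -> System.out.printf(
        "second task waited ~%d ms for the blocked thread%n",
        TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - queuedAt)));
    pool.shutdown();
  }
}
```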
    Also I personally asked for the status of Pronghorn. I was informed that the development is on hold and the single original developer who worked on it left OCI. They have no current plans to resume development.
    shafqatevo
    @shafqatevo
    Thanks @VaughnVernon. I had assumed from those configuration parameters that they're trying to emulate fairer scheduling in the face of compute-intensive tasks. That's at least better than no such scheme. I assume BEAM-type fair scheduling is not possible on the JVM? I had understood that an actor scheduler should be per OS thread / CPU core, rather than mailboxes acting as the actor schedulers.
    Vaughn Vernon
    @VaughnVernon
    The term Scheduler in both vlingo-actors and Akka is used for timers that schedule one-time or continuous future signals to an actor. The term used for delivering messages to actors on threads is dispatching. Erlang/BEAM doesn't assign O/S threads to processes in the way that Java does. It essentially implements a virtual O/S of its own, which enables all kinds of different ways to control fairness. Think of how Un*x and Windows operating systems work: they literally interrupt execution of a process on a thread and give that thread to another process. So a JVM is interrupted by the O/S mid-execution of some/many code paths. That's what the BEAM does to its own processes (and yes, the O/S still interrupts the BEAM).
    Vaughn Vernon
    @VaughnVernon
    So the BEAM uses preemptive multitasking via its scheduler by giving processes time slices, while Java uses cooperative multitasking which relies on code to give up a thread (complete a message reaction) quickly.
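    A sketch of what that cooperative style looks like in practice: process a small chunk of work per message and send yourself the rest, so the dispatcher thread is released quickly. The mailbox and dispatcher loop here are simplified stand-ins, not the vlingo/actors API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the cooperative style: do a small chunk of work per message and
// "send yourself" the remainder, so the dispatcher thread is released quickly.
// The mailbox and dispatcher loop are simplified stand-ins, not the vlingo API.
final class ChunkedWorkActorSketch {
  private final Deque<Runnable> mailbox = new ArrayDeque<>();
  private long sum = 0;

  void sumRange(final long from, final long to) {
    final long chunkEnd = Math.min(from + 10_000, to);  // small slice of work per message
    for (long i = from; i < chunkEnd; i++) sum += i;
    if (chunkEnd < to) {
      mailbox.addLast(() -> sumRange(chunkEnd, to));    // self-send the remaining range
    } else {
      System.out.println("sum = " + sum);
    }
  }

  void drain() {                                        // stand-in for the dispatcher loop
    while (!mailbox.isEmpty()) mailbox.pollFirst().run();
  }

  public static void main(String[] args) {
    ChunkedWorkActorSketch actor = new ChunkedWorkActorSketch();
    actor.sumRange(0, 1_000_000);
    actor.drain();
  }
}
```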
    Vaughn Vernon
    @VaughnVernon
    @/all We just gave a demo of vlingo/schemata in the January Reactive Foundation meeting. We had a great response and feedback. This component is so important to the future of #DDDesign #EventDriven #MessageDriven #Microservices #FaaS.

    @/all We have updated our support plans. Sorry if you missed our introductory pricing.

    • We offer per developer support plans
    • We offer production support plans
    • We can customize your teams and production support

    https://vlingo.io/support-pricing/

    shafqatevo
    @shafqatevo
    Thanks, Vaughn. Hope to pitch vlingo to our customers soon. Planning to do a small PoC project first.
    shafqatevo
    @shafqatevo
    I had asked this before: when can we expect a webinar/presentation/video tutorial covering all aspects of vlingo? That would help a lot, particularly for our customers. We already have a hard time educating them on the underlying approaches (DDD, ES, CQRS, etc.).
    Vaughn Vernon
    @VaughnVernon
    @shafqatevo The videos are in the works. We could hold a webinar toward the latter half of February. What is your timing for consumption?
    shafqatevo
    @shafqatevo
    That would work for us. Thanks!
    Vaughn Vernon
    @VaughnVernon
    @shafqatevo BTW, here are some updates in docs regarding mailboxes, their configuration, and dispatching: https://docs.vlingo.io/vlingo-actors#mailbox
    Here is a cleaned up FAQ answering your questions about Java thread dispatching vs BEAM process scheduling: https://docs.vlingo.io/faq#q-how-do-java-based-vlingo-actors-and-erlang-beam-processes-differ-in-how-fairness-of-message-processing-is-managed
    shafqatevo
    @shafqatevo
    Really appreciate this @VaughnVernon !
    Vaughn Vernon
    @VaughnVernon
    @/all We have released platform version 1.2.7. The vlingo/xoom component is unavailable due to issues with its packaging and the conflicts it causes for Bintray and Sonatype replication. We are trying to resolve this by 1.3.0, to be released soon.
    Vaughn Vernon
    @VaughnVernon

    @/all Watch our @vlingo_io webinar from Thursday on YouTube.

    "Safely Exchanging Information Across Microservices"

    Subscribe to my YouTube channel to get updates when new content arrives.

    https://www.youtube.com/watch?v=-VbzBaXR2K8&t=650s