    Vaughn Vernon
    @VaughnVernon
    I'm going to strongly suggest that you not use Kafka as an Event Sourcing aggregate store.
    Shafqat Ullah
    @shafqatevo
    Yes, we already realized that
    I have seen your talk where you discussed Geode
    Vaughn Vernon
    @VaughnVernon
    Not sure then about your Kafka comment on limited topics.
    Shafqat Ullah
    @shafqatevo
    I was just giving an example of how infrastructure components may pose scalability limits
    Vaughn Vernon
    @VaughnVernon
    Geode is also not good as an event store. @davemuirhead has gone to great pains to get transactional support across regions and it doesn't work well. Geode is good for scaled k-v and compute fabric.
    Shafqat Ullah
    @shafqatevo
    Thanks for sharing that
    What is your current recommendation for event store?
    Vaughn Vernon
    @VaughnVernon
    If the Kafka topics limitation is a real case, I suggest that you look at...
    Shafqat Ullah
    @shafqatevo
    I see Postgres in Vlingo docs
    Vaughn Vernon
    @VaughnVernon
    Yes. You can get really big Postgres tables on AWS.
    Shafqat Ullah
    @shafqatevo
    yes, managed Postgres is available in both AWS and GCP
    Keeping an eye on Pulsar and Kafka's recent efforts for increasing topic scalability
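As a rough sketch of the Postgres-backed event store option discussed above, the snippet below shows a minimal append-only journal table and append operation in plain JDBC. The table layout, class name, and method names are illustrative assumptions, not the actual vlingo/symbio schema or API.

```java
// Illustrative only: a minimal Postgres-backed append-only journal,
// not the actual vlingo/symbio schema or API.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class SimpleJournal {
  private final Connection connection;

  public SimpleJournal(String url, String user, String password) throws Exception {
    this.connection = DriverManager.getConnection(url, user, password);
    try (Statement statement = connection.createStatement()) {
      // One row per event; (stream_name, stream_version) gives ordered reads per aggregate.
      statement.execute(
          "CREATE TABLE IF NOT EXISTS journal (" +
          "  id BIGSERIAL PRIMARY KEY," +
          "  stream_name VARCHAR(255) NOT NULL," +
          "  stream_version INT NOT NULL," +
          "  event_type VARCHAR(255) NOT NULL," +
          "  event_data JSONB NOT NULL," +
          "  UNIQUE (stream_name, stream_version))");
    }
  }

  // Append a single event to an aggregate's stream; the unique constraint
  // rejects concurrent writers racing on the same stream version.
  public void append(String streamName, int streamVersion, String eventType, String eventJson)
      throws Exception {
    try (PreparedStatement insert = connection.prepareStatement(
        "INSERT INTO journal (stream_name, stream_version, event_type, event_data) " +
        "VALUES (?, ?, ?, ?::jsonb)")) {
      insert.setString(1, streamName);
      insert.setInt(2, streamVersion);
      insert.setString(3, eventType);
      insert.setString(4, eventJson);
      insert.executeUpdate();
    }
  }
}
```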
    Vaughn Vernon
    @VaughnVernon
    I meant to say check into Apache Pulsar over Kafka. As far as I can tell Pulsar is far superior to Kafka and most people will never know.
    They still need some work to reach the scalability levels of typical actor systems, with a couple of million actor instances per VM
    Vaughn Vernon
    @VaughnVernon
    Still no secondary index on Pulsar, although the nearly-here transaction support may leave room for that.
    Shafqat Ullah
    @shafqatevo
    I see
    As a pure event store, I guess the main requirement is efficient event sourcing during actor reactivations
    I mean besides message delivery
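A minimal sketch of the reactivation requirement just described: replay an aggregate's journaled events, in order, to rebuild its state. The event and state types below are hypothetical and use immutable whole-state replacement; they are not the vlingo/lattice or vlingo/symbio API.

```java
// Illustrative sketch of reconstitution on actor reactivation: replay the
// stored events for one stream, in order, to rebuild in-memory state.
// The types here are hypothetical, not the vlingo/lattice or vlingo/symbio API.
import java.util.List;

interface DomainEvent {}

record AccountOpened(String accountId) implements DomainEvent {}
record AmountDeposited(String accountId, long amount) implements DomainEvent {}

final class AccountState {
  final String accountId;
  final long balance;

  AccountState(String accountId, long balance) {
    this.accountId = accountId;
    this.balance = balance;
  }

  // Each event produces a new immutable state instance.
  AccountState apply(DomainEvent event) {
    if (event instanceof AccountOpened opened) {
      return new AccountState(opened.accountId(), 0L);
    } else if (event instanceof AmountDeposited deposited) {
      return new AccountState(accountId, balance + deposited.amount());
    }
    return this;
  }

  // Reactivation: left-fold the journal's events into the current state.
  static AccountState reconstitute(List<DomainEvent> journaledEvents) {
    AccountState state = new AccountState(null, 0L);
    for (DomainEvent event : journaledEvents) {
      state = state.apply(event);
    }
    return state;
  }
}
```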
    Vaughn Vernon
    @VaughnVernon
    If you are interested we will soon have a vlingo/symbio Journal on cloud native storage.
    Shafqat Ullah
    @shafqatevo
    sounds great! you mean, like backed by S3?
    Vaughn Vernon
    @VaughnVernon
    Like that's currently confidential IP.
    Shafqat Ullah
    @shafqatevo
    ok, noted
    Vaughn Vernon
    @VaughnVernon
    The how, that is.
    Shafqat Ullah
    @shafqatevo
    ok
    Vaughn Vernon
    @VaughnVernon
    The biggest gain is keeping hot actors cached so they don't require reconstitution from storage.
    That's in the vlingo/lattice grid
    It's late for me. TTYS.
    Shafqat Ullah
    @shafqatevo
    Thanks a lot for your time @VaughnVernon ! Really appreciate this...
    Kenny Bastani
    @kbastani
    @shafqatevo Hello. If you have any questions about Xoom or Kubernetes integration, please forward my way. I can help you with that. Right now we are considering a Helm Operator example but have not yet reached the critical point of prioritizing it. If this is something you feel is absolutely needed, please let me know.
    Vaughn Vernon
    @VaughnVernon
    @shafqatevo As @kbastani indicates, we currently don't have a k8s operator in the works. We can bump the priority if you would like to engage us for this. Let me know what you would like to do.
    vaughn at kalele dot io
    Shafqat Ullah
    @shafqatevo
    Hi @kbastani and thanks Vaughn, we're at least 6-9 months away from production-grade deployment. So nothing is urgent. My first step is to fully understand vlingo. Been following since its start but never really dived deep. Is there any webinar or talk providing a good overview and/or explaining vlingo in good detail? The vlingo big picture is not immediately clear to newcomers.
    Shafqat Ullah
    @shafqatevo
    We're looking at xoom (after first briefly looking at Micronaut, which we had never used before).
    Shafqat Ullah
    @shafqatevo
    I think if there were a dedicated page in the docs comparing the pros and cons of Akka vs vlingo/actors, that would be great. I've already read the differences and advantages alluded to on this page: https://docs.vlingo.io/vlingo-actors
    Shafqat Ullah
    @shafqatevo
    As background, our team comes from a Spring / Firebase background for backends, but we've been reorienting ourselves towards DDD-CQRS-ES-Actors for more than a year now. We're also experimentally exploring a few design ideas in this space for a framework/platform on top of Akka (assuming clients will prefer Akka for its 10+ years of maturity, and also for Akka Streams plus Alpakka).
    (BTW, Gitter "Delete" message option doesn't have any confirmation and is easily clicked instead of Edit!)
    Shafqat Ullah
    @shafqatevo
    Another question I'd like to ask is, in general, what's the impact of garbage collection pauses on vlingo components/processes that fall in the critical round-trip path of the end-user experience? Pronghorn, an actor framework by the creators of Micronaut, has a strategic focus on eliminating GC pauses as much as possible (https://oci-pronghorn.gitbook.io/pronghorn/chapter-0-what-is-pronghorn/home): "Pronghorn is almost completely garbage-free. This is partially accomplished by using static memory allocations. There’s no need to release memory and no garbage collector slowing down your application."
    Vaughn Vernon
    @VaughnVernon
    @shafqatevo (0) http://vlingo.io (1) http://docs.vlingo.io (2) vlingo/xoom is our Boot
    (3) There are few GCs in a request round-trip because actors are either new or cached. We don't use static memory for that, but it's an option. Depending on how you maintain an actor's state you should probably always have some GC, because requests will cause replacement of various attributes. I prefer immutable whole-state replacement.
    (a) vlingo/http: near-zero GC, except always request-response strings. ByteBuffers are pooled.
    (b) vlingo/lattice: near-zero GC, except always domain object mutations.
    (c) vlingo/symbio: near-zero GC, except temporary (young-generation) serializations.
    @shafqatevo I was unaware of Pronghorn, but it appears to be dead. No changes in 9 months, skeleton docs, and the github link on OCI's website is broken. https://github.com/objectcomputing/Pronghorn
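One way to picture the pooled-ByteBuffer point above is a small fixed-size pool that hands out pre-allocated direct buffers per request instead of allocating new ones. This is only a sketch of the general technique, not the vlingo/http implementation.

```java
// Illustrative only: a tiny fixed-size ByteBuffer pool of the kind described
// above for near-zero-GC request handling; not the vlingo/http implementation.
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

final class ByteBufferPool {
  private final BlockingQueue<ByteBuffer> available;

  ByteBufferPool(int poolSize, int bufferCapacity) {
    this.available = new ArrayBlockingQueue<>(poolSize);
    for (int i = 0; i < poolSize; i++) {
      // Direct buffers live outside the heap, so reusing them avoids
      // per-request allocation and the young-generation churn it causes.
      available.add(ByteBuffer.allocateDirect(bufferCapacity));
    }
  }

  // Borrow a buffer; blocks if all buffers are in use.
  ByteBuffer acquire() throws InterruptedException {
    return available.take();
  }

  // Return the buffer cleared and ready for the next request.
  void release(ByteBuffer buffer) {
    buffer.clear();
    available.offer(buffer);
  }
}
```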
    Shafqat Ullah
    @shafqatevo
    That's very good information, @VaughnVernon! Thanks!
    They had put serious effort into Pronghorn. I think they may have shifted priorities and could resurrect it later. The 2020s will be the decade of actors.
    What's the per-actor-instance overhead in vlingo? In Akka it is about 300 bytes. In Erlang, the per-process overhead is a little over 600 bytes.
    Shafqat Ullah
    @shafqatevo
    I guess such overhead is reflected in the maximum number of actor instances possible in identical VMs, as benchmarked by the paper I had shared.
    Vaughn Vernon
    @VaughnVernon
    By default a vlingo/actors actor is less than 300 bytes (~278 bytes). If the actor has a name, add the character size times the length of the name, x2. That's because the name is held both in the address and in the definition of the actor. However, some/many strings will not have 2-byte characters, but 1 byte each. Even so, a name of 32 characters would most likely add only 64 bytes, so the size of an actor with a very large name would be about 342 bytes. If an actor has no name then the name is set to null, which occupies no extra space.

    I want to point out that the above does not account for Java byte/word alignment in memory. Since our actors do not use unmanaged memory, we are subject to how Java decides to align variables in memory. This could add another several bytes, but probably not many. I'd estimate that this could push the default size above 300 bytes, possibly 320 bytes or so. Adding a large name string could push that to 384 bytes, or just short of 400 bytes.

    We have not made any attempt to improve this. I have felt that my use of variables has been somewhat wasteful, but I don't want to try to tune the memory usage unless I know that it will make a big difference. There are several things we can do to improve that if we decide to.
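The arithmetic above can be captured in a tiny back-of-the-envelope estimator. The constants below are the rough figures given in the discussion (a ~278-byte base, an alignment allowance of a few dozen bytes, and the name stored twice); they are estimates for illustration only, not measured values or a vlingo API.

```java
// Illustrative back-of-the-envelope calculation of the per-actor footprint
// described above; the constants are rough estimates, not measurements.
public class ActorFootprintEstimate {

  static final int BASE_BYTES = 278;        // default vlingo/actors actor, per the estimate above
  static final int ALIGNMENT_PADDING = 42;  // rough allowance for JVM field alignment (~320 total)

  // The name is held twice (address + definition); compact strings store
  // Latin-1 names at one byte per character, otherwise two bytes.
  static int estimate(String actorName, boolean twoByteChars) {
    int nameBytes = actorName == null
        ? 0
        : actorName.length() * (twoByteChars ? 2 : 1) * 2;
    return BASE_BYTES + ALIGNMENT_PADDING + nameBytes;
  }

  public static void main(String[] args) {
    System.out.println(estimate(null, false));                               // ~320 bytes, unnamed
    System.out.println(estimate("a-very-long-actor-name-32-chars!", false)); // ~384 bytes, 32-char name
  }
}
```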

    Shafqat Ullah
    @shafqatevo
    Thanks, Vaughn! That ballpark should be adequate for most apps.
    Any thoughts on leveraging Project Loom Fibers?