    Shafqat Ullah
    @shafqatevo
    For the immediate client project for which we prefer Vlingo: it is a patient-centric health care platform that will also make use of IoT devices in the future. It doesn't really need very high scalability, but clients always ask questions about scalability and expect convincing answers. For Akka there are a few such benchmarks that we can show to clients. Then there is the question of the scalability characteristics of the entire platform, not just the actor toolkit.
    Another factor for us is a Kubernetes Operator; hopefully this will be initiated in Vlingo soon. The more comprehensive the operator is, the better.
    Shafqat Ullah
    @shafqatevo
    Yes, we're using DDD and aggregates will be actors. IoT devices will have digital twins. CQRS projections will feed an ML system. In fact, there is an AI component that currently uses an agent library, which could potentially run on actors (agent = actor), too.
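A minimal, framework-free Java sketch of the aggregate-as-actor idea described in the message above: one DDD aggregate whose commands are serialized through a single-threaded "mailbox" and whose state changes only via recorded events. PatientAccount, its events, and the executor-as-mailbox are illustrative assumptions, not the vlingo API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch: a DDD aggregate whose commands are handled one at a
// time on its own "mailbox" thread, the way an actor would serialize them.
// PatientAccount and its events are hypothetical names, not vlingo classes.
public final class PatientAccount {
  // Events recorded by the aggregate (a journal would persist these in CQRS/ES).
  public record PatientRegistered(String patientId, String name) {}
  public record DeviceAttached(String patientId, String deviceId) {}

  private final String patientId;
  private final List<Object> pendingEvents = new ArrayList<>();
  private final List<String> attachedDevices = new ArrayList<>();

  // One thread per aggregate instance stands in for an actor mailbox:
  // commands are queued and handled strictly sequentially.
  private final ExecutorService mailbox = Executors.newSingleThreadExecutor();

  public PatientAccount(String patientId, String name) {
    this.patientId = patientId;
    record(new PatientRegistered(patientId, name));
  }

  // Command: executed asynchronously on the aggregate's own mailbox thread.
  public void attachDevice(String deviceId) {
    mailbox.execute(() -> record(new DeviceAttached(patientId, deviceId)));
  }

  private void record(Object event) {
    pendingEvents.add(event);            // would be appended to the journal
    if (event instanceof DeviceAttached e) {
      attachedDevices.add(e.deviceId()); // mutate state only via events
    }
  }
}
```

In a real deployment the toolkit would supply the mailbox, supervision, and journaling; the sketch only shows the shape of the pattern.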
    Vaughn Vernon
    @VaughnVernon
    @shafqatevo We had a sustained benchmark running for many hours. IIRC it showed more than 280K requests per minute (not extreme compared to 12K/second) against vlingo/http backed by Event Sourced appends
    Probably @kmruiz will have better recollection of this since he created the benchmark.
    Vaughn Vernon
    @VaughnVernon
    There is a JMH benchmark of our fastest mailbox. It consistently shows 18-20 million messages per second through an actor. See our vlingo-examples repo.
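For readers unfamiliar with JMH, here is a hedged sketch of what a mailbox throughput benchmark of this kind typically looks like. This is not the benchmark from vlingo-examples; a plain ConcurrentLinkedQueue stands in for the mailbox and the class name is made up.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Hypothetical JMH benchmark in the spirit of the mailbox benchmark mentioned
// above: measures raw enqueue/dequeue throughput of a queue-backed mailbox.
@State(Scope.Thread)
public class MailboxThroughputBenchmark {
  private final ConcurrentLinkedQueue<Integer> mailbox = new ConcurrentLinkedQueue<>();
  private int counter;

  @Benchmark
  public Integer enqueueDequeue() {
    mailbox.offer(counter++);  // deliver one message to the mailbox
    return mailbox.poll();     // the "actor" drains and handles it
  }
}
```

JMH reports the operations-per-second it measures for the annotated method, which is how a messages-per-second figure like the one above is obtained.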
    Vaughn Vernon
    @VaughnVernon
    I have asked @kbastani to reply to your inquiry regarding k8s support, which is already in vlingo/xoom. You would use vlingo/xoom as the vlingo microframework and for microservice boot. Our boot times are often around 30ms and sometimes far less. We have no runtime annotation wiring; our support for, and minimal use of, annotations is "just right" and resolved at compile time.
    Vaughn Vernon
    @VaughnVernon
    I think that it's safe to say that the entire vlingo/PLATFORM is comfortably scalable for most services/apps. If your medical domain needs to support a few million entities spread across thousands of users, no problem. If that needs to double and then triple, no problem. Given your current scale requirements it seems unlikely that this will ever be an issue, but if so it's one that we look forward to solving for you.
    Shafqat Ullah
    @shafqatevo
    Thanks for sharing all these insights @VaughnVernon! The numbers are encouraging and, I hope, will be sufficient for now.
    Minimizing annotations should help with running Vlingo on GraalVM (assuming other uses of reflection are minimized too).
    Startup time on a Graal native image should be even lower.
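One way to keep any remaining reflective corner GraalVM-friendly, sketched under the assumption that the GraalVM SDK Feature API is used to register the few classes that must still be instantiated reflectively. The class names here are hypothetical and this is not how vlingo itself handles it.

```java
import org.graalvm.nativeimage.hosted.Feature;
import org.graalvm.nativeimage.hosted.RuntimeReflection;

// Hypothetical GraalVM native-image Feature: any class that must still be
// reflectively instantiated (here a made-up actor class) is registered
// explicitly at build time instead of relying on reflect-config.json guesswork.
public class ReflectionRegistrationFeature implements Feature {
  @Override
  public void beforeAnalysis(BeforeAnalysisAccess access) {
    Class<?> actorClass = access.findClassByName("com.example.PatientAccountActor");
    if (actorClass != null) {
      RuntimeReflection.register(actorClass);
      RuntimeReflection.register(actorClass.getDeclaredConstructors());
    }
  }
}
```

The feature would be enabled at build time with native-image --features=ReflectionRegistrationFeature.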
    Vaughn Vernon
    @VaughnVernon
    Regarding the "RealWorld" apps, it sounds like a great idea to try when we can dedicate a few developers. We are currently fully tasked with vlingo development on 1.1.0 - 1.3.0 and with implementing real, real-world apps with vlingo.
    Shafqat Ullah
    @shafqatevo
    Yes, we're looking into xoom. It was a pleasant surprise to see Kenny join Vlingo. I learned a lot from his articles and talks.
    Fully understood, @VaughnVernon. We look forward to contributing to Vlingo where we can, once we understand it fully.
    One example of how essential infrastructure components can constrain scalability is Kafka's limit on topic count, which rules out dedicated topics for millions of DDD aggregate instances.
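A sketch of the usual workaround for that topic limit: instead of one topic per aggregate, publish all aggregate events to a single shared topic and key each record by aggregate id, so per-aggregate ordering is preserved while the topic count stays constant. The topic name, broker address, and payload below are assumptions.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Publish aggregate events to one shared topic, keyed by aggregate id, so
// all events for one aggregate land in the same partition in order.
public class AggregateEventPublisher {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", StringSerializer.class.getName());
    props.put("value.serializer", StringSerializer.class.getName());

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      String aggregateId = "patient-42";  // hypothetical aggregate identity
      String event = "{\"type\":\"DeviceAttached\",\"deviceId\":\"iot-7\"}";
      // Keying by aggregate id keeps ordering per aggregate while the
      // number of topics stays constant no matter how many aggregates exist.
      producer.send(new ProducerRecord<>("patient-events", aggregateId, event));
    }
  }
}
```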
    Vaughn Vernon
    @VaughnVernon
    We already support GraalVM. This is an effort that @kmruiz is driving and that I supported by fully eliminating the need for reflection (it is optional, depending on actor instantiation style). Kevin can explain the details.
    Shafqat Ullah
    @shafqatevo
    That's great to learn!
    None of the currently available message queues are actually designed for the scale that would otherwise be needed to back actor mailboxes or publish channels.
    Vaughn Vernon
    @VaughnVernon
    I'm going to strongly suggest that you not use Kafka as an Event Sourcing aggregate store.
    Shafqat Ullah
    @shafqatevo
    Yes, we already realized that
    I have seen your talk where you discussed Geode
    Vaughn Vernon
    @VaughnVernon
    Not sure then about your Kafka comment on limited topics.
    Shafqat Ullah
    @shafqatevo
    I was just giving an example of how infrastructure components may pose scalability limits
    Vaughn Vernon
    @VaughnVernon
    Geode is also not good as an event store. @davemuirhead has gone to great pains to get transactional support across regions and it doesn't work well. Geode is good for scaled k-v and compute fabric.
    Shafqat Ullah
    @shafqatevo
    Thanks for sharing that
    What is your current recommendation for event store?
    Vaughn Vernon
    @VaughnVernon
    If the Kafka topics limitation is a real case I suggest that you look at
    Shafqat Ullah
    @shafqatevo
    I see Postgres in Vlingo docs
    Vaughn Vernon
    @VaughnVernon
    Yes. You can get really big Postgres tables on AWS.
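A minimal sketch of what a Postgres-backed journal table could look like, created over JDBC; the table name, columns, and connection details are assumptions, not the vlingo/symbio schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Creates an append-only journal table of the kind an event store might use.
public class CreateJournalTable {
  public static void main(String[] args) throws Exception {
    String url = "jdbc:postgresql://localhost:5432/eventstore"; // hypothetical
    try (Connection conn = DriverManager.getConnection(url, "app", "secret");
         Statement stmt = conn.createStatement()) {
      stmt.execute(
          "CREATE TABLE IF NOT EXISTS journal (" +
          "  id BIGSERIAL PRIMARY KEY," +            // global append order
          "  stream_name TEXT NOT NULL," +           // aggregate identity
          "  stream_version INT NOT NULL," +         // per-aggregate sequence
          "  event_type TEXT NOT NULL," +
          "  event_data JSONB NOT NULL," +
          "  occurred_on TIMESTAMPTZ NOT NULL DEFAULT now()," +
          "  UNIQUE (stream_name, stream_version)" + // optimistic concurrency
          ")");
    }
  }
}
```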
    Shafqat Ullah
    @shafqatevo
    Yes, managed Postgres is available on both AWS and GCP.
    We're keeping an eye on Pulsar, and on Kafka's recent efforts to increase topic scalability.
    Vaughn Vernon
    @VaughnVernon
    I meant to say check into Apache Pulsar over Kafka. As far as I can tell Pulsar is far superior to Kafka and most people will never know.
    They still need some work to reach the scalability levels of typical actor systems, which run a couple of million actor instances per VM.
    Vaughn Vernon
    @VaughnVernon
    Still no secondary index in Pulsar, although the nearly-here transaction support may leave room for that.
    Shafqat Ullah
    @shafqatevo
    I see
    As a pure event store, I guess the main requirement is efficient sourcing of events during actor reactivation.
    I mean, besides message delivery.
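A small sketch of what sourcing events during reactivation amounts to: read the aggregate's stored events in order and re-apply them to rebuild its state before the actor handles new messages. EventJournal, PatientState, and DeviceAttached are hypothetical names.

```java
import java.util.List;

// Rebuilds an aggregate's state from its journaled events.
public class Reactivation {

  interface EventJournal {
    // Read all stored events for one aggregate stream, oldest first.
    List<Object> eventsFor(String streamName);
  }

  record DeviceAttached(String deviceId) {}

  static final class PatientState {
    int attachedDevices;

    void apply(Object event) {
      if (event instanceof DeviceAttached) {
        attachedDevices++;  // same mutation the live aggregate performed
      }
    }
  }

  // Replay the journal to reconstitute state; the faster this read path is,
  // the cheaper it is to reactivate a cold actor.
  static PatientState reactivate(EventJournal journal, String streamName) {
    PatientState state = new PatientState();
    for (Object event : journal.eventsFor(streamName)) {
      state.apply(event);
    }
    return state;
  }
}
```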
    Vaughn Vernon
    @VaughnVernon
    If you are interested, we will soon have a vlingo/symbio Journal on cloud-native storage.
    Shafqat Ullah
    @shafqatevo
    sounds great! you mean, like backed by S3?
    Vaughn Vernon
    @VaughnVernon
    Like that's currently confidential IP.
    Shafqat Ullah
    @shafqatevo
    ok, noted
    Vaughn Vernon
    @VaughnVernon
    The how, that is.
    Shafqat Ullah
    @shafqatevo
    ok
    Vaughn Vernon
    @VaughnVernon
    The biggest gain is keeping hot actors cached so they don't require reconstitution from storage.
    That's in the vlingo/lattice grid
    It's late for me. TTYS.
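A sketch of the "keep hot actors cached" idea mentioned just above, assuming a simple LRU map keyed by aggregate id: recently used aggregates skip reconstitution, while evicted ones are rebuilt from the journal on the next access. This is not the vlingo/lattice grid, only the general idea.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// An LRU cache of live aggregates: hot ids are served from memory, cold ids
// pay the journal-replay cost once and then become hot again.
public class HotActorCache<A> {
  private final int capacity;
  private final Function<String, A> reconstitute;  // replays the journal
  private final Map<String, A> cache;

  public HotActorCache(int capacity, Function<String, A> reconstitute) {
    this.capacity = capacity;
    this.reconstitute = reconstitute;
    // Access-ordered LinkedHashMap evicts the least recently used entry.
    this.cache = new LinkedHashMap<>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<String, A> eldest) {
        return size() > HotActorCache.this.capacity;
      }
    };
  }

  public synchronized A actorFor(String id) {
    return cache.computeIfAbsent(id, reconstitute);  // hot hit or cold rebuild
  }
}
```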
    Shafqat Ullah
    @shafqatevo
    Thanks a lot for your time @VaughnVernon ! Really appreciate this...
    Kenny Bastani
    @kbastani
    @shafqatevo Hello. If you have any questions about Xoom or Kubernetes integration, please forward them my way; I can help you with that. Right now we are considering a Helm Operator example but have not yet reached the critical point of prioritizing it. If this is something you feel is absolutely needed, please let me know.
    Vaughn Vernon
    @VaughnVernon
    @shafqatevo As @kbastani indicates