    ErenForce
    @ErenForce
    Okay, well, this is getting confusing. I'm going through https://microservices.io/ and they say database per service has a lot of positives for projects that need scaling (which I'm working on), but it requires the saga pattern for transactions that span multiple services (which I require), and event sourcing can be used for atomically updating state and publishing messages.
    The steps are database per service => sagas => event sourcing, but by definition an event store is a single database, so event sourcing violates the first pattern
    ErenForce
    @ErenForce
    I know this isn't strictly related to your bootstrap but it is related to event sourcing which your project has the most documentation on :p
    Oskar Dudycz
    @oskardudycz
    Ok, it may sound a bit blunt, but I think that Event Sourcing is taught wrongly in general ;)
    also messaging and microservices :P
    My perspective is that the most important thing is a careful design considering the logical and technical split.
    So e.g. you can have a good logical split into modules and still have a good modular monolithic app
    or you can have a mixture (e.g. 2 modules deployed on the same machine, 1 on another, etc.)
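    A minimal sketch of that idea in C#, assuming ASP.NET Core-style hosting; the module names, facade types, and `AddCartsModule`/`AddShipmentsModule` helpers below are made up for illustration, not taken from the linked repos:

```csharp
// Sketch of a modular monolith: one host process, each module registered
// through its own entry point so it could later move to a separate process.
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class CartsFacade { /* Carts module business logic hides behind this boundary */ }
public class ShipmentsFacade { /* Shipments module business logic */ }

public static class CartsModule
{
    public static IServiceCollection AddCartsModule(this IServiceCollection services) =>
        services.AddSingleton<CartsFacade>();
}

public static class ShipmentsModule
{
    public static IServiceCollection AddShipmentsModule(this IServiceCollection services) =>
        services.AddSingleton<ShipmentsFacade>();
}

public static class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices(services => services
                .AddCartsModule()        // same process today...
                .AddShipmentsModule())   // ...could become a separate deployment later
            .Build()
            .Run();
}
```

    If a module later needs its own deployment, its registration moves to a separate host project while the module code itself stays the same.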
    Oskar Dudycz
    @oskardudycz
    I don't mind distributing the solution if that's what turns out to be needed after doing the design (I wrote about that in https://event-driven.io/en/how_to_cut_microservices/)
    I'm not a huge fan of microservices.io - it has some good content, but it lacks context and more detailed considerations
    Sorry for this intro :P
    So I'd personally try to avoid distributed processes unless they're needed
    So don't do sagas just because we distributed the databases, and we distributed them because we wanted to do microservices, etc.
    In general Event Sourcing is also not a system-wide pattern
    it's more of a module-pattern
    So the understanding of events should stay inside the module. Then you can have events as granular as you need.
    But if you try to use those events as-is also for cross-module integration, then you'll have a leaking abstraction and the first step towards a distributed monolith. Which has all the cons of a monolith and all the cons of microservices - without the pros ;)
    I've been there, don't recommend :P
    Oskar Dudycz
    @oskardudycz
    This example shows how potentially you could integrate different microservices https://github.com/oskardudycz/EventSourcing.NetCore/tree/main/Workshops/PracticalEventSourcing
    It uses Kafka, and one module - Shipment - is classical CRUD
    But still, you could do exactly the same without Kafka or any other bus, using just an in-memory one and having a monolithic solution
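    A rough sketch of that swap, assuming a hand-rolled in-memory bus (the `IEventBus` interface below is illustrative and simpler than the one in the repo): the modules only depend on the abstraction, so the transport is a deployment detail:

```csharp
// Sketch: modules depend on an event bus abstraction; the transport is swappable.
using System;
using System.Collections.Generic;

public interface IEventBus
{
    void Publish(object @event);
    void Subscribe<TEvent>(Action<TEvent> handler);
}

// Good enough for a modular monolith: everything stays in one process.
public class InMemoryEventBus : IEventBus
{
    private readonly Dictionary<Type, List<Action<object>>> handlers = new();

    public void Publish(object @event)
    {
        if (handlers.TryGetValue(@event.GetType(), out var subscribers))
            foreach (var handle in subscribers)
                handle(@event);
    }

    public void Subscribe<TEvent>(Action<TEvent> handler)
    {
        if (!handlers.TryGetValue(typeof(TEvent), out var subscribers))
            handlers[typeof(TEvent)] = subscribers = new List<Action<object>>();
        subscribers.Add(e => handler((TEvent)e));
    }
}
```

    A Kafka-backed implementation of the same interface could replace `InMemoryEventBus` once the modules actually become separate services.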
    Coming back to internal vs external events (or domain vs integration events, as the DDD community calls them...) - https://github.com/oskardudycz/EventSourcing.NetCore/blob/main/Workshops/PracticalEventSourcing/Carts/Carts/Carts/Cart.cs
    modules like payment, shipment etc. don't care about all the Cart events
    It's just the Carts module's inner business logic that e.g. a product item was added to or removed from the Cart
    Other modules only care about a confirmed cart
    So that's why you can publish the event internally and then map it to an external one that's "fatter"
    It will be published externally https://github.com/oskardudycz/EventSourcing.NetCore/blob/main/Core/Events/EventBus.cs#L29 if it has the marker interface (you could use other criteria if you prefer).
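    Roughly, that split could look like the sketch below; the event and marker names are made up for illustration and may not match the repo's actual types:

```csharp
// Sketch: internal (domain) events vs. an external (integration) event.
using System;
using System.Collections.Generic;

// Marker interface: the event bus forwards only events implementing it.
public interface IExternalEvent { }

// Granular domain events - the private vocabulary of the Carts module.
public record ProductItemAddedToCart(Guid CartId, Guid ProductId, int Quantity);
public record ProductItemRemovedFromCart(Guid CartId, Guid ProductId, int Quantity);
public record CartConfirmed(Guid CartId, DateTime ConfirmedAt);

// "Fatter" integration event - the only thing other modules ever see.
public record CartFinalized(
    Guid CartId,
    Guid ClientId,
    IReadOnlyList<(Guid ProductId, int Quantity)> ProductItems,
    decimal TotalPrice,
    DateTime FinalizedAt
) : IExternalEvent;

// Snapshot of cart data used to enrich the external event (illustrative).
public record CartDetails(
    Guid ClientId,
    IReadOnlyList<(Guid ProductId, int Quantity)> ProductItems,
    decimal TotalPrice);

public static class CartEventMapper
{
    // Mapping happens at the module boundary, so the internal model can evolve freely.
    public static CartFinalized ToExternal(CartConfirmed confirmed, CartDetails details) =>
        new(confirmed.CartId, details.ClientId, details.ProductItems,
            details.TotalPrice, confirmed.ConfirmedAt);
}
```

    The point is that `ProductItemAddedToCart` and friends never leave the Carts module; only the enriched `CartFinalized` event crosses the boundary.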
    Oskar Dudycz
    @oskardudycz
    I think that quite a lot can be achieved by load balancing etc.
    So I'd try to keep the monolith as much as possible and not go distributed unless it's needed
    But design the modules and aggregates to have proper boundaries, keeping in mind that at some point you may want to split them into microservices
    I think it's safe to start at first with the database schema per module approach.
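    A small sketch of the schema-per-module approach, assuming EF Core; the context and schema names are illustrative:

```csharp
// Sketch: one physical database, one schema per module.
using Microsoft.EntityFrameworkCore;

public class CartsDbContext : DbContext
{
    public CartsDbContext(DbContextOptions<CartsDbContext> options) : base(options) { }

    protected override void OnModelCreating(ModelBuilder modelBuilder) =>
        modelBuilder.HasDefaultSchema("carts");       // Carts module owns the "carts" schema
}

public class ShipmentsDbContext : DbContext
{
    public ShipmentsDbContext(DbContextOptions<ShipmentsDbContext> options) : base(options) { }

    protected override void OnModelCreating(ModelBuilder modelBuilder) =>
        modelBuilder.HasDefaultSchema("shipments");   // Shipments module owns "shipments"
}
```

    Each module reads and writes only its own schema, so extracting a module's data into a separate database later is mostly a migration exercise.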
    Regarding the async commands - I mean commands published with some delay by a background process, e.g. using the Outbox Pattern (https://event-driven.io/en/outbox_inbox_patterns_and_delivery_guarantees_explained/)
    or through e.g. some external message bus such as Kafka, RabbitMQ etc.
    In general, it's a command sent with pub/sub semantics - so you're publishing a command and expecting to get an event at some point with success or failure.
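    A very simplified Outbox sketch, assuming EF Core and System.Text.Json; the table and type names are made up:

```csharp
// Sketch: Outbox pattern - store the message with the state change, publish later.
using System;
using System.Text.Json;
using Microsoft.EntityFrameworkCore;

public class OutboxMessage
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Type { get; set; } = default!;
    public string Payload { get; set; } = default!;
    public DateTime? ProcessedAt { get; set; }
}

public static class Outbox
{
    // Call this inside the same DbContext transaction as the business change,
    // so "update state" and "schedule message" commit (or roll back) together.
    public static void Enqueue(DbContext db, object message) =>
        db.Set<OutboxMessage>().Add(new OutboxMessage
        {
            Type = message.GetType().FullName!,
            Payload = JsonSerializer.Serialize(message)
        });
}

// A background worker (e.g. an IHostedService) then polls unprocessed rows,
// publishes them to the bus (in-memory, Kafka, RabbitMQ, ...) and marks them processed.
// The sender reacts to a success/failure event later instead of waiting synchronously.
```

    This gives at-least-once delivery without a distributed transaction, which is usually what the "atomically updating state and publishing messages" requirement boils down to.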
    I'm not sure about your particular business case, so it's hard to suggest which would be the best for you. But imho starting with proper boundaries, load analysis etc. is always a good starting point ;)
    I apologize for the lecture :P
    Feel free to ask more questions if that wasn't a clear answer ;)
    ErenForce
    @ErenForce
    No, no need to apologize - this has been wonderful and I appreciate the time you've spent on me. I feel like I've become a struggling student who sat down with a professor :p
    I wasn't even aware that a microservice lives in its own process and is spun up with Docker or even systemd
    Oskar Dudycz
    @oskardudycz
    I got a few scars from that myself :D
    I think a microservice doesn't have to be on Docker
    it could be a set of VMs
    But the common practice is to use Docker or Kubernetes
    as it allows easier orchestration
    ErenForce
    @ErenForce
    I feel more comfortable with systemd, however - is that usable?
    Oskar Dudycz
    @oskardudycz
    :+1:
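    For the record, a hypothetical systemd unit for a published .NET service could look like this (the paths, names, and user below are examples only):

```ini
# Hypothetical unit file, e.g. /etc/systemd/system/carts.service
[Unit]
Description=Carts module
After=network.target

[Service]
WorkingDirectory=/srv/carts
ExecStart=/usr/bin/dotnet /srv/carts/Carts.Api.dll
Restart=on-failure
User=www-data

[Install]
WantedBy=multi-user.target
```

    Enable it with systemctl enable --now carts.service; orchestrating many such services across machines is where Docker/Kubernetes start to pay off.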
    I'm not sure if you're planning to use AWS but Aurora DB is an interesting concept
    It allows different engines - e.g. Postgres, MySQL
    and provides scalability around that
    ErenForce
    @ErenForce
    Cool, however I feel like starting with a microservice project is like hitting the ground running, and also traveling 30 miles an hour while doing so. My question is: is your GoldenEye bootstrap monolithic? Because I might actually use that so I don't shoot my foot aiming for the ground