Ben Smith
@slashdotdash
The default snapshot uuid would be what is currently used
Luca Zulian
@lucazulian
Yes, I'm doing something like that!
Ben Smith
@slashdotdash
Nice one
Thanks
Luca Zulian
@lucazulian
Hi, I was implementing a specific use case using a ProcessManager and I've run into a problem that I don't know how to solve. I have this situation:
  # Events that start a process manager instance:
  def interested?(%CalendarEventCreated{id: id}),
    do: {:start, id}

  def interested?(%SharedDocumentFolderCreated{id: id}),
    do: {:start, id}

  # Events that stop the instance:
  def interested?(%CalendarEventDeleted{id: id}),
    do: {:stop, id}

  def interested?(%CalendarEventCreationFailed{id: id}),
    do: {:stop, id}

  def interested?(%CalendarEventDeletionFailed{id: id}),
    do: {:stop, id}

  def interested?(%SharedDocumentFolderCreationFailed{id: id}),
    do: {:stop, id}

  def interested?(%AppointmentAttachmentsAdded{id: id}),
    do: {:stop, id}

  # Ignore everything else:
  def interested?(_event), do: false
The failed events are emitted after some retries, so I can hit race conditions where the process manager is started but never stopped, or is restarted after it has been stopped. Is there a way to avoid these cases? Maybe a process span can solve some of these race conditions?
Ben Smith
@slashdotdash

You can use the idle_timeout configuration option to have a process manager instance stopped after a period of inactivity.

https://hexdocs.pm/commanded/process-managers.html#configuration-options

That will help to reduce memory usage, similar to using :stop to stop the process. However it won’t remove the process manager state snapshot. You could either not worry about that or have some other scheduled process which deletes old snapshots from the snapshots table.
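For illustration, a minimal sketch of what that can look like (the module, application, and timeout are hypothetical):

defmodule MyApp.CalendarProcessManager do
  use Commanded.ProcessManagers.ProcessManager,
    application: MyApp,
    name: "CalendarProcessManager",
    # Stop the instance after 30 minutes of inactivity; its snapshot
    # stays in the snapshots table so it can be resumed later.
    idle_timeout: :timer.minutes(30)

  defstruct [:id]

  # interested?/1, handle/2 and apply/2 callbacks as usual...
end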
Luca Zulian
@lucazulian
So with idle_timeout you can stop a process manager without removing its state from the snapshot table, and resume it later somehow?
Ben Smith
@slashdotdash
Yes, exactly
Luca Zulian
@lucazulian
ok, but the "normal" stop instead removes the snapshot from the table, right?
Ben Smith
@slashdotdash
Yes
Luca Zulian
@lucazulian
great, thank you!
Scott Ming
@scottming
Hello, everyone, I want to ask a question about design.

When my business evolves and it turns out that I don't have enough information stored in my previous events, what is the appropriate way to handle it?

For example, let's say I'm in the business of taking orders. The company's finance people ask me to give them the people and amounts for paid orders, but my order payment event only has the amount, not the person. Do I need to add the person's information to this event and update the previous events in bulk? Or do I just need to query the read store DB and that's it?

I understand the latter is simpler, but I think it's a bit of a circular dependency.

Benjamin Moss
@drteeth
@scottming there’s a book on that: https://leanpub.com/esversioning
There’s no super clear easy way but the options are reasonable.
Scott Ming
@scottming
Thanks, I'll go read the book first
Scott Ming
@scottming
@drteeth Hello Benjamin, in addition to the book, have you seen any videos about that? Any recommendations?
Marcus Bergstrom
@mquickform
Hi everyone, I'm posting this here (with permission) if anyone is interested in working part-time on a project for Hallstein Water. We use almost exclusively Elixir (and Commanded). Here is the job posting: https://www.linkedin.com/jobs/view/2490871601/.
Günter Glück
@gugl_twitter
Can anybody help me understand why EventStore has original_stream_id and original_stream_version for every entry in stream_events? From my understanding, every event can be part of multiple streams (and, because of the $all stream, is part of at least two). Why do we need to know the original stream and original_stream_version? I'm asking to get a better understanding, as I have to migrate events from one aggregate to a new one due to an initial design mistake.
Ben Smith
@slashdotdash

@gugl_twitter EventStore allows an event to be linked to multiple streams. The original_stream_id and original_stream_version are used to indicate the position of the event in its source stream. Each event is linked to its original stream and the global $all stream. You could also link the event to any other stream that you choose by using the EventStore.link_to_stream/4 function.

When reading events in the global $all stream you can use the original_* fields to know the source stream and version of the event, as well as the event's position in the linked stream.
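For example, a small sketch of linking (the stream identifiers are made up):

{:ok, events} = EventStore.read_stream_forward("source-stream", 0, 1_000)

# Linking does not copy: the events keep their original_stream_id and
# original_stream_version pointing at "source-stream".
:ok = EventStore.link_to_stream("other-stream", :any_version, events)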

Günter Glück
@gugl_twitter
@slashdotdash Thanks for the insight. I guess the source stream would always be the aggregate that handled the command and generated the event, right? What would be some use cases where you'd need to know the source stream/version?
Ben Smith
@slashdotdash
When you use functions such as EventStore.read_all_streams_forward/3 and EventStore.stream_all_forward/2 the events are returned in the order they were appended to the global $all stream, but the stream uuid and stream version fields are populated with the values from the original stream (e.g. the originating aggregate's stream).
Linking events from a source stream into another stream allows you to build your own streams like the $all stream. For example, you could build a stream per aggregate type by using a subscription to all events and then linking the received events into an appropriate stream based on the aggregate type. This then allows further subscribers to the "events by type" streams (e.g. $all-users, $all-foo, etc.).
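A rough sketch of that idea, assuming a plain EventStore catch-up subscription (the stream naming and type mapping are hypothetical):

defmodule EventsByTypeLinker do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil)

  def init(nil) do
    # Catch-up subscription to all events in the global $all stream.
    {:ok, subscription} =
      EventStore.subscribe_to_all_streams("events_by_type", self())

    {:ok, subscription}
  end

  def handle_info({:subscribed, _subscription}, state), do: {:noreply, state}

  def handle_info({:events, events}, subscription) do
    # Link each batch of received events into a stream per event type.
    for {stream, stream_events} <- Enum.group_by(events, &stream_for/1) do
      :ok = EventStore.link_to_stream(stream, :any_version, stream_events)
    end

    :ok = EventStore.ack(subscription, events)
    {:noreply, subscription}
  end

  # Hypothetical mapping from a recorded event to a destination stream;
  # in practice you would derive the aggregate type here.
  defp stream_for(%EventStore.RecordedEvent{event_type: type}),
    do: "$all-" <> type
end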
Günter Glück
@gugl_twitter
I see. So do you think the following would be the best approach to migrate some events from one aggregate type, represented by two identities, to another with a much more fine-grained identity: I would first read all events, link those events to the new aggregates (which I would have to build beforehand?), and then soft delete the two original aggregate streams. Is there a way to rewrite the original_stream_id to point at the new aggregates, or am I missing an easier way of doing such a migration?
Ben Smith
@slashdotdash

Have you looked at Greg Young’s book on versioning event sourced systems?

https://leanpub.com/esversioning

One strategy you could use is “copy & transform”, where you read the events from the source streams and then write a copy of each event to a new stream.
Note that if you link events from one stream to another and then hard delete the source stream, you will also delete the events from the destination stream.
A soft delete would be OK as the events will remain.
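If you go the copy & transform route, a rough sketch (the stream names and the transform/1 function are hypothetical):

{:ok, recorded_events} = EventStore.read_stream_forward("old-stream", 0, 1_000)

new_events =
  Enum.map(recorded_events, fn recorded ->
    %EventStore.EventData{
      event_type: recorded.event_type,
      # transform/1 stands in for whatever rewrites the event data
      # for the new, more fine-grained aggregate identity.
      data: transform(recorded.data),
      metadata: recorded.metadata
    }
  end)

:ok = EventStore.append_to_stream("new-stream", :no_stream, new_events)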
Günter Glück
@gugl_twitter
Thanks. At the moment I'm looking in the direction of using soft delete. From my understanding I would either copy and transform all events to a new event store, or (for only one type or stream) I would append the copied events and therefore change the order of events relative to the non-copied events. I will have another look at the esversioning book though. My problem with soft delete is that the original stream id and original stream version would not change; I guess they would still be linked “originally” to the original source that gets soft deleted. I'll have to do some experiments. Somehow I thought I would find more articles or discussions about this topic. Once I've found a good solution for our case I will try to write a post or wiki article about it to put some learnings out there.
Bruno Castro
@brunohkbx

Hey everyone. I'm getting this error after I deployed my app on heroku:
Fri Apr 23 2021 12:58:18 Consistency timeout waiting for aggregate "9a0f98c8-f1c8-4883-86f2-615979315de9" at version 21 metadata

any idea of how to solve that?

Günter Glück
@gugl_twitter
Hi @brunohkbx It seems that the aggregate handled the command fine, but one of your event handlers is not able to process the event. Are you sure that there is no error thrown by one of the event handlers involved?
Scott Ming
@scottming
If you deploy this application with two nodes subscribed to the same event store, you will also encounter this problem; some configuration can solve it.
@brunohkbx
You can refer to the Commanded documentation on deployment
Ben Smith
@slashdotdash
@brunohkbx To use strong dispatch consistency in a multi-node deployment you either need to use distributed Erlang or use Redis as the Commanded application’s pubsub adapter.
This is to ensure the event handler acknowledgements are broadcast to all nodes in the cluster.
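For example, the pubsub section of the Commanded application config might look something like this (the app name, module, and values are made up):

config :my_app, MyApp.CommandedApp,
  pubsub: [
    phoenix_pubsub: [
      adapter: Phoenix.PubSub.Redis,
      url: "redis://localhost:6379",
      node_name: "node1"
    ]
  ]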
Bruno Castro
@brunohkbx

Thanks. I've been trying to set up Phoenix.PubSub.Redis as the adapter, but no luck so far.
It works locally, but on Heroku it's throwing this error every time:

Mon Apr 26 2021 16:19:54 unable to establish initial redis connection. Attempting to reconnect... metadata
Mon Apr 26 2021 16:19:57 failed to publish broadcast due to closed redis connection metadata

It's coming from this process: CommandedApp.PhoenixPubSub.Tracker_shard0

Config:

pubsub: [
  phoenix_pubsub: [
    url: System.fetch_env!("REDIS_URL"),
    node_name: System.get_env("HEROKU_DYNO_ID", "default"),
    pool_size: String.to_integer(System.get_env("REDIS_POOL_SIZE", "10"))
  ]
]

Thanks in advance :smile:

Bruno Castro
@brunohkbx
I'm also starting another Phoenix.PubSub.Redis in the application. I don't know if it's conflicting with Commanded, but it works locally.
  {Phoenix.PubSub,
   name: MyApp.PubSub,
   adapter: Phoenix.PubSub.Redis,
   url: redis_config[:url],
   node_name: redis_config[:node_name]},
Bruno Castro
@brunohkbx
In case anyone wonders: the problem was the runtime configuration. Moving everything into the init/1 function fixed it.
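A sketch of what that can look like, assuming a Commanded application module named MyApp.CommandedApp:

defmodule MyApp.CommandedApp do
  use Commanded.Application, otp_app: :my_app

  # Resolve environment-dependent config when the app boots,
  # not at compile time.
  def init(config) do
    pubsub = [
      phoenix_pubsub: [
        adapter: Phoenix.PubSub.Redis,
        url: System.fetch_env!("REDIS_URL"),
        node_name: System.get_env("HEROKU_DYNO_ID", "default")
      ]
    ]

    {:ok, Keyword.put(config, :pubsub, pubsub)}
  end
end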
Tomek Rybczyński
@rybex
Hello everyone :) Quick question: is there a way to manually load all events into the event store? I have a list of events in a file and I would like to import all of them into my Commanded app. I will be grateful for any hints
Günter Glück
@gugl_twitter
@rybex That recipe could be a good starting point for you: commanded/recipes#14
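Roughly, it boils down to decoding the events from the file and appending them with EventStore; a sketch (the file format, decoding, and stream name are all assumptions):

events =
  "events.json"
  |> File.read!()
  |> Jason.decode!()
  |> Enum.map(fn %{"event_type" => type, "data" => data} ->
    %EventStore.EventData{event_type: type, data: data}
  end)

:ok = EventStore.append_to_stream("imported-stream", :any_version, events)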
Tomek Rybczyński
@rybex
@gugl_twitter thx. I will try that :+1:
Jonathan Stiansen
@jonathanstiansen
Hey guys! Probably a question that is painfully obvious to most people who understand the ins and outs of Commanded: can you share bounded contexts across projects? How do you go about doing that? My initial thought is that it's just the router you'd need to add, and the rest is all in one sub-context.
Also, does anyone use property-based testing with Commanded? Does anything stand in your way of doing so?
Ben Smith
@slashdotdash
@jonathanstiansen I’ve written an article outlining the various ways you can structure an Elixir application using Commanded: https://10consulting.com/2021/03/18/commanded-application-architecture/