Zacharias
@netiul
Happy new year! :)
Alexander Miertsch
@codeliner

🎉 The new #proophboard wiki is online 🎉

👉 https://wiki.prooph-board.com

Changelog

💄 New layout supporting light and dark mode as well as different font sizes

🧭 New sidebar with better structure and much easier navigation

💡 More content incl. #eventstorming basics + guideline

🙏 Please have a look and tell me what you think. Feedback is very much appreciated.

Andrew Magri
@andrew.magri_gitlab

Good afternoon, our application is currently implemented using the PostgresSingleStreamStrategy.
So far we have around 15k players, and the Player stream holds around 8 million events (related to those 15k players).

When we receive a huge number of events at the same time (sometimes because of a huge backlog), we find that the query that loads the actual player aggregate takes much more time (around half a second) than in normal scenarios.

SELECT * FROM "_889c6853a117aca83ef9d6523335dc065213ae86" WHERE metadata->>? = $1 AND metadata->>? = $2 AND CAST(metadata->>? AS INT) > $3 AND no >= $4 ORDER BY no ASC LIMIT $5

Technically it's not an issue per se for our product, as it's still fast, but we were wondering if moving to a different strategy might actually solve it and make the system more future-proof.

So we opted to see how it would behave by moving to the AggregateStreamStrategy, changing oneStreamPerAggregate to true in the player AggregateRepository. We also wrote a projection to migrate the events into the different streams (tables), and therefore ended up with 15k tables. In itself this helps with querying the aggregate repository, because each stream is much smaller since it's per player.

Having said that, projection performance deteriorates drastically, since projections have to consume over 15k tables to be updated. We knew this could happen, as it is stated in the documentation of the AggregateStreamStrategy (http://docs.getprooph.org/event-store/implementations/pdo_event_store/variants.html#3-2-1-2-5-1).

PS: we did update the projections to use fromCategory instead of fromStream (see the sketch below).

Has anyone managed to overcome such a scenario?
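
A minimal sketch of such a fromCategory projection, assuming prooph/event-store v7's ProjectionManager (the projection name and state handling here are illustrative):

    <?php
    // Sketch only: $projectionManager is a configured
    // Prooph\EventStore\Projection\ProjectionManager.
    $projectionManager->createProjection('player_stats')
        // consumes every stream whose name starts with "player-",
        // i.e. all 15k per-aggregate streams of the "player" category
        ->fromCategory('player')
        ->whenAny(function (array $state, \Prooph\Common\Messaging\Message $event): array {
            // apply each event to the projection state here
            return $state;
        })
        ->run(false); // single pass; pass true to keep the projection running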

Alexander Miertsch
@codeliner

@andrew.magri_gitlab Did you consider using snapshots to speed up aggregate loading?
8 million events in a single stream shouldn't be a problem.
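
For reference, a minimal sketch of wiring a snapshot store into an aggregate repository, assuming prooph/event-sourcing v5 together with prooph/pdo-snapshot-store (the Player class is illustrative; constructor details abbreviated):

    <?php
    use Prooph\EventSourcing\Aggregate\AggregateRepository;
    use Prooph\EventSourcing\Aggregate\AggregateType;
    use Prooph\EventSourcing\EventStoreIntegration\AggregateTranslator;
    use Prooph\SnapshotStore\Pdo\PdoSnapshotStore;

    // Sketch only: $eventStore and $pdo are assumed to be configured already.
    $repository = new AggregateRepository(
        $eventStore,
        AggregateType::fromAggregateRootClass(Player::class),
        new AggregateTranslator(),
        new PdoSnapshotStore($pdo) // loading now starts from the latest snapshot
    );

Snapshots still have to be taken somewhere (e.g. by a snapshotter process); the repository only reads them.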

Regarding AggregateStreamStrategy:
The idea would be to write an event store plugin that hooks into the transaction to write events (or references) also into a category stream and let projections only subscribe to the category and not the aggregate streams.
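
A rough sketch of that plugin idea against prooph/event-store v7's plugin API; the category derivation and stream naming here are purely illustrative:

    <?php
    use Prooph\Common\Event\ActionEvent;
    use Prooph\EventStore\ActionEventEmitterEventStore;
    use Prooph\EventStore\Plugin\AbstractPlugin;
    use Prooph\EventStore\StreamName;

    final class MirrorToCategoryStreamPlugin extends AbstractPlugin
    {
        public function attachToEventStore(ActionEventEmitterEventStore $eventStore): void
        {
            $this->listenerHandlers[] = $eventStore->attach(
                ActionEventEmitterEventStore::EVENT_APPEND_TO,
                function (ActionEvent $event) use ($eventStore): void {
                    $streamName = $event->getParam('streamName')->toString();
                    $pos = strpos($streamName, '-');
                    if (false === $pos) {
                        return; // not a per-aggregate stream; also prevents recursion
                    }
                    // hypothetical naming: "player-<uuid>" is mirrored into "player_category"
                    $eventStore->appendTo(
                        new StreamName(substr($streamName, 0, $pos) . '_category'),
                        $event->getParam('streamEvents')
                    );
                },
                -1000 // low priority: run after the original append
            );
        }
    }

One caveat worth checking in a real implementation: the streamEvents param is an iterator and may already be consumed after the original append.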

Another option is to switch to prooph/event-store v8 and use EventStoreDB ;) That would be the safest path for the future even with a lot more players and events in the system.

Andrew Magri
@andrew.magri_gitlab

Yes, we are using snapshots. On a normal day it's around 50ms to load, but when the source is halted and we get huge batches of data, it struggles a bit; we find that it sometimes takes half a second... with around a week's worth of data sent together.

So it's reasonable, but still.

Another option is to switch to prooph/event-store v8 and use EventStoreDB ;) That would be the safest path for the future even with a lot more players and events in the system

So with EventStoreDB I would get better performance with the AggregateStreamStrategy?

we are currently using prooph/event-store v7.5.6

Alexander Miertsch
@codeliner
EventStoreDB organizes streams this way, yes. And then you can have category streams or even an "all" stream with internal links to the events. Stream subscriptions are also very powerful.
Andrew Magri
@andrew.magri_gitlab
Good morning @codeliner
Thank you very much for your assistance. We will definitely look into EventStoreDB.
Alexander Miertsch
@codeliner
cool! sounds good @andrew.magri_gitlab
zodimo
@zodimo
Hello
Alexander Miertsch
@codeliner
Hi!
Alexander Miertsch
@codeliner
Hi, I've started a new video series "From Online Event Storming to amazing software" with two videos already published. If you're interested in some details about prooph board and our coding bot Cody, check them out: https://youtu.be/0FAgsPNqUV4
zodimo
@zodimo
I will have a look. Thank you for sharing
George
@gdsmith
Hello all. Are people using event-driven microservices here? If so, what are you using as the transport for the messages that your services produce? Do you have a long-lived store of those external events, or are they only available to whatever is listening at the time they happen? Thanks in advance.
Alexander Miertsch
@codeliner
@gdsmith One of our clients uses prooph/event-store as a transport. They wanted to use Kafka two years ago but were not able to set it up in their Kubernetes cluster. The Postgres event store was meant to be an interim solution, but to this day it hasn't been replaced.
Every service has its own local event store, and then there is a shared event store in which each service has a public stream. There was some trouble with projections consuming too many resources, but this was mainly caused by the setup of the Docker containers and the K8s cluster.
For prooph board we use AWS SQS as an event-driven transport.
George
@gdsmith
@codeliner thanks for the info. For prooph board, are those external events in SQS just dropped when you're finished with them?
I guess that would make sense if all apps are event sourced, as you can just persist everything from remote to local after events arrive.
But say you wanted to bring up a new service that relied on one of those streams/queues; how would you handle that?
Zacharias
@netiul
When creating a complex aggregate ID based on other values, how do you approach it when you want to be able to extract those values from the ID?
I guess this cannot be done with UUID v5, as it is one-way SHA-1 hashing.
Fritz Gerneth
@fritz-gerneth
@netiul don't create artificial IDs. If your aggregate already has IDs (i.e. from the business logic), use those. UUID is great for a lot of things, but there is no 'must use' for it; it's just a convenient option for aggregates that have no ID of their own.
As an example: we have a service that scrapes a REST API every X minutes, calculates changes, and triggers these changes as events (i.e. turns a REST API into something event-based). The aggregate IDs on our side are not UUIDs but the IDs the REST API provides.
In another scenario we have composite IDs like DS-UUID1[xxxx],
which works fine too :)
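
For instance, a hypothetical composite ID value object where both parts stay extractable (the "<datasource>-<uuid>" format is modelled after the DS-UUID example above, not taken from any package):

    <?php
    use Ramsey\Uuid\Uuid;
    use Ramsey\Uuid\UuidInterface;

    final class PlayerId
    {
        private function __construct(
            private readonly string $dataSource,
            private readonly UuidInterface $uuid
        ) {
        }

        public static function fromString(string $id): self
        {
            // split at the first dash only; the uuid part contains dashes itself
            [$dataSource, $uuid] = explode('-', $id, 2);

            return new self($dataSource, Uuid::fromString($uuid));
        }

        public function dataSource(): string
        {
            return $this->dataSource;
        }

        public function toString(): string
        {
            return $this->dataSource . '-' . $this->uuid->toString();
        }
    }
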
Fritz Gerneth
@fritz-gerneth
@gdsmith there are various means to replay an event stream to various sources. E.g. you can use a projector to re-send all messages to an SQS queue (sketched below); we on our end use RabbitMQ for communication.
Alternatively, you can query the event store directly from your service too.
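
A sketch of that replay idea, assuming prooph/event-store v7 projections and the AWS SDK for PHP (queue URL, projection name, and message format are illustrative):

    <?php
    use Aws\Sqs\SqsClient;

    $sqs = new SqsClient(['region' => 'eu-west-1', 'version' => 'latest']);

    // Re-send every historic event of a stream to SQS, then stop.
    $projectionManager->createProjection('replay_to_sqs')
        ->fromStream('player')
        ->whenAny(function (array $state, $event) use ($sqs): array {
            $sqs->sendMessage([
                'QueueUrl'    => 'https://sqs.eu-west-1.amazonaws.com/123456789012/events', // hypothetical
                'MessageBody' => json_encode([
                    'message_name' => $event->messageName(),
                    'payload'      => $event->payload(),
                ]),
            ]);

            return $state;
        })
        ->run(false); // one pass over the history, no continuous run
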
Zacharias
@netiul
@fritz-gerneth makes sense!
Zacharias
@netiul
On second thought, do you take database performance into consideration when using composite IDs? Databases like Postgres perform better with the uuid data type.
Fritz Gerneth
@fritz-gerneth
As long as you are indexing the key properly (i.e. as a PK), that should not be too much of an issue. The same limitations/issues as with PKs in general apply. True enough, UUIDs can be index-optimized, but imho that's a performance optimization you'd only need in very large instances.
Generally we do not take database performance into consideration at all. With read models / CQRS you don't care about database schemas in general; you just optimize each read model for a single query (or the other way around: create a read model for each query).
And if performance really becomes a problem, we just delete the non-performing read model and create a new one from scratch.
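
That "delete and rebuild" step, sketched against prooph/event-store v7's ProjectionManager (names illustrative; $readModel is an implementation of prooph's ReadModel interface):

    <?php
    // Drop the non-performing read model projection, including its emitted events...
    $projectionManager->deleteProjection('player_stats', true);

    // ...then deploy the new projection code and rebuild from event 1.
    $projectionManager->createReadModelProjection('player_stats_v2', $readModel)
        ->fromCategory('player')
        ->whenAny(function (array $state, $event): array {
            // re-apply every historic event to the fresh read model here
            return $state;
        })
        ->run(); // keeps running and stays up to date afterwards
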
Alexander Miertsch
@codeliner
@gdsmith it's exactly how @fritz-gerneth described it. But of course if you have multiple teams it requires some coordination. At the moment that's not a problem for prooph board since only one team is working on the product, but it could change in the future :)
For our client this was more of a problem and one reason they decided to use persistent public streams. A team can set up a new service without coordinating that activity with other teams. They just consume all public streams they need and that's it. Definitely a huge advantage, but it's not for free.
Also all teams use prooph board to design their services and document the events available in the public streams.
They have bi-weekly architecture meetings with all team leads on prooph board to discuss public event changes in the streams: like new event versions, new available event types, new consumers and sometimes even new stream versions.
They found a very good way (with our help ;)) to handle their architecture, but without proper collaboration and documentation you can end up in a mess.
Sascha-Oliver Prolic
@prolic
can we please have one more star on https://github.com/prooph/pdo-event-store/ ? thanks
Zacharias
@netiul
Done
Alexander Miertsch
@codeliner
:clap:
Sascha-Oliver Prolic
@prolic
nice, more than 100 stars now! :fireworks:
Sascha-Oliver Prolic
@prolic
prooph pdo-event-store 1.14 released, supporting PHP 8.1
https://github.com/prooph/pdo-event-store/releases/tag/v1.14.0
Alexander Miertsch
@codeliner
awesome! thx @prolic
Zacharias
@netiul
Nice 👍
Sascha-Oliver Prolic
@prolic
I don't have much time to improve the event store client at the moment. I am working on some other open source project these days (not in PHP).
But I'll get back to it some time.
Alexander Miertsch
@codeliner

Hey, I'm thinking about a revival of prooph/event-sourcing but in a totally new way and it would be nice to get some feedback regarding the idea:

@sandrokeil had the idea to create a plugin for Cody (https://youtu.be/0FAgsPNqUV4) that generates framework-agnostic domain model code: aggregates, value objects, messages, handlers, ... from a prooph board event map.
One reason to abandon prooph/event-sourcing was that it creates a framework dependency in your business logic. Something you don't really want, so instead you should use the package as inspiration only. But what if you could have a code generator that generates all the glue code for you in the namespace of your choice?

  • Value Objects
  • Event Sourced Aggregates
  • Commands
  • Events
  • Repository Interface

One can then connect the result with the framework of their choice. We're thinking about prooph/event-store and Ecotone packages that can be used in combination to also generate framework-related glue code (e.g. repository implementations, PHP 8 attributes for Ecotone, ...).

Would you be interested in such a package?

Sascha-Oliver Prolic
@prolic
me personally, no
but if that's useful for others, I have nothing against it
Fritz Gerneth
@fritz-gerneth
I personally don't use generators either. In particular with constructor promotion & readonly, creating VOs/Events/Commands has become very easy.
Alexander Miertsch
@codeliner
@fritz-gerneth well, creating the pure structure might be easy, but how do you handle from/to plain data conversion, validation, public contracts, ...
You can derive all of that from a single sticky note on an event map + it serves as documentation as well
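
As a concrete (hand-written, hypothetical) example of the kind of glue such a generator would emit for a single value object, covering plain-data conversion and validation in one place:

    <?php
    final class Score
    {
        private function __construct(private readonly int $points)
        {
        }

        public static function fromInt(int $points): self
        {
            // validation lives in the VO, derived from the event map contract
            if ($points < 0) {
                throw new \InvalidArgumentException('Score must be >= 0, got ' . $points);
            }

            return new self($points);
        }

        public function toInt(): int
        {
            return $this->points;
        }
    }
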
Sandro Keil
@sandrokeil
We are not talking about old-school code generation :smile: we generate sophisticated code via the PHP AST, which can be changed by the developer and will not be overwritten. You just implement the business logic and don't waste time anymore implementing boring (glue) code. The time saving is amazing. You can also write your own code generator or extend a specific implementation. Cody is your Copilot and does all the monkey work. :smile:
Fritz Gerneth
@fritz-gerneth
Consider me a skeptic - a fan of no magic in my code :) (too much bad experience :)) but that really is preference / my 2c :). One thing I am sure you have already considered is backwards compatibility within the model (i.e. new event properties, removed event properties; not everything needs upcasters, but even upcasters want to be generated :))
Darryl Hein
@darrylhein
my 2¢: the number of times I've changed a major piece of architecture (i.e. framework, db, etc.) on a whole project is near 0, so the benefit to me of having something decoupled from frameworks is low. Because I maintain dozens of smaller projects, having it coupled to a framework or library so I can update it to get new features is better than trying to update code within the project. If it is possible to have a generator that can update the code with my changes mixed in later, I'd probably be more interested, but the reliability of that concerns me (it'd have to be 99.9% reliable or give a few clear instructions on what to check). There are parts that make sense to have decoupled (like a payment processor or email/message provider), but the number of times I've used a decoupling are so few that the extra effort has almost never been worth it. In this case, it seems like I'd be at least partially tied to Cody – not in the same way as a framework, but still reliant on it for future updates. It's always a tough call to find the balance between coupled and decoupled... I don't know that I have ever found the perfect spot.
Alexander Miertsch
@codeliner
@fritz-gerneth that's the reason why I'm asking for feedback. Many developers seem to be skeptical. We're looking for ways to change that ;) "Magic" is a good keyword here. We somehow need to communicate clearly that Cody is not magic. I'm still not sure about the best way to do that.
Let's try it like this:
Usually developers are lazy. They try to automate repeating tasks and make their lives easier. That's the key idea behind Cody as well.
Once you've found rules and conventions for organizing your code base, you'll do the same coding over and over again:
Create this class here, configure message routing there, set up an API endpoint, map data to VOs, define a contract (e.g. OpenAPI/AsyncAPI files), and so on. Isn't that boring?
Why not automate the boring parts? Writing the automation logic is a purely technical and, depending on how crazy you want it to be, also a challenging task. Something developers usually like. If you write and maintain the logic yourself, would you still say it's magic?
Alexander Miertsch
@codeliner

@darrylhein good point. The context is super important. For companies with multiple teams all working with the same basic setup and maintaining software over a long period of time, framework decoupling is much more important than for smaller single-team projects. A framework often influences the way problems are solved in code. What works well today can become a huge problem tomorrow when you want to scale (technically and in team size).
Another good reason for decoupling is much cleaner business logic that is easier to understand and change over time.
But a framework-agnostic domain model means more glue code is required to make it work with your current framework of choice. If you automate glue code generation as well as parts of the domain model code itself, then you have a win-win situation.

if it is possible to have a generator that can update the code with my changes mixed in later, I'd probably be more interested

That's actually the case. It works like Rector or a code sniffer. In Rector you have rules to upgrade parts of the code to a new PHP version, and it modifies your code without destroying it. Cody can do the same. If a VO exists already, Cody does not touch it! But if you want to add a new interface and/or method to all VOs, Cody can easily do that while keeping the original implementation in place.
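
Roughly how such a non-destructive rewrite looks with nikic/php-parser v4 (the rule here, adding an empty toArray() stub only where it is missing, is made up for illustration):

    <?php
    use PhpParser\BuilderFactory;
    use PhpParser\Node;
    use PhpParser\NodeTraverser;
    use PhpParser\NodeVisitorAbstract;
    use PhpParser\ParserFactory;
    use PhpParser\PrettyPrinter\Standard;

    // Sketch only: $code holds the source of an existing class.
    $ast = (new ParserFactory())->create(ParserFactory::PREFER_PHP7)->parse($code);

    $traverser = new NodeTraverser();
    $traverser->addVisitor(new class extends NodeVisitorAbstract {
        public function enterNode(Node $node)
        {
            // only touch classes that do NOT have the method yet;
            // existing implementations are left exactly as they are
            if ($node instanceof Node\Stmt\Class_ && null === $node->getMethod('toArray')) {
                $node->stmts[] = (new BuilderFactory())->method('toArray')->makePublic()->getNode();
            }

            return null;
        }
    });

    echo (new Standard())->prettyPrint($traverser->traverse($ast));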

Fritz Gerneth
@fritz-gerneth

We are maintaining a single long-lived project (ever since starting to use prooph :)), so no need to upgrade multiple things, just making sure things keep running / evolve with the least effort :) (Also, no hostility meant; I am being honest but not judging, and there absolutely is value in what you suggest.)

When I talk about magic, I mean magic in code. I like stupid code. I like obvious code. I've spent way too much time of my life debugging / figuring out why something magically does not work (any kind of magic routing / loading / ...). I prefer repetitive over magic.

When we (the team) implement new features, the basic setup is the stuff that takes the least time. Setting up read models / projections is what takes the most time (repetitive, but way more time-consuming). That's the place where I personally would like to cut effort (especially since we use JSON functions and these are a PITA).