🎉 The new #proophboard wiki is online 🎉
💄 New layout supporting light and dark mode as well as different font sizes
🧭️ New sidebar with better structure and much easier navigation
💡 More content, incl. #eventstorming basics + guideline
🙏 Please have a look and tell me what you think. Feedback is very much appreciated.
Good afternoon, our application is currently implemented using the …
So far we have around 15k players, and in the Player stream we have around 8 million events (related to those 15k players).
When we have a huge amount of events at the same time (sometimes because of a huge backlog), we find that the query that loads the actual player aggregate takes much more time (around half a second) than in normal scenarios.
SELECT * FROM "_889c6853a117aca83ef9d6523335dc065213ae86" WHERE metadata->>? = $1 AND metadata->>? = $2 AND CAST(metadata->>? AS INT) > $3 AND no >= $4 ORDER BY no ASC LIMIT $5
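One thing worth checking before switching strategies (an assumption on my side, not something from the thread): whether Postgres can use an expression index for that metadata filter at all. The key names below (`_aggregate_type`, `_aggregate_id`) are the usual prooph metadata keys, but verify them against the actual plan with `EXPLAIN (ANALYZE, BUFFERS)` first; the index name is illustrative.

```sql
-- Hypothetical expression index matching the filters of the query above.
-- Run EXPLAIN (ANALYZE, BUFFERS) on the real query first to confirm
-- these are the keys being filtered on and that no such index exists yet.
CREATE INDEX CONCURRENTLY idx_player_stream_aggregate
    ON "_889c6853a117aca83ef9d6523335dc065213ae86"
    ((metadata->>'_aggregate_type'), (metadata->>'_aggregate_id'), no);
```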
Technically it's not an issue per se for our product, as it is still fast, but we were wondering if moving to a different strategy might actually solve it and make it more future-proof.
So we opted to see how it would behave by moving to the AggregateStreamStrategy, by changing … to true in the player AggregateRepository. We also wrote a projection to migrate the events on the … into the different streams (tables), and therefore ended up with 15k tables. In itself this helps with querying the aggregate repository, because each stream is much smaller since it is per player.
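For reference, a minimal sketch of what that switch can look like when wiring the repository by hand. Class and argument order follow prooph/event-sourcing v5's `AggregateRepository`; the surrounding services (`$eventStore`, `$aggregateTranslator`, `$snapshotStore`) and the `Player` class are placeholders from this thread's context, so double-check against your own setup:

```php
<?php
// Sketch only: wiring a repository with one stream per aggregate
// (prooph/event-sourcing v5 constructor order; verify against your version).
use Prooph\EventSourcing\Aggregate\AggregateRepository;
use Prooph\EventSourcing\Aggregate\AggregateType;

$playerRepository = new AggregateRepository(
    $eventStore,                                          // Prooph\EventStore\EventStore
    AggregateType::fromAggregateRootClass(Player::class),
    $aggregateTranslator,                                 // AggregateTranslator instance
    $snapshotStore,                                       // optional, may be null
    null,                                                 // stream name derived per aggregate
    true                                                  // oneStreamPerAggregate
);
```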
Having said that, projections deteriorate drastically, since they have to consume over 15k tables to be updated. We knew this was possible, as it is stated in the documentation of the AggregateStreamStrategy (http://docs.getprooph.org/event-store/implementations/pdo_event_store/variants.html#3-2-1-2-5-1).
PS: we did update the projections to use fromCategory instead of …
Has anyone managed to overcome such a scenario?
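For anyone following along, a category-based projection roughly looks like this (a sketch using the prooph/event-store v7 projections API; the projection name and `player` category are illustrative, assuming aggregate streams are named `player-<id>`):

```php
<?php
// Sketch: one projection consuming the whole "player" category instead of
// subscribing to each of the 15k aggregate streams individually.
$projection = $projectionManager->createProjection('player_overview');

$projection
    ->fromCategory('player') // matches all streams named "player-<id>"
    ->whenAny(function (array $state, \Prooph\Common\Messaging\Message $event): array {
        // update the read model here
        return $state;
    })
    ->run();
```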
@andrew.magri_gitlab Did you consider using snapshots to speed up aggregate loading?
8 million events in a single stream shouldn't be a problem.
The idea would be to write an event store plugin that hooks into the transaction to write events (or references) also into a category stream and let projections only subscribe to the category and not the aggregate streams.
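The plugin idea above could be sketched roughly like this. This is only an outline under several assumptions: it uses the prooph/event-store v7 plugin mechanism (`ActionEventEmitterEventStore`), the category stream name is made up, and the transactional and iterator-rewinding concerns mentioned in the thread are deliberately omitted:

```php
<?php
// Sketch of the idea: mirror appended events into one shared category stream
// so projections can subscribe to that single stream. NOT production code:
// transaction handling and iterator reuse are omitted.
use Prooph\Common\Event\ActionEvent;
use Prooph\EventStore\ActionEventEmitterEventStore;
use Prooph\EventStore\Plugin\AbstractPlugin;
use Prooph\EventStore\StreamName;

final class CopyToCategoryStreamPlugin extends AbstractPlugin
{
    public function attachToEventStore(ActionEventEmitterEventStore $eventStore): void
    {
        $this->listenerHandlers[] = $eventStore->attach(
            ActionEventEmitterEventStore::EVENT_APPEND_TO,
            function (ActionEvent $event) use ($eventStore): void {
                // Guard against recursion: skip writes to the category stream itself.
                if ($event->getParam('streamName')->toString() === 'player_category') {
                    return;
                }

                // Append the same events (or just references to them)
                // to a single category stream.
                $eventStore->appendTo(
                    new StreamName('player_category'),
                    $event->getParam('streamEvents')
                );
            }
        );
    }
}
```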
Another option is to switch to prooph/event-store v8 and use EventStoreDB ;) That would be the safest path for the future even with a lot more players and events in the system.
Yes, we are using snapshots. On a normal day I'd guess it's around 50ms to load, but when we hit this issue where the source is halted and we get huge batches of data, it struggles a bit; we find it sometimes takes half a second, with around a week's worth of data sent together.
So it's reasonable, but still.
> Another option is to switch to prooph/event-store v8 and use EventStoreDB ;) That would be the safest path for the future even with a lot more players and events in the system
so with EventStoreDB I would have better performance for …
we are currently using …
Hey, I'm thinking about a revival of prooph/event-sourcing but in a totally new way and it would be nice to get some feedback regarding the idea:
@sandrokeil had the idea to create a plugin for Cody (https://youtu.be/0FAgsPNqUV4) that generates framework agnostic domain model code: aggregates, value objects, messages, handlers, ... from a prooph board event map.
One reason to abandon prooph/event-sourcing was that it creates a framework dependency in your business logic, something you don't really want. So instead you should use the package as inspiration only. But what if you could have a code generator that generates all the glue code for you, in the namespace of your choice?
One can then connect the result with the framework of their choice. We are thinking about prooph/event-store and Ecotone packages that can be used in combination to also generate framework-related glue code (e.g. a repository implementation, PHP 8 attributes for Ecotone, ...).
Would you be interested in such a package?
@darrylhein good point. The context is super important. For companies with multiple teams all working with the same basic setup and maintaining software over a long period of time, framework decoupling is much more important than for smaller single-team projects. A framework often influences the way problems are solved in code. What works well today can become a huge problem tomorrow when you want to scale (technically and in team size).
Another good reason for decoupling is much cleaner business logic that is easier to understand and change over time.
But a framework-agnostic domain model means more glue code is required to make it work with your current framework of choice. If you automate glue code generation as well as parts of the domain model code itself, then you have a win-win situation.
if it is possible to have a generator that can update the code with my changes mixed in later, I'd probably be more interested
That's actually the case. It works like Rector or a code sniffer: Rector has rules to upgrade parts of the code to a new PHP version, and it modifies your code without destroying it. Cody can do the same. If a VO already exists, Cody does not touch it! But if you want to add a new interface and/or method to all VOs, Cody can easily do that while keeping the original implementation in place.
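The non-destructive behaviour described here can be pictured as a guard inside the generator. This is purely illustrative pseudologic, not Cody's actual implementation (which works on the AST rather than whole files):

```php
<?php
// Illustrative only: the general contract of a non-destructive generator:
// generate if missing, leave existing implementations alone.
function generateValueObject(string $path, string $code): void
{
    if (file_exists($path)) {
        // Existing VO: keep the hand-written implementation untouched.
        // A real tool would instead inspect the AST and add only
        // missing interfaces/methods.
        return;
    }

    file_put_contents($path, $code);
}
```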
We are maintaining a single long-lived project (ever since we started using prooph :)), so there is no need to upgrade multiple things, just to make sure things keep running / evolve with the least effort :) (Also, no hostility meant, I am being honest but not judging; there absolutely is value in what you suggest.)
When I talk about magic, I mean magic in code. I like stupid code. I like obvious code. I've spent way too much of my life debugging / figuring out why something magically does not work (any kind of magic routing / loading / ...). I prefer repetitive over magic.
When we (the team) implement new features, the basic setup is the part that takes the least time. Setting up read models / projections is what takes the most time (repetitive, but way more time-consuming). That's the place where I personally would like to cut effort (especially since we use JSON functions, and these are a PITA).