Sandro Keil
@sandrokeil
If someone faces memory issues with PDO event store projections, here is a PR: prooph/pdo-event-store#234
Alexander Miertsch
@codeliner
you're welcome @mikemilano
Alexander Miertsch
@codeliner
Anyone here with experience migrating from Travis to GitHub Actions? Our event store test suite no longer runs on Travis (I guess they shut down OSS support some time ago).
Sascha-Oliver Prolic
@prolic
I can migrate this when I have time.
webDEVILopers
@webdevilopers

This is my first implementation of an upcaster. The new event now has a nullable property spotId; for old events, null is added.

final class Upcaster extends SingleEventUpcaster
{
    public function upcast(Message $message): array
    {
        if (! $this->canUpcast($message)) {
            return [$message];
        }

        return $this->doUpcast($message);
    }

    protected function canUpcast(Message $message): bool
    {
        return $message instanceof TestResultReported;
    }

    protected function doUpcast(Message $message): array
    {
        // already upcasted: the payload contains spotId, nothing to do
        if (array_key_exists('spotId', $message->payload())) {
            return [$message];
        }

        /** @var TestResultReported $message */
        // re-create the event, passing null for the new nullable spotId property
        $newMessage = TestResultReported::with(
            $message->testId(),
            $message->placeId(),
            $message->placeName(),
            null,
            false,
            $message->testResult(),
            $message->testType(),
            $message->testedAt(),
            $message->guestId(),
            $message->guestContactInformation(),
            $message->acceptPrivacyPolicy(),
            $message->reportedAt()
        );

        return [$newMessage];
    }
}

Is this the way to go?

Should I add a separate upcaster per event / aggregate root?

Fritz Gerneth
@fritz-gerneth
@webdevilopers I'd follow the pattern of 'do only one thing' - one upcasting transformation. I.e. do not base it on the event but on the transformation.
Sascha-Oliver Prolic
@prolic
As @fritz-gerneth said
webDEVILopers
@webdevilopers

Thanks for the feedback. Could you elaborate on "do not base it on the event"?
For instance use a single upcaster for each transformation?

In our case - using the symfony bundles:

    Prooph\EventStore\Plugin\UpcastingPlugin:
        arguments:
            - '@Trexxon\Common\Infrastructure\Prooph\EventStore\TestResultSpotUpcaster'
            - '@Trexxon\Common\Infrastructure\Prooph\EventStore\TestResultMaybeV2Upcaster'
        tags:
            - { name: 'prooph_event_store.default.plugin' }
Fritz Gerneth
@fritz-gerneth
@webdevilopers yes. i.e. follow / keep the single responsibility pattern: do one thing (transformation). it's not so much a question of for whom you do it (what events) as of what you do (what kind of transformation)
webDEVILopers
@webdevilopers
Thanks for clearing this up. Sure, following SRP and DDD patterns, the final "Upcaster" will get a better name. :)
webDEVILopers
@webdevilopers

Following this tweet by @gquemener I tried to implement an in-memory read model of a view model:

final class PgsqlEventStorePlaceDetailsFinder
{
    private ProjectionManager $projectionManager;

    public function __construct(ProjectionManager $projectionManager)
    {
        $this->projectionManager = $projectionManager;
    }

    public function detailsOfId(PlaceId $id): Details
    {
        $metadataMatcher = (new MetadataMatcher())
            ->withMetadataMatch('_aggregate_id', Operator::EQUALS(), $id->toString());

        // ad-hoc in-memory query: replay the aggregate's events and fold them into a state array
        $query = $this->projectionManager->createQuery();
        $query
            ->fromStream('place_stream', $metadataMatcher)
            ->when([
                PlaceAdded::class => function ($state, PlaceAdded $event) {
                    $state = [
                        'accountId'      => $event->accountId()->toString(),
                        'placeId'        => $event->placeId()->toString(),
                        'name'           => $event->name()->toString(),
                        'retentionTime'  => RetentionTime::withDefaultDays()->toDays(),
                        'addedAt'        => $event->createdAt()->format(DATE_ATOM),
                    ];

                    return $state;
                },
                PlaceChanged::class => function ($state, PlaceChanged $event) {
                    $state['name'] = $event->name()->toString();

                    return $state;
                },
            ])
            ->run()
        ;

        $placeState = $query->getState();

        return Details::fromArray($placeState);
    }
}

Thoughts?

Gildas Quéméner
@gquemener
Hello !
I guess that would work.
You could also directly communicate with your event store (which is what the projection manager is most likely doing) and remove some extra layers: https://github.com/gquemener/repositoring/blob/main/src/Infrastructure/Repository/Prooph/ProophEventStoreTodoRepository.php#L76-L101.
The idea being that an in-memory synchronous implementation is enough until you encounter performance issues (I would advise setting up mechanisms to track this, if not already done).
The projection manager is responsible for tracking the position of each projector, so that projections can be resumed. As your implementation doesn't use that capability, I suggest bypassing it.
webDEVILopers
@webdevilopers

You could also directly communicate with your event store

This is an older version of prooph where the query comes from the projection manager.
Here is an example of a newer version:

$eventStore
    ->createQuery()
    ->fromAll()

http://docs.getprooph.org/event-store/standard_projections/overview.html

Is that what you suggested?

Gildas Quéméner
@gquemener

I'm suggesting to inject your Prooph\EventStore\EventStore service into your finder and use EventStore::load directly (check the link I shared for an example), instead of using the projection manager, which is aimed at tracking the status of your projectors (see the sketch below).
The reason being that your projector does not need to be tracked, because it reads all the relevant events from the start, every time.

That being said, what you did is probably working fine (however, you're adding unnecessary overhead by querying the "projections" table, aren't you?).
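
For illustration, a minimal sketch of what a load-based finder could look like, assuming prooph/event-store v7's EventStore::load() signature, that the persisted events deserialize back into the event classes, and reusing the stream name, metadata matcher and the domain classes (PlaceId, Details, PlaceAdded, PlaceChanged, RetentionTime) from the snippet above:

use Prooph\EventStore\EventStore;
use Prooph\EventStore\Metadata\MetadataMatcher;
use Prooph\EventStore\Metadata\Operator;
use Prooph\EventStore\StreamName;

final class PgsqlEventStorePlaceDetailsFinder
{
    private EventStore $eventStore;

    public function __construct(EventStore $eventStore)
    {
        $this->eventStore = $eventStore;
    }

    public function detailsOfId(PlaceId $id): Details
    {
        $metadataMatcher = (new MetadataMatcher())
            ->withMetadataMatch('_aggregate_id', Operator::EQUALS(), $id->toString());

        // load() returns an Iterator of messages; fold them into a plain state array
        $events = $this->eventStore->load(new StreamName('place_stream'), 1, null, $metadataMatcher);

        $state = [];

        foreach ($events as $event) {
            if ($event instanceof PlaceAdded) {
                $state = [
                    'accountId'     => $event->accountId()->toString(),
                    'placeId'       => $event->placeId()->toString(),
                    'name'          => $event->name()->toString(),
                    'retentionTime' => RetentionTime::withDefaultDays()->toDays(),
                    'addedAt'       => $event->createdAt()->format(DATE_ATOM),
                ];
            } elseif ($event instanceof PlaceChanged) {
                $state['name'] = $event->name()->toString();
            }
        }

        return Details::fromArray($state);
    }
}

The foreach replaces the when() handlers; since no projection position is persisted, nothing touches the projections table.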

Matthias Breddin
@lunetics
hey, just curious. event-engine.io is offline, still a thing?
Sandro Keil
@sandrokeil
The domain is suspended. We have to move the docs to a new domain
Sascha-Oliver Prolic
@prolic
Merry Christmas
Alexander Miertsch
@codeliner
Happy new Year :)
Sascha-Oliver Prolic
@prolic
:fireworks:
Zacharias
@netiul
Happy new year! :)
Alexander Miertsch
@codeliner

🎉 The new #proophboard wiki is online 🎉

👉 https://wiki.prooph-board.com

𝗖𝗵𝗮𝗻𝗴𝗲𝗹𝗼𝗴

💄New layout supporting light and dark mode as well as different font sizes

🧭️ New sidebar with better structure and much easier navigation

💡More content incl. #eventstorming basics + guideline

🙏 Please have a look and tell me what you think. Feedback is very much appreciated.

Andrew Magri
@andrew.magri_gitlab

Good afternoon, our application is currently implemented using the PostgresSingleStreamStrategy.
So far we have around 15k players and in the Player stream we have around 8 million events (related to the 15k players)

When we have a huge amount of events at the same time (sometimes because of a huge backlog), we find that the query that loads the actual player aggregate takes much more time (around half a second) than in normal scenarios.

SELECT * FROM "_889c6853a117aca83ef9d6523335dc065213ae86" WHERE metadata->>? = $1 AND metadata->>? = $2 AND CAST(metadata->>? AS INT) > $3 AND no >= $4 ORDER BY no ASC LIMIT $5

Technically it's not an issue per se for our product, as it's still fast, but we were wondering if moving to a different strategy might actually solve it and make it more future-proof.

So we opted to see how it would behave by moving to AggregateStreamStrategy, by changing oneStreamPerAggregate to true in the player AggregateRepository. We also did a projection to migrate the events into the different streams (tables), and therefore ended up with 15k tables. In itself this helps with querying the aggregate repository, because each stream is much smaller since it's per player.

Having said that, projections deteriorate drastically since they have to consume over 15k tables to be updated. We knew this could happen, as it is stated in the documentation of the AggregateStreamStrategy (http://docs.getprooph.org/event-store/implementations/pdo_event_store/variants.html#3-2-1-2-5-1).

PS we did update the projections to use fromCategory instead of fromStream

Has anyone managed to overcome such a scenario?

Alexander Miertsch
@codeliner

@andrew.magri_gitlab Did you consider using snapshots to speed up aggregate loading?
8 million events in a single stream shouldn't be a problem.

Regarding AggregateStreamStrategy:
The idea would be to write an event store plugin that hooks into the transaction to write the events (or references to them) also into a category stream, and let projections subscribe only to the category stream and not to the aggregate streams (rough sketch below).

Another option is to switch to prooph/event-store v8 and use EventStoreDB ;) That would be the safest path for the future even with a lot more players and events in the system.
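
For reference, a rough sketch of such a plugin, assuming prooph/event-store v7's ActionEventEmitterEventStore plugin API; the "$ct-" category-stream naming, the listener priority and the stream-name parsing are illustrative assumptions, not something the library prescribes:

use Prooph\Common\Event\ActionEvent;
use Prooph\EventStore\ActionEventEmitterEventStore;
use Prooph\EventStore\Plugin\AbstractPlugin;
use Prooph\EventStore\StreamName;

// Hypothetical plugin: mirrors every appended event into one stream per category
// (e.g. "$ct-player"), so projections can subscribe to a single stream instead of
// 15k aggregate streams.
final class CategoryStreamMirrorPlugin extends AbstractPlugin
{
    public function attachToEventStore(ActionEventEmitterEventStore $eventStore): void
    {
        $this->listenerHandlers[] = $eventStore->attach(
            ActionEventEmitterEventStore::EVENT_APPEND_TO,
            function (ActionEvent $actionEvent) use ($eventStore): void {
                /** @var StreamName $streamName */
                $streamName = $actionEvent->getParam('streamName');

                // guard: ignore our own appends to category streams (avoids recursion)
                if (0 === strpos($streamName->toString(), '$ct-')) {
                    return;
                }

                // with AggregateStreamStrategy the stream name is "<category>-<aggregateId>";
                // this naive parsing assumes the category itself contains no dash
                $category = strstr($streamName->toString(), '-', true) ?: $streamName->toString();

                // buffer the events so the original iterator can still be consumed by
                // the actual append that runs after this listener
                $events = iterator_to_array($actionEvent->getParam('streamEvents'));
                $actionEvent->setParam('streamEvents', new \ArrayIterator($events));

                // the category stream is assumed to exist already (created up front)
                $eventStore->appendTo(new StreamName('$ct-' . $category), new \ArrayIterator($events));
            },
            1000 // run before the default append listener so both writes share the transaction
        );
    }
}

A one-off projection could backfill the category stream from the existing aggregate streams before switching the projections over.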

Andrew Magri
@andrew.magri_gitlab

Yes, we are using snapshots. I guess on a daily basis it's around 50ms to load, but when we have this issue that the source is halted and we get huge batches of data, it struggles a bit. We find that it sometimes takes half a second... with data of around one week sent together.

So it's reasonable, but still.

Another option is to switch to prooph/event-store v8 and use EventStoreDB ;) That would be the safest path for the future even with a lot more players and events in the system

So with EventStoreDB I would get better performance with the AggregateStreamStrategy?

we are currently using prooph/event-store v7.5.6

Alexander Miertsch
@codeliner
EventStoreDB organizes streams this way, yes. And then you can have category streams or even an "all" stream with internal links to the events. Stream subscriptions are also very powerful.
Andrew Magri
@andrew.magri_gitlab
good morning @codeliner
thank you very much for your assistance. We will definitely look into the EventStoreDB
Alexander Miertsch
@codeliner
cool! sounds good @andrew.magri_gitlab
zodimo
@zodimo
Hello
Alexander Miertsch
@codeliner
Hi!
Alexander Miertsch
@codeliner
Hi, I've started a new video series "From Online Event Storming to amazing software" with two videos already published. If you're interested in some details about prooph board and our coding bot Cody, check them out: https://youtu.be/0FAgsPNqUV4
zodimo
@zodimo
I will have a look. Thank you for sharing
George
@gdsmith
Hello all. Are people using event-driven microservices here? If so, what are you using as the transport for the messages that your service is producing? Do you have a long-lived store of those external events, or are they only available to anything listening at the time they happen? Thanks in advance.
Alexander Miertsch
@codeliner
@gdsmith One of our clients uses prooph/event-store as a transport. They wanted to use Kafka two years ago but were not able to set it up in their Kubernetes cluster. The Postgres event store was meant to be an interim solution, but to this day it hasn't been replaced.
Every service has its own local event store, and then there is a shared event store with each service having a public stream in that shared store. There was some trouble with projections consuming too many resources, but this was mainly caused by the setup of the Docker containers and K8s cluster.
For prooph board we use AWS SQS as an event-driven transport.
George
@gdsmith
@codeliner thanks for the info. For prooph board are those external events in SQS just dropped when you’re finished then?
I guess that would make sense if all apps are event-sourced, as you can just persist everything from remote to local after events arrive.
But say you wanted to bring up a new service that relied on one of those streams/queues, how would you handle that?
Zacharias
@netiul
When creating a complex aggregate id based on other values, how do you approach it when you want to be able to extract those values from the id?
I guess this cannot be satisfied with UUID v5, as it is one-way SHA-1 hashing.
Fritz Gerneth
@fritz-gerneth
@netiul don't create artificial IDs. if your aggregate already has IDs (i.e. from the business logic), use these. UUID is great for a lot of things, but there is no 'must use' for it; it's just a convenient option for aggregates that have no ID of their own
as an example: we have a service that scrapes a REST API every X minutes, calculates changes, and triggers these changes as events (i.e. turns a REST API into an event-based one). The aggregate IDs on our side are not UUIDs but the IDs the REST API provides
in another scenario we have composite IDs like DS-UUID1[xxxx]
which works fine too :)
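
For what it's worth, a minimal sketch of a composite ID value object along those lines; the class name, the "DS-<uuid>[<suffix>]" format and the use of ramsey/uuid are purely illustrative assumptions:

use Ramsey\Uuid\Uuid;
use Ramsey\Uuid\UuidInterface;

// Hypothetical composite aggregate id of the form "DS-<uuid>[<suffix>]"
final class DataSourceItemId
{
    private UuidInterface $dataSourceId;
    private string $externalKey;

    private function __construct(UuidInterface $dataSourceId, string $externalKey)
    {
        $this->dataSourceId = $dataSourceId;
        $this->externalKey = $externalKey;
    }

    public static function fromParts(UuidInterface $dataSourceId, string $externalKey): self
    {
        return new self($dataSourceId, $externalKey);
    }

    public static function fromString(string $id): self
    {
        // parse "DS-<uuid>[<suffix>]" back into its components
        if (! preg_match('/^DS-([0-9a-f\-]{36})\[(.+)\]$/i', $id, $matches)) {
            throw new \InvalidArgumentException('Invalid composite id: ' . $id);
        }

        return new self(Uuid::fromString($matches[1]), $matches[2]);
    }

    public function dataSourceId(): UuidInterface
    {
        return $this->dataSourceId;
    }

    public function externalKey(): string
    {
        return $this->externalKey;
    }

    public function toString(): string
    {
        return sprintf('DS-%s[%s]', $this->dataSourceId->toString(), $this->externalKey);
    }
}

toString()/fromString() are symmetric, so the components stay extractable from the string form without any hashing.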
Fritz Gerneth
@fritz-gerneth
@gdsmith there are various means to replay an event stream to various targets. I.e. you can use a projector to re-send all messages to an SQS queue (sketch below). We on our end use RabbitMQ for communication.
Alternatively you can query the event store directly from your service too
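
As a hedged illustration of the projector idea above, assuming prooph's v7 ProjectionManager API; the projection name, the 'player' category and the $publish callable wrapping your queue client (SQS, RabbitMQ, ...) are hypothetical:

use Prooph\Common\Messaging\Message;
use Prooph\EventStore\Projection\ProjectionManager;

// Replays every event of a category and re-sends it to a queue.
// $publish is a hypothetical callable wrapping your SQS/RabbitMQ client.
function replayToQueue(ProjectionManager $projectionManager, callable $publish): void
{
    $projectionManager
        ->createProjection('replay_to_queue')
        ->init(function (): array {
            return [];
        })
        ->fromCategory('player')
        ->whenAny(function (array $state, Message $event) use ($publish): array {
            $publish([
                'message_name' => $event->messageName(),
                'payload'      => $event->payload(),
                'created_at'   => $event->createdAt()->format(DATE_ATOM),
            ]);

            return $state;
        })
        ->run(false); // single pass: stop once the existing events have been replayed
}

Because it is a named projection, its position is persisted, so a later catch-up run would only re-send events appended since the last run.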
Zacharias
@netiul
@fritz-gerneth makes sense!
Zacharias
@netiul
On second thought, do you take database performance into consideration when using composite IDs? Databases like Postgres perform better with the uuid data type.
Fritz Gerneth
@fritz-gerneth
as long as you are indexing the key properly (i.e. PK) that should not be too much of an issue. The same limitations/issues as with PKs in general apply. True enough, UUIDs can be index-optimized, but imho that's a performance optimization you'd only need at very large scale
Generally we do not take database performance into consideration at all. With read models / CQRS you don't care about database schemas etc. in general, but just optimize each read model for a single query (or the other way around: create a read model for each query)
and if performance really becomes a problem we just delete the non-performing read model and create a new one from scratch