Alexander Miertsch
@codeliner
cool! sounds good @andrew.magri_gitlab
zodimo
@zodimo
Hello
Alexander Miertsch
@codeliner
Hi!
Alexander Miertsch
@codeliner
Hi, I've started a new video series "From Online Event Storming to amazing software" with two videos already published. If you're interested in some details about prooph board and our coding bot Cody, check them out: https://youtu.be/0FAgsPNqUV4
zodimo
@zodimo
I will have a look. Thank you for sharing
George
@gdsmith
Hello all. Are people using event-driven microservices here? If so, what are you using as the transport for the messages your services produce? Do you have a long-lived store of those external events, or are they only available to whatever is listening at the time they happen? Thanks in advance
Alexander Miertsch
@codeliner
@gdsmith One of our clients uses prooph/event-store as a transport. They wanted to use Kafka two years ago but were not able to set it up in their Kubernetes cluster. The Postgres event-store was meant to be an interim solution, but it hasn't been replaced to this day.
Every service has its own local event store, and there is a shared event store in which each service has a public stream. There was some trouble with projections consuming too many resources, but that was mainly caused by the setup of the Docker containers and the K8s cluster.
For prooph board we use AWS SQS as an event-driven transport.
George
@gdsmith
@codeliner thanks for the info. For prooph board, are those external events in SQS just dropped once you're finished with them?
I guess that would make sense if all apps are ES, as you can just persist everything from remote to local after events arrive.
But say you wanted to bring up a new service that relied on one of those streams/queues, how would you handle that?
Zacharias
@netiul
When creating a complex aggregate id based on other values, how do you approach it when you want to be able to extract those values from the id?
I guess this cannot be satisfied with UUID v5, as it is a one-way SHA-1 hash
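A quick illustration of the point about UUID v5: it is deterministic, but the inputs cannot be recovered from the result. A Python sketch; the namespace UUID is made up:

```python
import uuid

# UUID v5 hashes a namespace UUID plus a name with SHA-1: the same inputs
# always produce the same id, but the name cannot be recovered from it.
NAMESPACE = uuid.UUID("12345678-1234-5678-1234-567812345678")  # hypothetical

id_a = uuid.uuid5(NAMESPACE, "customer-42|order-7")
id_b = uuid.uuid5(NAMESPACE, "customer-42|order-7")
assert id_a == id_b  # deterministic: good for idempotent lookups
# There is no inverse of uuid5(); "customer-42" and "order-7" are gone.
```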
Fritz Gerneth
@fritz-gerneth
@netiul don't create artificial IDs. If your aggregate already has IDs (i.e. from the business logic), use those. UUID is great for a lot of things, but there is no 'must use' for it, just a convenient way for aggregates that have no ID of their own
as an example: we have a service that scrapes a REST API every X minutes, calculates changes, and triggers those changes as events (i.e. turns a REST API into something event-based). The aggregate IDs on our side are not UUIDs but the IDs the REST API provides
in another scenario we have composite IDs like DS-UUID1[xxxx]
which works fine too :)
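A composite id in that spirit can be built and then parsed back into its parts, which answers the original extraction question. The shape below (prefix, UUID, bracketed suffix) is only loosely modelled on the DS-UUID1[xxxx] example; names and format are made up for illustration:

```python
import re
import uuid

# Hypothetical composite aggregate id: "<prefix>-<uuid>[<suffix>]".
def build_id(prefix: str, base: uuid.UUID, suffix: str) -> str:
    return f"{prefix}-{base}[{suffix}]"

def parse_id(composite: str) -> tuple[str, uuid.UUID, str]:
    # Word prefix, 36-char hyphenated UUID, word suffix in brackets.
    m = re.fullmatch(r"(\w+)-([0-9a-f-]{36})\[(\w+)\]", composite)
    if m is None:
        raise ValueError(f"not a composite id: {composite}")
    return m.group(1), uuid.UUID(m.group(2)), m.group(3)

cid = build_id("DS", uuid.UUID("c0ffee00-0000-4000-8000-000000000000"), "x1")
prefix, base, suffix = parse_id(cid)
assert (prefix, suffix) == ("DS", "x1")
```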
Fritz Gerneth
@fritz-gerneth
@gdsmith there are various means to replay an event stream to various targets. E.g. you can use a projector to re-send all messages to an SQS queue. We on our end use RabbitMQ for communication.
Alternatively, your service can also query the event store directly
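The replay idea can be sketched as a loop that reads a stream from the beginning and re-publishes each event. `load_events` and `publish` below are hypothetical stand-ins for an event-store client and a queue producer; prooph's actual projector API differs:

```python
from typing import Callable, Iterable

# Re-publish every event of a stream, starting from a given position.
# Idempotent consumers make re-sends safe.
def replay(load_events: Callable[[str, int], Iterable[dict]],
           publish: Callable[[dict], None],
           stream: str, from_position: int = 0) -> int:
    count = 0
    for event in load_events(stream, from_position):
        publish(event)
        count += 1
    return count

# Usage with in-memory fakes standing in for the store and the queue.
store = {"orders": [{"no": 1, "type": "OrderPlaced"},
                    {"no": 2, "type": "OrderShipped"}]}
sent = []
n = replay(lambda s, p: store[s][p:], sent.append, "orders")
assert n == 2 and sent[0]["type"] == "OrderPlaced"
```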
Zacharias
@netiul
@fritz-gerneth makes sense!
Zacharias
@netiul
On second thought, do you take database performance into consideration when using composite ids? Databases like Postgres perform better with the uuid data type
Fritz Gerneth
@fritz-gerneth
as long as you index the key properly (i.e. as a PK) that should not be too much of an issue. The same limitations/issues as with PKs in general apply. True enough, UUIDs can be index-optimized, but imho that's a performance optimization you'd only need at very large scale
Generally we do not take database performance into consideration at all. With read models / CQRS you don't care about database schemas etc. in general, you just optimize each read model for a single query (or, the other way around, create a read model for each query)
and if performance really becomes a problem we just delete the non-performing read model and create a new one from scratch
Alexander Miertsch
@codeliner
@gdsmith it's exactly how @fritz-gerneth described it. But of course if you have multiple teams it requires some coordination. At the moment that's not a problem for prooph board since only one team is working on the product, but it could change in the future :)
For our client this was more of a problem and one reason they decided to use persistent public streams. A team can set up a new service without coordinating that activity with other teams. They just consume all public streams they need and that's it. Definitely a huge advantage, but it's not for free.
Also all teams use prooph board to design their services and document the events available in the public streams.
They have bi-weekly architecture meetings with all team leads on prooph board to discuss public event changes in the streams: like new event versions, new available event types, new consumers and sometimes even new stream versions.
They found a very good way (with our help ;)) to handle their architecture, but without proper collaboration and documentation you can end up in a mess.
Sascha-Oliver Prolic
@prolic
can we please have one more star on https://github.com/prooph/pdo-event-store/ ? thanks
Zacharias
@netiul
Done
Alexander Miertsch
@codeliner
:clap:
Sascha-Oliver Prolic
@prolic
nice, more than 100 stars now! :fireworks:
Sascha-Oliver Prolic
@prolic
prooph pdo-event-store 1.14 released, supporting PHP 8.1
https://github.com/prooph/pdo-event-store/releases/tag/v1.14.0
Alexander Miertsch
@codeliner
awesome! thx @prolic
Zacharias
@netiul
Nice 👍
Sascha-Oliver Prolic
@prolic
I don't have much time to improve the event store client at the moment. I am working on some other open source project these days (not in PHP).
But I'll get back to it some time.
Alexander Miertsch
@codeliner

Hey, I'm thinking about reviving prooph/event-sourcing, but in a totally new way, and it would be nice to get some feedback on the idea:

@sandrokeil had the idea to create a plugin for Cody (https://youtu.be/0FAgsPNqUV4) that generates framework agnostic domain model code: aggregates, value objects, messages, handlers, ... from a prooph board event map.
One reason to abandon prooph/event-sourcing was that it creates a framework dependency in your business logic, something you don't really want. So instead you were supposed to use the package as inspiration only. But what if you could have a code generator that generates all the glue code for you in the namespace of your choice?

  • Value Objects
  • Event Sourced Aggregates
  • Commands
  • Events
  • Repository Interface

One can then connect the result with the framework of their choice. We're thinking about prooph/event-store and Ecotone packages that can be used in combination to also generate framework-related glue code (e.g. a repository implementation, PHP 8 attributes for Ecotone, ...)

Would you be interested in such a package?

Sascha-Oliver Prolic
@prolic
me personally, no
but if it's useful for others, I have nothing against it
Fritz Gerneth
@fritz-gerneth
I personally don't use generators either. In particular with constructor promotion & readonly, creating VOs/Events/Commands has become very easy
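For readers outside PHP: Fritz is referring to PHP 8's constructor property promotion and readonly properties, which reduce a hand-written value object to little more than its constructor signature. A rough analogue, purely for illustration, is a Python frozen dataclass:

```python
from dataclasses import dataclass, FrozenInstanceError

# A minimal value object: immutable, with equality by field values,
# comparable in spirit to a PHP class with promoted readonly properties.
@dataclass(frozen=True)
class Money:
    amount: int    # minor units, e.g. cents
    currency: str

price = Money(1999, "EUR")
assert price == Money(1999, "EUR")  # value semantics
try:
    price.amount = 0                # immutability is enforced
except FrozenInstanceError:
    pass
```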
Alexander Miertsch
@codeliner
@fritz-gerneth well, creating the pure structure might be easy, but how do you handle from/to plain-data conversion, validation, public contracts, ...?
You can derive all of that from a single sticky note on an event map, plus it serves as documentation as well
Sandro Keil
@sandrokeil
We are not talking about old-school code generation :smile: we generate sophisticated code via the PHP AST which can be changed by the developer and will not be overwritten. You just implement the business logic and don't waste time anymore implementing boring (glue) code. The time savings are amazing. You can also write your own code generator or extend a specific implementation. Cody is your copilot and does all the monkey work. :smile:
Fritz Gerneth
@fritz-gerneth
consider me a skeptic, a fan of no magic in my code :) (too much bad experience :)) but that really is preference / my 2c :). One thing I am sure you have already considered is backwards compatibility within the model (i.e. new event properties, removed event properties; not everything needs upcasters, but even the upcasters want to be generated :))
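For anyone unfamiliar with the term: an upcaster rewrites an old event payload into the current schema before the model sees it. A minimal sketch; the event and its field names are made up:

```python
# Hypothetical upcaster: v1 of a "UserRegistered" event stored a single
# "name" field; v2 splits it and records a schema version.
def upcast_user_registered_v1_to_v2(payload: dict) -> dict:
    first, _, last = payload["name"].partition(" ")
    return {"version": 2, "first_name": first, "last_name": last}

assert upcast_user_registered_v1_to_v2({"name": "Ada Lovelace"}) == {
    "version": 2, "first_name": "Ada", "last_name": "Lovelace"}
```

In practice, upcasters are chained (v1→v2→v3, ...) so that stored history never needs to be rewritten.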
Darryl Hein
@darrylhein
my 2¢: the number of times I've changed a major piece of architecture (i.e. framework, db, etc.) on a whole project is near 0, so the benefit to me of having something decoupled from frameworks is low. Because I maintain dozens of smaller projects, having it coupled to a framework or library, so I can update it to get new features, is better for me than trying to update code within the project. If it is possible to have a generator that can update the code with my changes mixed in later, I'd probably be more interested, but the reliability of that concerns me (it'd have to be 99.9% reliable or give a few clear instructions on what to check). There are parts that make sense to decouple (like a payment processor or email/message provider), but the number of times I've used a decoupling is so few that the extra effort has almost never been worth it. In this case, it seems like I'd be at least partially tied to Cody – not in the same way as a framework, but still reliant on it for future updates. It's always a tough call to find the balance between coupled and decoupled... I don't know that I have ever found the perfect spot.
Alexander Miertsch
@codeliner
@fritz-gerneth that's the reason why I'm asking for feedback. Many developers seem to be skeptical. We're looking for ways to change that ;) "Magic" is a good keyword here. We somehow need to communicate clearly that Cody is not magic. I'm still not sure about the best way to do that.
Let's try it like this:
Usually developers are lazy. They try to automate repetitive tasks and make their lives easier. That's the key idea behind Cody as well.
Once you've found rules and conventions for organizing your code base, you'll do the same coding over and over again:
Create this class here, configure message routing there, set up an API endpoint, map data to VOs, define a contract (e.g. OpenAPI/AsyncAPI files) and so on. Isn't that boring?
Why not automate the boring parts? Writing the automation logic is a purely technical and, depending on how crazy you want it to be, also a challenging task. Something developers usually like. If you write and maintain the logic yourself, would you still say it's magic?
Alexander Miertsch
@codeliner

@darrylhein good point. The context is super important. For companies with multiple teams all working with the same basic setup and maintaining software over a long period of time, framework decoupling is much more important than for smaller single-team projects. A framework often influences the way problems are solved in code. What works well today can become a huge problem tomorrow when you want to scale (technically and in team size).
Another good reason for decoupling is much cleaner business logic that is easier to understand and change over time.
But a framework-agnostic domain model means more glue code is required to make it work with your current framework of choice. But if you automate glue code generation, as well as parts of the domain model code itself, then you have a win-win situation.

if it is possible to have a generator that can update the code with my changes mixed in later, I'd probably be more interested

that's actually the case. It works like Rector or a code sniffer. In Rector you have rules that upgrade parts of the code to a new PHP version. It modifies your code without destroying it. Cody can do the same. If a VO already exists, Cody does not touch it! But if you want to add a new interface and/or method to all VOs, Cody can easily do that while keeping the original implementation in place.
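That "modify without destroying" behaviour is ordinary AST rewriting. A toy version of the idea in Python (Rector and Cody work on the PHP AST instead, via PHP Parser): add a method to a class only when it is not already there, leaving existing code untouched:

```python
import ast

# Parse the source, inspect the target class, and append the method only
# if no method with that name exists yet. A second run is a no-op.
def ensure_method(source: str, class_name: str, method_src: str) -> str:
    tree = ast.parse(source)
    method = ast.parse(method_src).body[0]
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef) and node.name == class_name:
            if any(isinstance(m, ast.FunctionDef) and m.name == method.name
                   for m in node.body):
                return source  # already implemented: do not touch it
            node.body.append(method)
    return ast.unparse(tree)  # requires Python 3.9+

src = "class Email:\n    def __init__(self, value):\n        self.value = value\n"
out = ensure_method(src, "Email",
                    "def to_dict(self):\n    return {'value': self.value}")
assert "to_dict" in out
# Running it again leaves the code exactly as it is.
assert ensure_method(out, "Email", "def to_dict(self): pass") == out
```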

Fritz Gerneth
@fritz-gerneth

we are maintaining a single long-lived project (ever since we started using prooph :)), so no need to upgrade multiple things, just making sure things keep running / evolve with the least effort :) (Also, no hostility meant, I am being honest but not judging; there absolutely is value in what you suggest)

when I talk about magic, I mean magic in code. I like stupid code. I like obvious code. I've spent way too much of my life debugging / figuring out why something magically does not work (any kind of magic routing / loading / ...). I prefer repetitive over magic.

When we (the team) implement new features, the basic setup is the part that takes the least time. Setting up read models / projections is what takes the most (repetitive, but way more time consuming). That's where I personally would like to cut effort (esp. since we use JSON functions and those are a PITA).

Darryl Hein
@darrylhein
@codeliner My experience with Rector and other CS tools has been mixed. Most of the time it's perfect, but I recently had a Rector change break production because it wasn't terribly obvious what it did. I think Rector and similar tools have the same issue as frameworks in that area – both can result in invisible changes.
Darryl Hein
@darrylhein
I think for almost all teams it's about deciding where to couple and where to decouple. A 100% decoupled system would be the most ideal, but there's no way that's ever becoming reality – eventually you have to run on a computer, probably with a BIOS... unless you really want to go that deep. So it's just a question of how far, or which parts, you couple.
Alexander Miertsch
@codeliner
@darrylhein I don't know how critical that bug was, but compare the time saved by Rector against that one bug, and also against the imaginary number of bugs humans would cause doing such a refactoring by hand.
The good thing about automation is: if it works once, it'll always work for that case. You save time and reduce the risk of silly mistakes made by developers who are bored by the task.
Most bugs I produce are the result of boring work. When I'm 100% focused on the task, my code usually works. With or without tests ...
If I have to map data back and forth, write validation logic by hand, etc., it happens more often that a bug slows me down. A spelling mistake, for example. Of course, most of the time I fix the bug even before committing the code, but it's still a waste of time. You know, like wondering for an hour why the hell it's not working, just to find out that you made a spelling mistake in a mapping ...
Darryl Hein
@darrylhein
@codeliner Well, users weren't able to create the primary item in the system. Of course, that system doesn't have great testing, so that didn't help.
But yes, automation is great – I use it all the time. I'm on the Rector, CS, etc. train.
But I don't think automation > framework coupling.
There are apparently thousands of companies coupled to all sorts of things. I don't know that teams are less likely to be coupled as they get larger. From my experience (in multi-billion-dollar companies), they were using an awful lot of coupling. It made it easier for them to hire.
But in the end, it's just a choice of what level you decouple at. If prooph goes that (code gen) way, I'll have to see how it works and go from there.
Silviu
@silviuvoicu
Let's assume that the product/project is quite complex, and to solve it we first need some design before we code anything. Here we have multiple choices and formats, either on a wall (oh well, after the pandemic) or in a platform like prooph board. After the design, we move to the implementation. Now we probably have to make some choices about what we will use: a particular framework to help us with the implementation, maybe some auto-generated code, what kind of event store to use for the app, or what database would be appropriate for the read models, among other things. With a framework, we'll have a way to write our aggregates, events, projections and maybe, maybe some tests, without caring so much about the infrastructure, because the framework has made those choices for me, and if I don't have special requirements to add new functionality, then I am fine. With code generation, I'll generate the structure of some pieces like aggregates, events and so on, and it will probably let me write the infrastructure code that glues everything together. If it also generated some tests from my model space, even better, maybe in given/when/then style, but usually that's somewhat coupled to infrastructure code, which is either provided as-is by a framework or is custom-made. I think that whatever we choose on this road, it has to focus on the problem we have to solve, provide the tools (either via a framework or via a code-gen tool), but also provide some flexibility, the ability to do some customization when needed. But to test your idea, maybe not just some opinionated suggestions are what you need, but also an MVP, a working example, you know, go to your customers and ask/communicate with them :), either companies or developers.
Alexander Miertsch
@codeliner

@silviuvoicu

maybe not just some opinionated suggestions are what you need, but also an MVP, a working example, you know, go to your customers and ask/communicate with them :), either companies or developers.

yeah, that's the point.
We're in direct exchange with our customers, and we demonstrated one real-world result of what is possible with prooph board + Cody in a live demo session at EventSourcing.Live last year. BTW, the recording is now available on YouTube: https://www.youtube.com/watch?v=lKB8-l9MEvs
There you can see the potential, incl. fully working FE and BE code generation.

We could release the Cody implementation used in the live demo as an example. But my fear is that people would think it's the only way to work with Cody:
Cody itself is an API. It gives you access to your design on prooph board. It is then up to you what you do with that information: generate some boilerplate code, scenario tests, or a fully working low-code platform.
Is this too low-level for developers? Because it requires too much learning effort?

I'd love to share the excitement with you. But I try to figure out what would be the best way to do that.

Fritz Gerneth
@fritz-gerneth
@codeliner some interesting things in the video for sure :) some follow up questions:
  • how do you handle validation (as it's skipped in the video)?
  • how do you handle logic (be it validation, read model logic, aggregate logic) that you cannot define in JSON (i.e. we have complex internal aggregate states etc.)? Obviously I can add it to the generated code later on, but how would later updates on the board (and code re-generation) affect my custom-added code?
  • how does the board handle "branches", think of different features worked on in parallel? how would branches be merged on the board level?
  • as we have to maintain our software for years (the current installment for ~10 years), we have seen a lot of these tools come and go ;) not being maintained is one thing as long as they remain in our build chain, but a SaaS really breaks away, forcing you to look into alternatives. I know the generated code is in our git, but it still breaks established workflows. Any plans to offer the board as a self-hosted / locally run version? Ideally as something I can define as a dependency of the project and start like "any other code generator" (just with a graphical UI instead of a CLI). Running locally would also be preferred from a security & confidentiality perspective
Alexander Miertsch
@codeliner
@fritz-gerneth
Before I answer your questions I'd like to stress that the answers related to code generation are specific to the setup shown in the video. That said, you can build your own code generation logic the way you want/need it. @sandrokeil put a lot of effort into an abstraction layer on top of PHP Parser to simplify code generation. It's organized in libs for specific tasks. Check out the GitHub org Open Code Modeling