Kai
@2moveit
Newtonsoft.Json with Fable.JsonConverters. I think the problem has to do with the discriminated union in F#: type Event = | MyEvent1 | MyEvent2
I thought that's the standard way, but plain record types are much faster
Yevhen Bobrov
@yevhen
could be. We don't serialize DUs. The normal practice is to convert them to DTOs and back, and then you can use ProtoBuf or whatever other serializer you like
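For illustration, roughly what that DTO round-trip looks like in C# (an F# version would map each union case to a flat record the same way); every name below is made up, nothing comes from the actual project:

```csharp
// Illustrative only: MyEvent1 stands in for one DU case from the question above;
// none of these names come from the actual project.
using Newtonsoft.Json;

public record MyEvent1(string PositionId, decimal Quantity);   // domain event

// Flat, serializer-friendly DTO: primitive fields only, no union/inheritance to reflect over.
public class MyEvent1Dto
{
    public string PositionId { get; set; }
    public decimal Quantity { get; set; }
}

public static class EventMapping
{
    public static MyEvent1Dto ToDto(MyEvent1 e) =>
        new MyEvent1Dto { PositionId = e.PositionId, Quantity = e.Quantity };

    public static MyEvent1 ToDomain(MyEvent1Dto d) =>
        new MyEvent1(d.PositionId, d.Quantity);

    // Any serializer can handle the flat DTO: Json.NET here; ProtoBuf would just need attributes.
    public static string Serialize(MyEvent1Dto d) => JsonConvert.SerializeObject(d);
    public static MyEvent1Dto Deserialize(string json) => JsonConvert.DeserializeObject<MyEvent1Dto>(json);
}
```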
Kai
@2moveit
OK, I'll try converting them to DTOs. A second step might be ProtoBuf, as it might be faster. Now I'm just a little bit worried about the 2000 entities/s. Currently it's quite hard to estimate how many events there will be. Worst case we'll need snapshots, I guess
Yevhen Bobrov
@yevhen
what kind of stream is that? I presume it's not a DDD business entity aggregate backed by a stream, since 2K per sec should be enough for anyone (c) Gates ))
Kai
@2moveit
It's an app to calculate and create offers. We decided to change from saving the state to saving the changes, because we ran into problems whenever we changed the format of the state. So our aggregate is an offer project that contains a bill of quantities that the user can define. Within that there are different positions like RequirementPositionAdded, MaterialPositionAdded. Within those positions there might be some inputs like QuantitySet, ItemAddedManually, ItemAddedByReferenceBillOfMaterial. As the UI cannot be completely task based, many events will be created. We also had to do a "hack": instead of a single ReferenceBillOfMaterialLoaded that contains all items, we had to split it into ReferenceBillOfMaterialDataLoaded plus an ItemAddedByReferenceBillOfMaterial per item in the reference bill of material. Otherwise the data is too big.
And yes it's our first time with DDD and ES and SS ;-)
Any suggestion more than welcome
Yevhen Bobrov
@yevhen
well, there are 100500 different ways to skin a cat )
I'm not familiar with BOM domain
but if it takes too long to load your aggregate due to thousands of events in its stream, snapshots will do the trick.
basically, if it's less than 2K events you can ignore snapshotting, since it will only take a few secs to open (read/replay)
but it also depends on the size of the events
you can read events from Azure in 1K pages
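Roughly what that paged read looks like; the `Stream.ReadAsync<T>(partition, startVersion, sliceSize)` shape is assumed from StreamStone's documented read API and `EventEntity` is a placeholder type, so check both against the version you're on:

```csharp
// Rough sketch of paging a stream in 1K slices. The Stream.ReadAsync<T>(partition, startVersion,
// sliceSize) shape is assumed from StreamStone's documented read API - verify it against your
// version. EventEntity is a placeholder for your own entity type holding deserialized events.
using System.Collections.Generic;
using System.Threading.Tasks;
using Streamstone;

async Task<List<EventEntity>> ReadAllEvents(Partition partition)
{
    var events = new List<EventEntity>();
    var nextSliceStart = 1;

    StreamSlice<EventEntity> slice;
    do
    {
        // Table storage returns at most 1000 entities per query, hence the 1K page size.
        slice = await Stream.ReadAsync<EventEntity>(partition, nextSliceStart, sliceSize: 1000);
        events.AddRange(slice.Events);
        nextSliceStart += slice.Events.Length;
    }
    while (!slice.IsEndOfStream);

    return events;
}
```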
Kai
@2moveit
Hehe, yep. I just wonder why we exceed the limits (quantity of events / size of events), so we might just be doing it wrong. That would also mean our snapshot won't fit in table storage either.
But as you said it's difficult to say if you don't know the domain
Yevhen Bobrov
@yevhen
anyway, if nothing works you might try to look at your domain options, such as splitting that single aggregate into sub-entities, each backed by its own stream. So your BOM will be merely a projection
don't do snapshots into table storage, use blobs
also you can snapshot into some kind of cache, such as Memcached or Redis
or even on local disks server-side. it's all trade-offs
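For the blob option, a minimal sketch using the Azure.Storage.Blobs SDK; the container, naming scheme, and `state` payload are placeholders, not anything StreamStone prescribes:

```csharp
// Sketch of the "snapshots in blobs" option using the Azure.Storage.Blobs SDK. The container,
// naming scheme, and `state` payload are placeholders - this is not a StreamStone API.
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Newtonsoft.Json;

async Task SaveSnapshot(BlobContainerClient container, string streamId, int version, object state)
{
    // One blob per stream, suffixed with the stream version the snapshot was taken at.
    var blob = container.GetBlobClient($"{streamId}/{version}.json");
    await blob.UploadAsync(BinaryData.FromString(JsonConvert.SerializeObject(state)), overwrite: true);
}
```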
if the BOM is just a collection of positions, Position could be an aggregate of its own
just a stupid idea :smile:
The BOM may only track the IDs of its Positions, and you can load/replay the Position streams in parallel
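Building on the paged-read sketch above, replaying several Position streams in parallel could look roughly like this (again assuming the hypothetical `ReadAllEvents` helper and `EventEntity` type):

```csharp
// Sketch of the "BOM as projection" idea: the BOM tracks only Position ids, and each Position
// stream is replayed in parallel. ReadAllEvents and EventEntity are the hypothetical helpers
// from the paged-read sketch earlier; one Partition per Position stream.
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Streamstone;

async Task<List<EventEntity>[]> LoadPositionStreams(IEnumerable<Partition> positionPartitions) =>
    await Task.WhenAll(positionPartitions.Select(ReadAllEvents));
```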
Kai
@2moveit
Splitting it might be a good idea. I just thought that all dependent stuff should be in one aggregate/stream, but as we do not save our calculation as events, that might be possible. Only the projection/calculation will know that it has to use different positions/aggregates, and that should be fine. Thanks for all those new ideas
Yevhen Bobrov
@yevhen
good luck!
Scott Ranger
@scottrangerio

Hey,

Thanks for the project :) I have a small question about virtual partitions. I'm just trying to understand what the use case / requirement is for virtual partitions. My guess would be that by using virtual partitions, we can write to multiple streams (so long as they're in the same table partition) as part of a single entity group transaction, meaning that if any of the stream writes fails we get transactional behaviour. Or maybe it's simply so that multiple streams can be served from the same partition server? Is either of these assumptions close to the mark, or are there different reasons / use cases for virtual partitions?

Thanks!

Yevhen Bobrov
@yevhen
@scottrangerio Hi! Yes, you've got it right! With virtual partitions you may model your whole domain as a single partition while having a virtual stream for every aggregate, and you can use ETG. For example, you may keep a stream directory in the same partition and have all operations be atomic. You may also model simple projections as rows and update them atomically with streams. Some simple apps could be modeled this way. Think of a multi-tenant setup where each tenant is a single partition; 2000 entities/sec per partition is more than enough for the majority of event-sourced apps. It is possible to develop a simple multi-tenant invoicing app using just table storage features.
Further optimization is possible by employing an actor framework such as Orleans and then batching all write operations to a single partition. It is possible to go very far with this simple technology mix (Streamstone + Orleankka). Add durable sagas (you may try Orleankka's switchable behaviors for this) and you can develop a fairly complex app with just table storage and some added cleverness :smile:
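A sketch of the virtual-partition layout described above: several aggregate streams sharing one physical table partition, so their rows land on the same partition server and can take part in entity group transactions. The `"physical-key|virtual-key"` format is an assumption about how StreamStone addresses virtual partitions, and `table` stands for your table client; check both against the readme:

```csharp
// Sketch: two virtual streams inside one physical table partition ("tenant-42").
// The "physical-key|virtual-key" separator is an assumption - verify it against the
// StreamStone readme; `table` stands for your table client instance.
using Streamstone;

var orderPartition   = new Partition(table, "tenant-42|order-1");
var invoicePartition = new Partition(table, "tenant-42|invoice-7");

// Stream.ProvisionAsync creates the header row for each virtual stream.
var orders   = await Stream.ProvisionAsync(orderPartition);
var invoices = await Stream.ProvisionAsync(invoicePartition);
```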
Scott Ranger
@scottrangerio
Awesome! thanks for the detailed and in-depth answer :)
ardumez
@ardumez
Do you know how to back up a Cosmos DB Table API table or an Azure Storage table?
Jakub Konecki
@jkonecki
Take a look at AzCopy. It's not perfect, but it will let you copy the whole table.
ardumez
@ardumez
OK thanks, but is there nothing in the Azure Portal that doesn't require code?
ardumez
@ardumez
There are backups of Cosmos DB every 4h that you can request from Microsoft Support via a ticket
But I don't know whether the Cosmos DB Table API data is backed up as well, or only the DocumentDB JSON data
Jakub Konecki
@jkonecki
Azure Storage geo-replication gives you copies in case of hardware failure. You only need a backup in case you delete something yourself that you shouldn't have. I don't think there is any feature for this in the Azure Portal.
Jakub Konecki
@jkonecki
Hi, are there any breaking changes in 2.x vs 1.x? I've upgraded and everything compiles / tests OK.
Yevhen Bobrov
@yevhen
yes, the sync API methods were removed (due to their removal in the underlying Azure SDK)
Jakub Konecki
@jkonecki
Cool, I wasn't using them anyway ;-)
Yevhen Bobrov
@yevhen
so you should be fine :)
Yevhen Bobrov
@yevhen
pushed v2.1.0. Added an option to disable the built-in entity change tracking
Yevhen Bobrov
@yevhen
it's safe to disable it during normal stream writes. It's only useful during replays when used in conjunction with inline synchronous stream projections, which are rarely used (or not used at all). Disabling it will save a few object allocations. Not really that important, but nice to have ...
Yevhen Bobrov
@yevhen

2.2.0: Release Notes

Now you can restore a Stream header without reading it first (via Open). If you don't pass stream properties to the Stream.From() method, the header will be merged instead of replaced, which means it is enough to have just the previous ETag and Version to restore the header.

Matej Hertis
@mhertis
What does this actually mean, what is the use case? :)
Yevhen Bobrov
@yevhen
The use-case is simple. If you know how other event stores work with regard to optimistic concurrency, they usually require you to pass something like ExpectedVersion when writing to a stream. So you can read an aggregate, remember the last stream version and then use it as the expected one. You can pass this version to the UI and back, which doesn't require a read-before-write like SS did prior to 2.2.0. This fix gives you a similar optimization by allowing you to store the (etag + version) pair and then use it for your next write. Not everyone uses actors to always hold a StreamHeader in memory, so resurrecting this object in a disconnected scenario used to require an additional read.
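A sketch of that disconnected flow; the `Stream.From(partition, etag, version)` overload and the result's property names follow the release-note wording above and should be verified against the 2.2.0 API:

```csharp
// Sketch: remember the (etag, version) pair from the last write and use it to rebuild the
// stream header for the next write, skipping Stream.OpenAsync. The exact Stream.From overload
// and the result's property names follow the release-note wording - verify against 2.2.0.
using System.Threading.Tasks;
using Streamstone;

async Task<string> AppendWithoutRead(Partition partition, string etag, int version, params EventData[] newEvents)
{
    // Rebuild the header from the remembered pair instead of reading the stream first.
    var restored = Stream.From(partition, etag, version);
    var result = await Stream.WriteAsync(restored, newEvents);

    // Hand the fresh ETag (and result.Stream.Version) back to the caller, e.g. via the UI.
    return result.Stream.ETag;
}
```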
Matej Hertis
@mhertis
Ok, thnx, now I get it :)
Matej Hertis
@mhertis
At the time, my thinking was limited to the actors context, sorry. :)
Tobias Kastrup Andersen
@lufthavn
Has anyone figured out how to use SS and the Cosmos emulator? Is it even possible? 😊
Yevhen Bobrov
@yevhen
I think @jkonecki might have some expertise running SS with CosmosDB
@lufthavn
What kind of problem are you having?