Jakub Konecki
@jkonecki
Are you sure 8082 should be used?
Yevhen Bobrov
@yevhen
ye, I'm running it on this port
Jakub Konecki
@jkonecki
Sorry, I won't be of much help
Yevhen Bobrov
@yevhen
this thing is broken
I'm not even sure if anyone is actually using CosmosDB beyond curiosity
the number of downloads says it all
497 downloads in total. WoW!
Jakub Konecki
@jkonecki
I'm using it for development and it works fine. I use the DocumentDB API though
Yevhen Bobrov
@yevhen
DocumentDB is fine. It has a separate client and connection string format. But the Table API is done via the Azure Table SDK (CloudStorageAccount/CloudTable), and I can't find a way to make it work with the local emulator
also, the package above is not .NET Core compatible, which is kinda strange since it was announced like 2 weeks ago
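to be concrete, the mismatch is something like this (a sketch; the Cosmos Table endpoint format here is from memory, so treat it as an assumption):

```fsharp
// The classic table SDK understands the local emulator shortcut,
// i.e. CloudStorageAccount.Parse accepts this:
let emulatorConnection = "UseDevelopmentStorage=true"

// But a Cosmos DB Table API account needs its own TableEndpoint
// (illustrative format - an assumption, check the portal's string):
let cosmosConnection =
    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>;"
    + "TableEndpoint=https://myaccount.table.cosmosdb.azure.com:443/"

// There was no emulator endpoint speaking the Cosmos Table API wire
// format, hence the dead end with CloudStorageAccount/CloudTable.
```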
ardumez
@ardumez
hi
why is the snapshot included with another event and not stored alone? Is there an example of how to replay a snapshot in a domain class?
ardumez
@ardumez
Is there an example with the event log? Because it's recommended for building projections
Yevhen Bobrov
@yevhen
@ardumez to keep the snapshot completely in sync with the last event - ACID.
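i.e. the snapshot row goes into the same entity-group transaction as the event row (same partition), so both are committed or neither is. Roughly, in raw Azure Table SDK terms - just a sketch of the idea, not SS's actual API, and the entity shapes are hypothetical:

```fsharp
open Microsoft.WindowsAzure.Storage.Table

// Sketch: event row and snapshot row share a PartitionKey, so a single
// entity-group transaction writes both atomically (all-or-nothing).
let writeEventWithSnapshot (table: CloudTable)
                           (eventRow: DynamicTableEntity)
                           (snapshotRow: DynamicTableEntity) =
    let batch = TableBatchOperation()
    batch.Insert eventRow                // the new event
    batch.InsertOrReplace snapshotRow    // snapshot kept in lock-step with it
    table.ExecuteBatchAsync batch        // atomic within one partition
```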
ardumez
@ardumez
ok thanks
Kai
@2moveit
Hi, what is the approximate throughput (events/sec) when reading from the stream? My second question: what's the best way to serialize the data? Right now deserialization seems to be the bottleneck, especially as my events are F# discriminated unions.
Yevhen Bobrov
@yevhen
@2moveit SS doesn't make any assumptions about serialization, you can choose whatever serialization protocol you want
regarding throughput - it's 2000 entities per second per partition, as per Azure table performance targets
it's both read/write
and you have 20K entities/sec limit per storage account
but you can shard your streams over storage accounts (pool) to get higher speeds
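something like this (a sketch; the hashing scheme is made up, any stable mapping from stream to account works):

```fsharp
open Microsoft.WindowsAzure.Storage

// Sketch: deterministically map a stream to one account in the pool,
// so a given stream always lives in the same storage account.
let pickAccount (pool: CloudStorageAccount[]) (streamId: string) =
    // a stable hash - string.GetHashCode() is randomized per process,
    // so don't use it for routing
    let hash = streamId |> Seq.sumBy int
    pool.[hash % pool.Length]
```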
Kai
@2moveit
@yevhen thanks for the information. I know that SS does not make any assumption about serialization. I just wonder why our deserialization is a bottleneck and thought you may have some experience with it
Yevhen Bobrov
@yevhen
well, serialization is usually the 2nd slowest part after IO (write/read). What are you using?
Kai
@2moveit
Newtonsoft.Json with Fable.JsonConverters. I think the problem has to do with the discriminated union in F#: type Event = | MyEvent1 | MyEvent2
I thought that was the standard way. But plain record types are much faster
Yevhen Bobrov
@yevhen
could be. We don't serialize DUs. The normal practice is to convert them to DTOs and back; then you can use ProtoBuf or whatever other serializer
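e.g. a sketch with made-up event names (not your domain, just the shape of the DU-to-DTO mapping):

```fsharp
open System

// Hypothetical DU and a flat, serializer-friendly DTO for it.
type Event =
    | QuantitySet of positionId: string * quantity: decimal
    | ItemAddedManually of positionId: string * itemId: string

[<CLIMutable>]
type EventDto =
    { Tag: string
      PositionId: string
      Quantity: Nullable<decimal>
      ItemId: string }

let toDto = function
    | QuantitySet (p, q) ->
        { Tag = "QuantitySet"; PositionId = p; Quantity = Nullable q; ItemId = null }
    | ItemAddedManually (p, i) ->
        { Tag = "ItemAddedManually"; PositionId = p; Quantity = Nullable(); ItemId = i }

let ofDto dto =
    match dto.Tag with
    | "QuantitySet" -> QuantitySet (dto.PositionId, dto.Quantity.Value)
    | "ItemAddedManually" -> ItemAddedManually (dto.PositionId, dto.ItemId)
    | t -> failwithf "unknown event tag: %s" t
```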
Kai
@2moveit
OK, I'll try to convert them to DTOs. The second step might be ProtoBuf, as it might be faster. Now I'm just a little bit worried about the 2000 entities/s limit. Currently it's quite hard to estimate how many events will appear. The worst case will be snapshots, I guess
Yevhen Bobrov
@yevhen
what kind of stream is that? I presume it's not a DDD business entity aggregate backed by a stream, since 2K/s should be enough for anyone (c) Gates ))
Kai
@2moveit
It's an app to calculate and create offers. We decided to change from saving the state to saving the changes, as we got problems when we changed the format of the state. So our aggregate is an offer project that contains a bill of quantities that the user can define. Within that there are different positions like RequirementPositionAdded, MaterialPositionAdded. Within those positions there might be some inputs like QuantitySet, ItemAddedManually, ItemAddedByReferenceBillOfMaterial. As the UI cannot be completely task-based, many events will be created. We also had to do a "hack": instead of a ReferenceBillOfMaterialLoaded that contains all items, we had to split it into ReferenceBillOfMaterialDataLoaded plus an ItemAddedByReferenceBillOfMaterial per item in the reference bill of material. Otherwise the data is too big.
And yes it's our first time with DDD and ES and SS ;-)
Any suggestions are more than welcome
Yevhen Bobrov
@yevhen
well, there are 100500 different ways to skin a cat )
I'm not familiar with BOM domain
but if it takes too long to load your aggregate due to thousands of events in its stream - snapshots will do the trick.
basically, if it's less than 2K events you can ignore snapshotting, since it will take only a few secs to open (read/replay)
but it also depends on the size of the events
you can read events from Azure in 1K-entity pages
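with the plain table SDK the paging looks roughly like this (a sketch; SS's own reading API differs):

```fsharp
open Microsoft.WindowsAzure.Storage.Table

// Sketch: read a whole partition in 1000-entity pages, following
// continuation tokens until the partition is exhausted.
let readPartition (table: CloudTable) (partitionKey: string) =
    let filter =
        TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey)
    let query = TableQuery<DynamicTableEntity>().Where(filter).Take(System.Nullable 1000)
    let rec loop token acc =
        let segment = table.ExecuteQuerySegmented(query, token)
        let acc = acc @ List.ofSeq segment.Results
        match segment.ContinuationToken with
        | null -> acc
        | next -> loop next acc
    loop null []
```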
Kai
@2moveit
Hehe, yep. I just wonder why we exceed the limits (quantity of events / size of events) - we might just be doing it wrong. So our snapshot will not be possible in table storage either.
But as you said it's difficult to say if you don't know the domain
Yevhen Bobrov
@yevhen
anyway, if nothing works you might try to look at your domain options, such as splitting that single aggregate into sub-entities, each backed by its own stream. So your BOM will be merely a projection
don't do snapshots into table storage, use blobs
also you can snapshot into some kind of cache, such as Memcached or Redis
or even on local disks server-side. it's all trade-offs
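for blobs it's roughly this (a sketch; container and blob naming are made up):

```fsharp
open Microsoft.WindowsAzure.Storage
open Microsoft.WindowsAzure.Storage.Blob

// Sketch: one snapshot blob per stream, versioned by name, so the
// stream only has to remember which snapshot version it points at.
let saveSnapshot (account: CloudStorageAccount) (streamId: string) (version: int) (json: string) =
    let container = account.CreateCloudBlobClient().GetContainerReference "snapshots"
    let blob = container.GetBlockBlobReference (sprintf "%s/%d.json" streamId version)
    blob.UploadTextAsync json
```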
if BOM is just a collection of positions, Position could be an aggregate of its own
just a stupid idea :smile:
BOM may only track IDs of its Positions and you can load/replay Position streams in parallel
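sketch (loadPosition here is hypothetical - it would replay one Position stream into its current state):

```fsharp
// Sketch: the BOM projection only keeps position IDs; rebuilding it
// fans out and replays each Position stream concurrently.
let loadBom (loadPosition: string -> Async<'Position>) (positionIds: string list) =
    positionIds
    |> List.map loadPosition
    |> Async.Parallel   // one replay per position, run in parallel
```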
Kai
@2moveit
Splitting it might be a good idea. I just thought that all dependent stuff should be in one aggregate/stream, but as we do not save our calculation as events, that might be possible. Only the projection/calculation will know that it has to use different positions/aggregates, and that should be fine. Thanks for all those new ideas
Yevhen Bobrov
@yevhen
good luck!