
Alexander Prooks
@aprooks
With sharding per Azure storage account you can have unlimited IOPS in theory. But there's still a limit per each stream
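A minimal sketch of that sharding idea, assuming you deterministically map each stream to one of several storage-account connection strings by hashing the stream id (the account names and pool are purely illustrative, not anything Streamstone provides):

```python
import hashlib

# Hypothetical pool of storage-account connection strings (illustrative).
ACCOUNTS = [
    "DefaultEndpointsProtocol=https;AccountName=shard0;AccountKey=...",
    "DefaultEndpointsProtocol=https;AccountName=shard1;AccountKey=...",
    "DefaultEndpointsProtocol=https;AccountName=shard2;AccountKey=...",
]

def account_for_stream(stream_id: str) -> str:
    """Deterministically pick a storage account for a stream.

    Total IOPS scales with the number of accounts, but each
    individual stream still lives in a single partition and is
    bound by that partition's throughput limit.
    """
    digest = hashlib.sha256(stream_id.encode("utf-8")).digest()
    return ACCOUNTS[digest[0] % len(ACCOUNTS)]
```

The same stream id always maps to the same account, so all reads and writes for one stream stay on one partition, which is what keeps the per-stream limit in place.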
Gor Rustamyan
@Soarc
@SvenVandenbrande @yevhen just merged a pull request, which adds netstandard2.0 and netstandard1.6 support.
Yevhen Bobrov
@yevhen
Good news, citizens!
that's really cool PR. Huge props to Gor!
I'll push on nuget today and update readme
Gor Rustamyan
@Soarc
Nice :) Thank you.
Yevhen Bobrov
@yevhen
Streamstone 2.0 is on NuGet. Built for netstandard 1.6 and 2.0 targets
Alexander Prooks
@aprooks
Nice!
ardumez
@ardumez
Hi
how can I simply modify an event if I forgot a field in the past?
Jakub Konecki
@jkonecki
If you're talking about updating already stored events then you can just query and update table storage directly.
ardumez
@ardumez
with Azure Table API or StreamStone ?
Jakub Konecki
@jkonecki
Azure Table API. I don't think SS has api for reading events by type
You would have to read ALL events from all streams
ardumez
@ardumez
ok thanks
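The direct fix Jakub describes could be sketched like this: a pure helper that backfills the missing property, with the actual Table API calls (via the `azure-data-tables` Python SDK) shown as comments, since the table name, filter, and field are all assumptions for illustration:

```python
def backfill_field(entity: dict, field: str, default) -> dict:
    """Return a copy of a stored event entity with a missing
    property filled in; a no-op copy if the field already exists,
    so re-running the migration is idempotent."""
    if field not in entity:
        return {**entity, field: default}
    return dict(entity)

# Illustrative usage against live Table storage (connection string,
# table name, event type, and field name are all hypothetical):
#
# from azure.data.tables import TableClient, UpdateMode
# client = TableClient.from_connection_string(conn_str, table_name="Streams")
# for entity in client.query_entities("Type eq 'OrderPlaced'"):
#     patched = backfill_field(entity, "Currency", "USD")
#     client.update_entity(patched, mode=UpdateMode.MERGE)
```

As noted above, there is no per-type index, so the query ends up scanning all event entities; running it against a copy of the table first is a good idea.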
Matej Hertis
@mhertis
Is there a possibility for Streamstone to insert only one record (SS-UID) for duplicate event detection when inserting a batch of events?
Yevhen Bobrov
@yevhen
Something like CommitId in NEventStore?
I don’t think it’s a good idea, at least not by default
What’s the use case and what enables this option in your scenario? Do you already receive events in stable batches?
Matej Hertis
@mhertis
we have few (but frequent) commands that produce 3 to 5 events
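The idea being asked about can be sketched in pure Python, with a dict standing in for the table and a single dedup row per commit (the `SS-COMMIT-` row key and the whole scheme are hypothetical, named after NEventStore's CommitId as Yevhen suggests; Streamstone's actual scheme writes one SS-UID row per event):

```python
class DuplicateCommit(Exception):
    """Raised when the same batch is written twice."""

def write_batch(table: dict, stream: str, commit_id: str, events: list) -> None:
    """Insert a batch of events guarded by one dedup row.

    A retry of the same commit conflicts on the single SS-COMMIT
    row instead of needing one dedup row per event.
    """
    dedup_key = (stream, f"SS-COMMIT-{commit_id}")
    if dedup_key in table:
        raise DuplicateCommit(commit_id)
    # Next sequence number = count of existing event rows in this stream.
    start = sum(1 for (s, k) in table if s == stream and k.startswith("event-"))
    batch = {dedup_key: {"commit": commit_id}}
    for offset, event in enumerate(events):
        batch[(stream, f"event-{start + offset:06d}")] = event
    table.update(batch)  # stands in for one atomic entity group transaction
```

The trade-off Yevhen hints at: this only detects duplicates when events always arrive re-grouped into the exact same batches, which is why it isn't a good default.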
Eduardo
@EduardoSantosGit
Hello, does anyone have a project or example code implementing projections with StreamStone?
There is a link unavailable on github: https://github.com/yevhen/Streamstone/blob/master
Yevhen Bobrov
@yevhen
@EduardoSantosGit what kind of guidance are you looking for? There is no global stream in Streamstone as you can find in other server-based event stores like GetEventStore. There are many factors in play which will make implementing projections with SS either trivial or complex: cardinality (1-N, 1-1, N-1, N-N), frequency of updates, consistency requirements, projection storage capabilities.
in general, the most trivial 1-1 projection could be implemented with polling and storing of the last applied event sequence number (i.e. stream version) along with your projection - hopefully in the same transaction. Otherwise make sure that projection updates are idempotent.
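That polling loop can be sketched as follows (the fold in `apply` is a placeholder for real projection logic; `read_stream` stands in for reading events after a given stream version):

```python
def apply(projection: dict, event: dict) -> dict:
    # Illustrative fold: keep a running total. Real projection logic goes here.
    projection = dict(projection)
    projection["total"] = projection.get("total", 0) + event.get("amount", 0)
    return projection

def apply_new_events(read_stream, projection: dict) -> dict:
    """Poll a stream and fold unseen events into the projection.

    The last applied stream version is stored inside the projection
    itself, so saving both together (ideally in one transaction)
    keeps them in sync; re-polling is then a no-op, i.e. idempotent.
    """
    version = projection.get("version", 0)
    for seq, event in read_stream(version):
        projection = apply(projection, event)
        projection = {**projection, "version": seq}
    return projection
```

Because the version travels with the projection, a crash between poll and save just replays the same events on the next poll instead of losing or double-applying them.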
Yevhen Bobrov
@yevhen
Has anyone here successfully connected to the local CosmosDb emulator using the Azure SDK Table API? What should the connection string look like?
Jakub Konecki
@jkonecki
I would assume the standard CosmosDb connection string should be used. For the emulator it's https://localhost:8081 if installed using default settings
Yevhen Bobrov
@yevhen
Does it work for you? I've tried instructions on this page (section Authenticating Requests) but it doesn't work
according to announcement here I can use existing azure storage sdk and just change connection string
CloudStorageAccount.Parse("AccountName=localhost:8082;AccountKey=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==")
no luck ((
Jakub Konecki
@jkonecki
I haven't used table api - sorry
Are you sure 8082 should be used?
Yevhen Bobrov
@yevhen
ye, I'm running it on this port
Jakub Konecki
@jkonecki
Sorry, I won't be of much help
Yevhen Bobrov
@yevhen
this thing is broken
I'm not even sure if anyone is actually using CosmosDB beyond curiosity
the number of downloads say it all
497 downloads in total. WoW!
Jakub Konecki
@jkonecki
I'm using it for development and it works fine. I use the DocumentDB API though
Yevhen Bobrov
@yevhen
DocumentDb is fine. It has a separate client and connection string format. But the Table API is done via the azure table sdk (CloudStorageAccount/CloudTable) and I can't find a way to make it work with the local emulator
also, the package above is not .NET Core compatible. Which is kinda strange since it was announced like 2 weeks ago
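For reference, the classic local storage emulator (and later Azurite) answers the Table API at the well-known `devstoreaccount1` account below; whether the Cosmos DB Table API emulator accepts the same shape of connection string is exactly what's unresolved in this thread, so the endpoint and port are assumptions, not a confirmed fix:

```
DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;
```

(The account name and key here are the publicly documented well-known development credentials, not secrets.)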
ardumez
@ardumez
hi
why is the snapshot included with another event and not alone? Is there an example of how to replay a snapshot in a domain class?
ardumez
@ardumez
Is there an example with the event log? Because it's recommended for building projections
Yevhen Bobrov
@yevhen
@ardumez to keep the snapshot completely in sync with the last event - ACID.
ardumez
@ardumez
ok thanks
Kai
@2moveit
Hi, what is the approximate throughput (events/sec) when reading from a stream? My second question is what's the best way to serialize the data? Right now deserialization seems to be the bottleneck, especially as my events are F# discriminated unions.
Yevhen Bobrov
@yevhen
@2moveit SS doesn't make any assumptions about serialization, you can choose whatever serialization protocol you want
regarding throughput - it's 2000 entities per second per partition, as per Azure table performance targets
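A quick back-of-the-envelope check against that target (the 2,000 entities/sec figure is Azure's documented per-partition scalability target; real numbers vary with entity size and latency, so this is a lower bound, not a benchmark):

```python
def read_time_seconds(event_count: int, entities_per_sec: int = 2000) -> float:
    """Lower-bound time to scan a stream that lives in one partition,
    given Azure Table storage's per-partition throughput target."""
    return event_count / entities_per_sec

# A 10,000-event stream takes at least ~5 seconds just to fetch,
# before any deserialization cost - which is why the serializer
# (e.g. for F# discriminated unions) can dominate on top of that.
```

If reads are already near the partition target, speeding up deserialization (or snapshotting to shorten the replay) is where the remaining time goes.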