Robert Friberg
@rofr
it won't help a late-arriving record that gets inserted before other records because it has an older timestamp
Alan Hemmings
@goblinfactory
you mean problems like this ? cockroachdb/cockroach#13808
this sounds like ... if you can invent a protocol that could work around drift, you wouldn't have solved a big database problem, ...you'd have solved a big server clock problem, an even bigger market in terms of patent value?
Robert Friberg
@rofr
The problem remains even with perfectly synchronized clocks.
Alan Hemmings
@goblinfactory
if there was a workaround for database folk, then the server clock manufacturers would use the same technique to improve their clocks.
Robert Friberg
@rofr
Memstate is not at all sensitive to the system time
Alan Hemmings
@goblinfactory
ok, I'll take the bait..."why does the problem still remain with perfectly synchronized clocks?" :D
Robert Friberg
@rofr
Latency.
Alan Hemmings
@goblinfactory
only if we're back to server generating the IDs
I may have misunderstood, ... your original suggestion that started this discussion was that we use reverse time ticks on clients, and add in a client ID?
Robert Friberg
@rofr
Well, if a node reads the tail and a late-arriving command is then inserted, the read has a missing record
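The missing-record problem Robert describes can be sketched like this (a minimal illustration with hypothetical types, not the memstate API): a reader that tracks the newest timestamp it has consumed will never see a record that arrives later with an older timestamp.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: a journal ordered by timestamp, and a reader that remembers
// the newest timestamp it has already consumed.
public static class TailReaderDemo
{
    // Records visible to a reader that has consumed everything up to lastSeen.
    public static List<string> ReadAfter(List<(long Ts, string Cmd)> journal, long lastSeen)
        => journal.Where(r => r.Ts > lastSeen).Select(r => r.Cmd).ToList();

    public static void Main()
    {
        var journal = new List<(long Ts, string Cmd)> { (1, "A"), (2, "B"), (4, "D") };
        long lastSeen = 4; // the reader has consumed up to ts=4

        // A late-arriving command is inserted with an older timestamp...
        journal.Add((3, "C"));

        // ...so reading "everything newer than lastSeen" silently skips C.
        Console.WriteLine(ReadAfter(journal, lastSeen).Count); // prints 0
    }
}
```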
Alan Hemmings
@goblinfactory
so is that a non starter then? in order for it to be remotely possible, what building blocks are needed?
how have other tech offered what you'd like to see in memstate?
you have this already with cassandra right?
Robert Friberg
@rofr
Seems like a showstopper, yes. Anyway, I found an event sourcing implementation :)
Not cassandra
postgres, sqlstreamstore, eventstore are the main ones. Working on a provider for Pravega
Cosmos might work...
Alan Hemmings
@goblinfactory
I have that on my trello board for domanium to investigate
can't remember how I found it.
Robert Friberg
@rofr
and the answer is....
drum roll.....
Alan Hemmings
@goblinfactory
Ah, just read the homepage docs and remember where I saw it... I had googled table storage limitations, and it came up in top results for the limitations
Robert Friberg
@rofr
optimistic concurrency!
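The idea can be shown in a few lines (a generic sketch, not Streamstone's actual API): a writer states the stream version it last read, and the store rejects the append if another writer got there first.

```csharp
using System;
using System.Collections.Generic;

// Minimal optimistic-concurrency append: the writer names the version it
// last read, and the append is rejected if the stream has moved on.
public class EventStream
{
    readonly List<string> _events = new List<string>();
    public int Version => _events.Count;

    public bool TryAppend(int expectedVersion, string evt)
    {
        if (expectedVersion != Version) return false; // a concurrent writer won the race
        _events.Add(evt);
        return true;
    }
}

public static class OptimisticDemo
{
    public static void Main()
    {
        var s = new EventStream();
        int v = s.Version;                        // two writers both read version 0
        Console.WriteLine(s.TryAppend(v, "e1"));  // True  -- first append succeeds
        Console.WriteLine(s.TryAppend(v, "e2"));  // False -- second must re-read and retry
    }
}
```

The losing writer re-reads the stream (catching up on any records it missed) and retries, which is what makes the approach safe without synchronized clocks.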
Alan Hemmings
@goblinfactory
don't you still need to sort out the clock problem?
Robert Friberg
@rofr
Let's just build it on top of Streamstone, that will be simple
Alan Hemmings
@goblinfactory
tbh ... I don't care how it's sorted, I know it's complicated, and solutions involve complicated stuff like gossip protocols, ...stuff I really don't want to care about.
streamstone looks great, from what I saw.
ok... will throw something together tomorrow and compare that with single node, and see how they run...
assuming I don't ignore the compiler warnings ...
(groan!)
this will be fun :D
has been fun, except for... you know... the ...er... ignoring compiler warning thing.
Robert Friberg
@rofr
On top of StreamStone you won't need the BatchingJournalWriter, it already does batching
Alan Hemmings
@goblinfactory
Hi Rob, going to have to postpone the eval of Streamstone + memstate until this weekend. The big issue is not performance, but cost. Streamstone doesn't appear to support Azure Table storage; it only supports Azure CosmosDb CloudTable, which is similar but quite different cost-wise.
I also probably won't be able to use it myself, since I need a solution that works with Azure functions, serverless style, that's pay as you go, with zero minimum monthly cost.
Alan Hemmings
@goblinfactory
Hi Robert,
so ...done a bit of experimenting (not a lot), and as far as I can see, Streamstone doesn't solve any problems I have. Originally I thought it would help solve the problem of ...what happens if azure decides to create a second instance of an azure function without me expecting it to, despite any attribute or json or configuration requiring a single partition or any other trick to try to force an azure function into some kind of singleton
azure doesn't guarantee singleton instances; it guarantees a single instance will be processing a message at a time.
partitioning seems to be less about http-triggered azure functions and much more about message triggers
Alan Hemmings
@goblinfactory
hi @rofr , is await DisposeAsync on the engine safe to call more than once? i.e. is it thread-safe? And is it safe to wrap this in something like

    private static Engine<AccountModel> _engine;
    private static bool _stopped;

    // static constructor for Azure function class Test1
    static Test1()
    {
        // safety net in case the engine was not stopped cleanly
        AppDomain.CurrentDomain.ProcessExit += (s, e) =>
        {
            if (!_stopped)
            {
                _engine.DisposeAsync().GetAwaiter().GetResult();
            }
        };
    }
Robert Friberg
@rofr
yes, thread safe and calling more than once has no effect
Albert
@albertyfwu

Hi @rofr. Recently I was looking into ways to hold a large singleton aggregate in memory but still have reasonable persistence to DB/file/etc. in order to recover state. I eventually got into event sourcing, Prevayler, and finally, memstate.

When I look at the github page, there appears to be infrequent development, and it's still in alpha/beta stage. Could you give some idea on how stable the current release is and what your intentions are for further development/support?

In addition, I have a question about using Postgres as the persistence. Suppose in the write-ahead approach, memstate attempts to persist a command C1 to Postgres. For whatever reason, the server persists C1, but we experience a client-side timeout. At this point, the in-memory state will be behind that of the persistence. If we then receive a command C2, that will persist to Postgres and then operate on the stale in-memory state and possibly result in a different state than when the commands are later replayed. Is there some mechanism in memstate to detect this kind of out-of-sync behavior (e.g. using event versioning) and recover transparently?

Robert Friberg
@rofr

Hi @albertyfwu
I'll take the easy question first..

In addition, I have a question about using Postgres as the persistence. Suppose in the write-ahead approach, memstate attempts to persist a command C1 to Postgres. For whatever reason, the server persists C1, but we experience a client-side timeout. At this point, the in-memory state will be behind that of the persistence. If we then receive a command C2, that will persist to Postgres and then operate on the stale in-memory state and possibly result in a different state than when the commands are later replayed. Is there some mechanism in memstate to detect this kind of out-of-sync behavior (e.g. using event versioning) and recover transparently?

JournalRecords have a RecordNumber and by default these are required to have an unbroken sequence. In the scenario above, the engine will throw an exception when receiving C2 if it has not yet seen C1.
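The unbroken-sequence check Robert describes can be sketched as follows (assumed semantics for illustration, not the actual memstate Engine internals): each record applied to the in-memory model must be exactly one past the previous one, so a record arriving after a gap raises an exception instead of silently corrupting state.

```csharp
using System;

// Sketch of unbroken-sequence checking on journal record numbers.
public class GapDetector
{
    long _last; // record number of the last applied record (0 = none yet)

    public void Apply(long recordNumber)
    {
        if (recordNumber != _last + 1)
            throw new InvalidOperationException(
                $"journal gap: expected {_last + 1}, got {recordNumber}");
        _last = recordNumber;
        // ...apply the command to the in-memory model here...
    }
}

public static class GapDemo
{
    public static void Main()
    {
        var d = new GapDetector();
        d.Apply(1);                        // C1 seen and applied
        try { d.Apply(3); }                // a later command arrives, but record 2 was never seen
        catch (InvalidOperationException) { Console.WriteLine("gap detected"); }
    }
}
```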

Robert Friberg
@rofr

When I look at the github page, there appears to be infrequent development, and it's still in alpha/beta stage. Could you give some idea on how stable the current release is and what your intentions are for further development/support?

We are running memstate in production for several systems using EventStore or SqlStreamStore for storage. The core features are solid using either of these storage providers. There are some loose ends that need to be addressed before a 1.0 release though. Event subscriptions over a remote connection are not working for example.
PS: don't use the standalone Postgres provider, we may drop it altogether in favor of SqlStreamStore which has support for MySql, MSSQL and Postgres.

Albert
@albertyfwu

Thanks for the quick response. I'm still unclear on when this discrepancy is resolved. After C1 is written to the DB (but unknown to the engine), how does the engine know to throw an exception when receiving C2? At that time, it wouldn't have learned that C1 was successfully written to DB yet. Is there some syncing that's happening out-of-band between C1 and C2? Or is there some synchronous synchronization happening to resolve discrepancies between engine in-memory and DB at the time it persists C2 to DB? Let me know if I need to explain this question more clearly.

Another question I have is:
Albert
@albertyfwu

In my particular use case, I have one single big aggregate (instead of many instances).

I know that memstate persists/applies commands sequentially, but I actually need the aggregate "locked" before the "commands" are issued, since I'm doing something more like "event" rather than "command" sourcing.

My workflow is:

  • Lock the single big aggregate.
  • Validate the command.
  • If success, create an event, persist it, and apply it.
  • Unlock the single big aggregate.

It seems that memstate supports something like this instead:

  • Validate the request.
  • If success, create a command.
  • Lock the aggregate.
  • Persist the command and apply it.
  • Unlock the aggregate.

To me, it seems that I'd need to handle my own locking (before validation) before handing off to memstate. Let me know if you see how this can be handled more natively by memstate. Thanks.
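The external locking described above could be sketched like this (a minimal example; Validate and PersistAndApply are hypothetical placeholders, not memstate APIs): serialize the whole validate → persist → apply pipeline behind one semaphore so the aggregate is locked before validation runs.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch: one gate serializing validation and the hand-off for a single
// big aggregate, matching the lock-before-validate workflow.
public static class SingleAggregateGate
{
    static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    public static async Task<bool> HandleAsync(string request)
    {
        await Gate.WaitAsync();                    // 1. lock the single big aggregate
        try
        {
            if (!Validate(request)) return false;  // 2. validate the command
            PersistAndApply(request);              // 3. create, persist, and apply the event
            return true;
        }
        finally
        {
            Gate.Release();                        // 4. unlock the aggregate
        }
    }

    static bool Validate(string request) => !string.IsNullOrEmpty(request);
    static void PersistAndApply(string evt) { /* hand the event off to memstate here */ }

    public static async Task Main()
    {
        Console.WriteLine(await HandleAsync("deposit")); // True
        Console.WriteLine(await HandleAsync(""));        // False
    }
}
```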