These are chat archives for atomix/atomix
No, that's what it's built for! It writes changes to disk, but state is held in memory so reads can be served more quickly. The disk is needed for strong consistency.
DistributedMap is a strongly consistent, in-memory key-value store with a Java Map-style API.
For scalability, you have to partition the cluster. That feature is not in Atomix right now, but it's been done successfully and will eventually make its way back in.
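Partitioning here just means splitting the key space across several replica groups so writes scale beyond a single Raft quorum. A minimal, generic sketch of hash-based partition assignment (this is not an Atomix API; `Partitioner` and its methods are illustrative names only):

```java
import java.util.List;

// Generic sketch of hash partitioning: each key maps to one of N
// partitions, and each partition would be served by its own replica group.
public class Partitioner {
    private final int partitions;

    public Partitioner(int partitions) {
        this.partitions = partitions;
    }

    // Map a key to a partition by hashing; Math.floorMod keeps the
    // result non-negative even when hashCode() is negative.
    public int partitionFor(String key) {
        return Math.floorMod(key.hashCode(), partitions);
    }

    public static void main(String[] args) {
        Partitioner p = new Partitioner(3);
        for (String key : List.of("druid", "copycat", "atomix")) {
            System.out.println(key + " -> partition " + p.partitionFor(key));
        }
    }
}
```

The same key always lands on the same partition, which is what lets clients route requests without coordination.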
@kuujo I'm not sure if you noticed the questions I posted earlier. Can you provide some insights?
(1) CopycatClient is bootstrapped with a list of Addresses. Is it possible to reset the bootstrap addresses on the same CopycatClient object after it has gone to the SUSPENDED/CLOSED state?
(2) When using the MEMORY storage level, does the MetaStore also use a MEMORY buffer, so that <term, vote> is written only to memory? If so, isn't that problematic if a server crashes and restarts very quickly while a Raft leader election is in progress? The node would forget that it had already voted in that term and might give its vote to some other candidate in the same term, no?
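The double-vote concern in (2) can be shown with a toy model. This is not Copycat's MetaStore, just a sketch of vote tracking where `<term, votedFor>` lives only in memory, so a crash-and-restart erases it:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of Raft vote tracking with memory-only storage.
public class VoteStore {
    // In-memory record of which candidate we voted for in each term.
    private Map<Long, String> votedFor = new HashMap<>();

    // Grant the vote if we haven't voted in this term, or if it's the
    // same candidate asking again (idempotent re-grant).
    public boolean requestVote(long term, String candidate) {
        return votedFor.putIfAbsent(term, candidate) == null
            || votedFor.get(term).equals(candidate);
    }

    // Simulate a crash with memory-only storage: all vote state is lost.
    public void crashAndRestart() {
        votedFor = new HashMap<>();
    }

    public static void main(String[] args) {
        VoteStore node = new VoteStore();
        System.out.println(node.requestVote(5, "A")); // true: first vote in term 5
        System.out.println(node.requestVote(5, "B")); // false: already voted for A
        node.crashAndRestart();
        System.out.println(node.requestVote(5, "B")); // true: state lost, double vote!
    }
}
```

The double vote in the last line is exactly why Raft requires the vote record to survive restarts; persisting only that small record (even if the log stays in memory) would avoid it.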
(3) For the MEMORY storage level, it seems JVM heap memory is used. In that case I can't see the benefit of having the JVM manage that memory; wouldn't it be better to use off-heap memory so that GC is not impacted by Copycat's buffers?
(4) Regarding Atomix DistributedGroup leader election: it appears that if a node becomes the group leader and then its AtomixClient goes to the SUSPENDED state (for whatever reason, say this particular node is partitioned off), that node will continue to think it is the group leader while the rest of the nodes in the group elect a new leader, no?
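The scenario in (4) is the classic stale-leader problem, and the usual mitigation is to fence on an election epoch: every action a leader takes carries the epoch it was elected in, and anything downstream rejects actions from an older epoch. A self-contained sketch (illustrative names, not an Atomix API):

```java
// Toy illustration of fencing out a stale leader: the partitioned node
// may still believe it leads, but its writes carry its old epoch and
// are rejected once a newer epoch has been seen.
public class FencedResource {
    private long highestEpochSeen = 0;

    // Accept a write only if it comes from the newest leader epoch.
    public boolean write(long leaderEpoch, String value) {
        if (leaderEpoch < highestEpochSeen) {
            return false; // stale leader: reject the write
        }
        highestEpochSeen = leaderEpoch;
        return true;
    }

    public static void main(String[] args) {
        FencedResource resource = new FencedResource();
        System.out.println(resource.write(1, "from old leader")); // true
        System.out.println(resource.write(2, "from new leader")); // true
        // The old leader, still thinking it leads, is rejected:
        System.out.println(resource.write(1, "stale write"));     // false
    }
}
```

So even if a SUSPENDED node briefly believes it is leader, fencing keeps it from doing any damage; whether the side effects of leadership can be fenced this way is application-specific.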
(5) If a CopycatClient on node-A is in the CONNECTED state, and then all of the Raft cluster's hosts go down and the whole cluster is replaced by another set of hosts, node-A's CopycatClient session goes to the UNSTABLE state and stays there indefinitely unless we restart the process on node-A. Is my understanding correct?
If so, it would make more sense for node-A's CopycatClient to give up after a while and close the session; the application could then take note of the client's CLOSED state, discover the Raft cluster nodes again (say from a VIP that always knows the right set of Raft cluster hosts), and "reset/reconnect" the CopycatClient (as asked in Q1 above) with the new cluster nodes.
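The recovery loop being proposed can be sketched in a few lines. This is not the Copycat API; the connection attempt is abstracted as a function, and the address lists are made up for illustration:

```java
import java.util.List;
import java.util.function.Function;

// Sketch of give-up-then-rediscover: try the current addresses a bounded
// number of times, and if that fails, fetch a fresh address list
// (e.g. from a VIP or service registry) and try again.
public class ReconnectLoop {
    // Try each address up to maxAttempts rounds; true on first success.
    static boolean tryConnect(List<String> addresses,
                              Function<String, Boolean> connect,
                              int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            for (String address : addresses) {
                if (connect.apply(address)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> oldCluster = List.of("10.0.0.1", "10.0.0.2");
        List<String> newCluster = List.of("10.0.1.1", "10.0.1.2");
        // The old cluster is gone; only the new hosts accept connections.
        Function<String, Boolean> connect = addr -> addr.startsWith("10.0.1.");

        if (!tryConnect(oldCluster, connect, 3)) {
            // Gave up: re-discover the cluster and reconnect.
            System.out.println("reconnected=" + tryConnect(newCluster, connect, 3));
        }
    }
}
```

The key design point is the bounded retry: without a give-up threshold the client spins on dead hosts forever, which is exactly the indefinite UNSTABLE state described above.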
As I said earlier, I'm looking to replace Curator/ZooKeeper in Druid with Copycat. Membership and leader election are very critical parts of a Druid cluster, and given the large Druid community and the scale of operations, we would like to understand things upfront so that life is easy later, with fewer (ideally no) surprises :)
It would be great if we could "meet" on a Hangout and talk in person. Thanks.