These are chat archives for atomix/atomix
Just about done with all the changes. Looking really good.
The only problem left is that the configuration for client nodes is pretty tedious. Really, a client node just doesn't store any partitions, so I'm wondering if the PartitionGroups should only have to be defined by the nodes that participate in them, e.g. if nodes a, b, and c are configured with partition group foo then they are the only nodes that replicate the group. All other nodes would then discover the existence of the groups via gossip. So, a "client" node would just be a node that's not configured with any partition groups.
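Under that proposal, a client node's config could shrink to just its identity and address, with no partition groups at all. A sketch (the key names are assumptions based on the example config elsewhere in this discussion):

```yaml
# Sketch of a "client" node under the proposal: no partition groups
# defined, so the node stores nothing and learns of groups via gossip.
# Key names (cluster, local-node, id, address) are assumptions.
cluster:
  local-node:
    id: client-1
    address: localhost:5678
# note: no partition group configuration at all
```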
Well there’s a “system” partition group that is always required by nodes that store data. The system partition group is used for storing information about primitives and electing primaries in the primary-backup protocol.
In addition to the system group (which is usually just one partition), additional Raft or primary-backup groups can be added. Raft groups have to be on persistent nodes, and primary-backup groups can be on any node.
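Putting the pieces above together, a data node's config might look something like this. This is only a sketch; the section and key names (management-group, partition-groups, etc.) are illustrative assumptions, not confirmed syntax:

```yaml
# Sketch: a node participating in the system (management) group plus one
# Raft group and one primary-backup group. All keys/names are assumptions.
cluster:
  local-node:
    id: a
    address: localhost:1234
management-group:         # the "system" partition group
  type: raft
  partitions: 1
  members: [a, b, c]
partition-groups:
  raft:                   # persistent; must live on persistent nodes
    type: raft
    partitions: 7
    members: [a, b, c]
  data:                   # in-memory; can live on any node
    type: primary-backup
    partitions: 32
```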
When a primitive is created, it's replicated within some partition group. To create a persistent Raft-replicated primitive, configure the primitive with a RaftProtocol that points to the desired Raft partition group. To create an in-memory primary-backup primitive, configure it with a MultiPrimaryProtocol that points to the desired primary-backup partition group. The configured PrimitiveProtocol is a client-level configuration (e.g. timeouts, retries, etc.) and is used to create a PrimitiveProxy for each partition in the indicated PartitionGroup. This is how primitives are decoupled from the replication protocol (Raft or primary-backup).
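In config form, binding primitives to groups through their protocol could look roughly like this. A sketch only; the primitives section, protocol type names, and group names are all assumptions for illustration:

```yaml
# Sketch: two primitives pointed at different partition groups via their
# configured protocol. Keys and names are assumptions.
primitives:
  persistent-map:
    type: map
    protocol:
      type: raft            # RaftProtocol -> a Raft partition group
      group: raft
  cache-map:
    type: map
    protocol:
      type: multi-primary   # MultiPrimaryProtocol -> a primary-backup group
      group: data
```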
One of those branches adds profiles, so rather than configuring partition groups you can just do:
cluster:
  local-node:
    id: foo
    address: localhost:1234
profiles:
  - consensus
  - data-grid
Those profiles configure a Raft partition group and a primary-backup partition group, along with a Raft system group if persistent nodes are defined.
Need to tune the failure detector parameters.
There are two types of nodes: persistent and ephemeral (the latter currently DATA). Persistent nodes will always be present in the configuration unless explicitly removed, but they may be activated/deactivated. Ephemeral nodes are removed from the cluster configuration when they become unavailable.
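A node's type could plausibly be declared alongside its identity; the "type" key and its values below are assumptions for illustration:

```yaml
# Sketch: declaring a node's type per member. The "type" key and its
# values are assumptions, not confirmed syntax.
cluster:
  local-node:
    id: foo
    address: localhost:1234
    type: persistent   # stays in the configuration even while down
    # an ephemeral node would instead use:
    # type: ephemeral  # dropped from the configuration when unreachable
```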
They should be exposed in ClusterConfig to make them tunable. Same goes for the messaging timeouts.
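Once exposed, the tunables might take a shape like the following; every key and value here is a hypothetical placeholder, not an existing option:

```yaml
# Hypothetical shape for failure-detector and messaging tunables
# surfaced via ClusterConfig. All keys and defaults are assumptions.
cluster:
  failure-detector:
    heartbeat-interval: 100ms
    failure-threshold: 10    # e.g. a phi-accrual style threshold
  messaging:
    connect-timeout: 10s
```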