These are chat archives for atomix/atomix
We are no longer monitoring this channel, please join Slack! https://join.slack.com/t/atomixio/shared_invite/enQtNDgzNjA5MjMyMDUxLTVmMThjZDcxZDE3ZmU4ZGYwZTc2MGJiYjVjMjFkOWMyNmVjYTc5YjExYTZiOWFjODlkYmE2MjNjYzZhNjU2MjY
Hey @jfim sorry I have been in meetings. I think you’re on the right track. Seems like that would make sense, but it actually wouldn’t be accurate, and would lose accuracy the further behind the leader you got. Copycat can exclude entries from replication depending on feedback from the state machine. For instance, in Atomix’s `DistributedMap` state machine, if clients submit two commands `put(foo, 123)` and `put(foo, 345)` one after the other, the leader will likely replicate `put(foo, 123)` and `put(foo, 345)` to a majority of the cluster in the same batch, and once both are committed, the map state machine on the leader will release `put(foo, 123)` since it no longer contributes to its state. Thereafter, when replicating entries to any additional nodes that are further behind, the leader will simply exclude `put(foo, 123)`, since it’s irrelevant to the committed state of the cluster, and that entry will ultimately be compacted from the log.
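That release-on-overwrite behavior can be sketched with a toy model (invented names for illustration, not the real Copycat API): each new `put` for a key releases the previous entry for that key, so a lagging follower only ever needs the live entries.

```java
import java.util.*;

// Toy model of incremental compaction: the latest put() for a key is the
// only entry for that key that still contributes to the map's state.
class CompactableLog {
    private final List<String[]> entries = new ArrayList<>(); // {key, value}; null = released
    private final Map<String, Integer> lastIndexForKey = new HashMap<>();

    void put(String key, String value) {
        Integer prev = lastIndexForKey.get(key);
        if (prev != null) {
            entries.set(prev, null); // release: superseded, no longer contributes
        }
        entries.add(new String[]{key, value});
        lastIndexForKey.put(key, entries.size() - 1);
    }

    // Live entries are all a follower that is far behind needs to receive.
    List<String[]> liveEntries() {
        List<String[]> live = new ArrayList<>();
        for (String[] e : entries) {
            if (e != null) live.add(e);
        }
        return live;
    }
}

public class Demo {
    public static void main(String[] args) {
        CompactableLog log = new CompactableLog();
        log.put("foo", "123");
        log.put("foo", "345");
        // Only put(foo, 345) is live; put(foo, 123) is excluded from replication.
        System.out.println(log.liveEntries().size()); // prints 1
    }
}
```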
If snapshotting is being used, the behavior changes a bit, but not much. For snapshotted state machines, when a snapshot is taken and stored, Copycat will release all `SNAPSHOTTABLE` entries up to that point in the log, and those entries will then be excluded from replication in the same manner. But even in that case, Copycat still releases internal entries (keep-alives, leader changes, etc.) in the manner described above. For instance, when the client with session `1` submits a keep-alive, all of its prior keep-alives are no longer needed, so they’re excluded from replication and eventually compacted from the log.
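The keep-alive case follows the same last-writer-wins pattern; a minimal sketch (again with invented names, not Copycat’s actual session machinery):

```java
import java.util.*;

// Toy model of keep-alive compaction: each session's newest keep-alive
// supersedes all of that session's earlier keep-alives.
class KeepAliveLog {
    private final List<Long> entries = new ArrayList<>(); // session id per entry; null = released
    private final Map<Long, Integer> lastKeepAlive = new HashMap<>();

    void keepAlive(long sessionId) {
        Integer prev = lastKeepAlive.get(sessionId);
        if (prev != null) {
            entries.set(prev, null); // release the superseded keep-alive
        }
        entries.add(sessionId);
        lastKeepAlive.put(sessionId, entries.size() - 1);
    }

    long liveCount() {
        return entries.stream().filter(Objects::nonNull).count();
    }
}

public class KeepAliveDemo {
    public static void main(String[] args) {
        KeepAliveLog log = new KeepAliveLog();
        log.keepAlive(1);
        log.keepAlive(2);
        log.keepAlive(1); // session 1's first keep-alive is now dead
        System.out.println(log.liveCount()); // prints 2
    }
}
```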
So, that’s all to say that because Copycat attempts to exclude entries from replication when they no longer contribute to the system’s state, it’s not very straightforward to determine how many entries behind the leader a follower is. In order to determine precisely how far a follower is behind a leader, you would actually need to count the number of live entries (those that will be replicated to that follower) in the leader’s log from the follower’s `matchIndex` up to the leader’s last log index. That calculation would be complicated a bit by the differences between tombstones and non-tombstones. It would simply be too expensive to track with any precision, but perhaps somewhat meaningful numbers could be derived from indexes.
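For concreteness, that live-entry count would look something like this hypothetical sketch (all names invented; it ignores the tombstone complication mentioned above):

```java
// Hypothetical lag estimate: a follower's "real" lag is the number of
// live entries between its matchIndex and the leader's last log index,
// since released (dead) entries are never replicated to it anyway.
class LagEstimator {
    // live[i] == true means the entry at index i still contributes to state.
    static int liveEntriesBehind(boolean[] live, int matchIndex, int lastIndex) {
        int behind = 0;
        for (int i = matchIndex + 1; i <= lastIndex; i++) {
            if (live[i]) behind++;
        }
        return behind;
    }
}

public class LagDemo {
    public static void main(String[] args) {
        // A 6-entry log where the entries at indexes 1 and 3 were released.
        boolean[] live = {true, false, true, false, true, true};
        // A raw index difference says a follower at matchIndex 1 is 4 behind,
        // but only 3 of those entries are live and will actually be sent.
        System.out.println(LagEstimator.liveEntriesBehind(live, 1, 5)); // prints 3
    }
}
```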
`StateMachine` with a nicer API, and they therefore also serve as real-world examples of state machines. The resource state machines can be found under the `*.state.*` packages throughout Atomix. Note, though, that most of the Atomix state machines use incremental log compaction, tracking the liveness of commits and releasing (`close`ing) commits that no longer contribute to the state machine’s state. But as I mentioned, Copycat optionally abstracts this away for `Snapshottable` state machines. Snapshotting is simpler to implement in a state machine, but tracking liveness allows for the replication optimizations I mentioned earlier. Of the examples in Atomix, the `DistributedLong` state machine implements snapshotting, and the rest use incremental compaction. The process of incremental compaction is not yet described completely on the website, but perhaps the plethora of examples makes up for the lack of documentation. Even where the docs are good now, the rest will continue to be developed through the full 1.0 release.
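To contrast the two approaches: a snapshotted state machine gets compaction almost for free, because once the state is captured, everything at or below the snapshot index can be released wholesale. A minimal sketch of the idea (invented names, not the real `Snapshottable` interface):

```java
// Toy snapshotting model: a counter whose whole state fits in one value.
// After a snapshot at index i, entries 0..i can be compacted wholesale.
class SnapshottedCounter {
    private long value;
    private long snapshotIndex = -1;

    void apply(long index, long delta) {
        value += delta;
    }

    // Capture the full state; all entries up to 'index' are now dead weight.
    long snapshot(long index) {
        snapshotIndex = index;
        return value;
    }

    // A fresh follower installs the snapshot instead of replaying old entries.
    void install(long snapshotValue) {
        value = snapshotValue;
    }

    long firstLiveIndex() { return snapshotIndex + 1; }
    long value() { return value; }
}

public class SnapshotDemo {
    public static void main(String[] args) {
        SnapshottedCounter leader = new SnapshottedCounter();
        leader.apply(0, 5);
        leader.apply(1, 7);
        long snap = leader.snapshot(1); // entries 0 and 1 can now be compacted

        SnapshottedCounter follower = new SnapshottedCounter();
        follower.install(snap); // no need to replicate the compacted entries
        System.out.println(follower.value()); // prints 12
    }
}
```

The trade-off is exactly the one described above: the snapshot is trivial to write, but the leader loses the per-entry liveness information that lets incremental compaction skip individual entries during replication.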