These are chat archives for atomix/atomix
We are no longer monitoring this channel, please join Slack! https://join.slack.com/t/atomixio/shared_invite/enQtNDgzNjA5MjMyMDUxLTVmMThjZDcxZDE3ZmU4ZGYwZTc2MGJiYjVjMjFkOWMyNmVjYTc5YjExYTZiOWFjODlkYmE2MjNjYzZhNjU2MjY
DistributedValue while Jepsen creates partitions in the network. Jepsen then checks to ensure that the responses received by clients represent a linearizable history. This bug made it possible for two different entries to get applied at the same logical time on different servers. Jepsen should have seen that inconsistency at some point in the tests after forcing a leader change and submitting reads to different servers. But I think we just haven't run them enough. They need to become part of the normal development workflow and be run very frequently to catch rare bugs like this.
As for queues, yeah, continually polling will be expensive. That certainly limits their usefulness. The problem is polling requires a write - since the state of the state machine would be modified by removing an item from the queue - which means a write to disk and replication of an entry. That's crazy. We've had implementations of queues that push to clients, but it has never been finalized because there are several ways to go about it, and a decision just hasn't been made on the right way.
Currently, the closest thing Atomix has to an efficient queue is using the messaging feature in
DistributedGroup. That messaging does not require any polling - messages are pushed to clients via the events system - so one write to add a message and one write to remove a message is all that's required. But I don't believe that particular API satisfies all potential use cases. I think we do want to implement a standalone work queue resource or something like that.
The questions around that just have to do largely with ordering guarantees, how to handle multiple consumers, etc. I think if ordering is no concern then it should be easy to implement.
If we can decide on the particular use case and API then implementing an efficient queue that doesn't require any polling should be trivial.
DistributedQueue for now. Using session events it would be much, much more efficient. The question is, do consumers need to process queue items in sequential order? If so, that may mean pushing an item to a client and waiting for an ack before distributing the next item.
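That ack-gated ordering idea can be sketched in plain Java. This only models the pattern; `SequentialDispatcher`, `add`, and `ack` are hypothetical names, not Atomix APIs, and the `Consumer` stands in for pushing to a client via `session.publish`:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Hypothetical sketch: a dispatcher that preserves sequential order by
// holding back the next item until the consumer acks the one in flight.
public class SequentialDispatcher {
    private final Queue<String> items = new ArrayDeque<>();
    private final Consumer<String> push; // stands in for session.publish
    private boolean inFlight = false;

    public SequentialDispatcher(Consumer<String> push) {
        this.push = push;
    }

    // Called when an item is added to the queue state.
    public synchronized void add(String item) {
        items.add(item);
        dispatchNext();
    }

    // Called when the consumer acknowledges the in-flight item.
    public synchronized void ack() {
        inFlight = false;
        dispatchNext();
    }

    private void dispatchNext() {
        if (!inFlight && !items.isEmpty()) {
            inFlight = true;
            push.accept(items.poll()); // only one item in flight at a time
        }
    }
}
```

The trade-off is throughput: with one item in flight per consumer, the queue serializes delivery, which is exactly why ordering matters to the API decision.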
Have at it. Basically all we need is:
QueueState to add/remove those listeners
session.publish("item") when an item is added
item event listener in the
The easiest implementation would just be doing that so the client gets notified when an item is added to the queue. Then clients can poll to get the item. There are all sorts of problems with this though. If a client polls a queue, the item is removed before the client receives the response. If the client gets disconnected after an item is removed from the queue then the item is lost. So, this is basically at-most-once processing, whereas using acks would get at-least-once processing
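A minimal sketch of that notify-then-poll design, in plain Java rather than the real Atomix server API (the `QueueState` name matches the recipe above, but the methods here are illustrative):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;

// Hypothetical sketch of the notify-then-poll design: the state machine
// publishes an "item" event when something is added, and clients respond
// by polling. Listener registration stands in for subscribed sessions.
public class QueueState {
    private final Queue<String> items = new ArrayDeque<>();
    private final List<Consumer<String>> listeners = new ArrayList<>();

    // Register a listener (stands in for a client session's event handler).
    public void addListener(Consumer<String> listener) {
        listeners.add(listener);
    }

    // Adding an item notifies every listener, like session.publish("item").
    public void add(String item) {
        items.add(item);
        for (Consumer<String> l : listeners) {
            l.accept("item"); // event name only; the client must still poll
        }
    }

    // Polling removes the item before the client has it in hand: if the
    // response is lost, so is the item -> at-most-once processing.
    public String poll() {
        return items.poll();
    }
}
```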
DistributedQueue first. I actually already have a lot of that code written and it will be in the next release
Yep... that all sounds good and is simple to do in Atomix. This should be similar to how the
DistributedGroup queues work internally now. When a message is stored, it's published to one or all of the clients via
session.publish. If the client's session expires before an ack is received, it's re-sent to another session and so on. Session expirations work the same as ephemeral nodes in ZooKeeper, but the benefit is that the state machine (the servers) is the coordinator, rather than a client needing to coordinate using low-level primitives like ephemeral nodes and watches. That means less coordination between clients and servers. When a client's session expires the state machine can decide what to do. The state machine always has a fully consistent view of the state of clients as well, meaning the state machine can select a consumer to send a message to based on how many messages are in transit to/being processed by all consumers. It seems to me doing this in ZooKeeper requires a lot more coordination between client and server, e.g. elect a leader to control all consumers, set ephemeral nodes and watches, and when an ephemeral node is lost send an event to the coordinator, which then has to communicate with the servers to decide what to do with messages that weren't fully processed. If two coordinators manage to exist at the same time - which is impossible to prevent - they could conflict with each other. In Atomix that can all be done in the state machine with no additional communication and a guaranteed single system view. This seems to be a totally different way of thinking about solving these types of problems. All the consensus services I see seem to prefer providing primitives like ZooKeeper's and allowing clients to use them to coordinate state changes. Atomix is a different paradigm in that almost all coordination for each of the patterns is done entirely inside the consensus algorithm.
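That redelivery scheme can be sketched in plain Java. This is not the Atomix/Copycat API - `WorkQueueState` and its methods are hypothetical names - but it shows the idea: the state machine tracks which session holds each unacked message, picks the least-loaded consumer, and reassigns messages when a session expires:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the in-state-machine coordination described above.
public class WorkQueueState {
    private final List<String> sessions = new ArrayList<>();
    private final Map<String, List<String>> inFlight = new HashMap<>();

    public void register(String sessionId) {
        sessions.add(sessionId);
        inFlight.put(sessionId, new ArrayList<>());
    }

    // Assign to the consumer with the fewest messages in transit; the real
    // state machine would also push the message via session.publish here.
    public String assign(String message) {
        String target = sessions.stream()
            .min(Comparator.comparingInt(s -> inFlight.get(s).size()))
            .orElseThrow(IllegalStateException::new); // assumes a live session
        inFlight.get(target).add(message);
        return target;
    }

    public void ack(String sessionId, String message) {
        inFlight.get(sessionId).remove(message);
    }

    // On expiry, unacked messages move to surviving sessions: at-least-once
    // delivery with no client-side coordinator or ephemeral-node watching.
    public void expire(String sessionId) {
        sessions.remove(sessionId);
        for (String m : inFlight.remove(sessionId)) {
            assign(m);
        }
    }

    public int load(String sessionId) {
        return inFlight.getOrDefault(sessionId, Collections.emptyList()).size();
    }
}
```

Because all of this runs inside the replicated state machine, there is exactly one consistent view of session liveness and message assignment - the conflicting-coordinators problem described above can't arise.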
Good point on failed work queues. That's really helpful. I think we can integrate these ideas into a sort of work queue resource.
Yeah... I do need to make a contributor guide. We just got a critical fix into Copycat yesterday and several other bug fixes in Copycat and Atomix, so I've been planning on releasing them ASAP. But I do want to make sure the added resource events are in the release since people have been asking for them for a long time, and this is part of that. I have some changes around here somewhere that should make the addition of these events really straightforward. I think I need to get those merged, then it should be simple to add queue events - a few lines of code and tests really.
My release process is: release it when it feels right, which in general is when someone from ONOS requests a release, a major bug fix is merged, a major feature is done, or enough minor bug fixes are done. All of those conditions have been met this time.