These are chat archives for atomix/atomix

7th Apr 2017
Johno Crawford
@johnou
Apr 07 2017 08:09
hey all, so i'm checking out http://atomix.io/atomix/api/latest/io/atomix/collections/DistributedMap.Options.html. If I set withLocalCache to true, is that essentially a replicating distributed map? Or will it only cache lookups on the stack?
tofflos
@tofflos
Apr 07 2017 17:36
I looked into the performance numbers. I created a simple class NoopCommand with no fields and a NoopStateMachine which only executes commit.close(). I connected a single client to a single server configured with netty-transport and memory-storage. From the client I then ran a tight single-threaded loop creating and submitting NoopCommands. I got roughly 5000 commands per second.
I ran the same test with a three-server cluster and a single client and got roughly 1000 commands per second.
I wonder if the client was connected to the LEADER... HMm...
tofflos
@tofflos
Apr 07 2017 17:42
ServerSelectionStrategies.LEADER enabled. Let's see how it does now...
It's about the same. My computer isn't breaking a sweat in CPU, memory, disk and network utilization.
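(For reference, the client was presumably built roughly like this; a sketch assuming Copycat 1.x's CopycatClient.builder()/connect(Address...) API with the withServerSelectionStrategy builder option:)

    CopycatClient c1 = CopycatClient.builder()
            .withTransport(NettyTransport.builder().build())
            .withServerSelectionStrategy(ServerSelectionStrategies.LEADER)
            .build();
    // connect to the same cluster address(es) the server was started with
    c1.connect(address).join();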
tofflos
@tofflos
Apr 07 2017 17:49
    CopycatServer s1 = CopycatServer.builder(address)
            .withName("s1")
            .withStateMachine(NoopStateMachine::new)
            .withStorage(Storage.builder().withStorageLevel(StorageLevel.MEMORY).build())
            .withTransport(NettyTransport.builder().build())
            .build();

    // c1 is the CopycatClient connected to the cluster; each submit blocks until the command is committed
    for (int n = 0; n < 100000; n++) {
        c1.submit(new NoopCommand()).join();
    }

    // a command with no fields; the state machine does nothing with it
    public class NoopCommand implements Command<Object> {
    }

    public class NoopStateMachine extends StateMachine {

        public void create(Commit<NoopCommand> commit) {
            // release the commit immediately so it can be compacted
            commit.close();
        }
    }

Jordan Halterman
@kuujo
Apr 07 2017 20:40
@johnou when caching is enabled in the map, what it does is listen for map events and update a local map. That map is only read when ReadConsistency.LOCAL is used. Technically, it should probably just use ReadConsistency.SEQUENTIAL since events are still sequentially consistent
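(Roughly how that looks from the API side; a sketch that assumes a getMap overload accepting DistributedMap.Options and a resource-level with(ReadConsistency) setter, so treat the exact calls as illustrative:)

    // create the map with a local cache that is kept up to date from map events
    DistributedMap<String, String> map = atomix
            .<String, String>getMap("my-map", new DistributedMap.Options().withLocalCache(true))
            .join();

    // only LOCAL reads are served from the cached copy; other consistency levels query the cluster
    map.with(ReadConsistency.LOCAL);
    map.get("foo").thenAccept(System.out::println);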
Jordan Halterman
@kuujo
Apr 07 2017 20:46
@tofflos you’re getting 1k/sec because those are blocking commands (submit(…).join()). Each command has to go to the leader and get replicated to followers and committed before a response. So, basically you’re getting 1000 * (a request to the leader, plus an AppendRequest and response to the follower, plus a disk write and sync on the leader and follower, plus a response) / sec. The only real parallelism that’s possible is in flushing to disk on the leader and follower. The protocol is not able to optimize such blocking operations from a single client. But when multiple clients are submitting blocking commands, or when one client is submitting concurrent commands, it can batch and pipeline commits under high load. If 1000 commands are submitted concurrently, then you’ll likely get a few AppendRequests with multiple commands, and flushing to disk occurs for batches rather than for each command. In a real world system, typically many clients are submitting various operations to the cluster concurrently, even if individual clients are blocking, and the leader can batch and pipeline those operations.
most of the ZooKeeper benchmarks, for example, use 30 clients
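(To illustrate, a sketch of the concurrent variant of the earlier loop, reusing the same c1 client and NoopCommand; submit() already returns a CompletableFuture, so the only change is waiting once at the end instead of per command:)

    // java.util.List, java.util.ArrayList and java.util.concurrent.CompletableFuture imports assumed
    List<CompletableFuture<Object>> futures = new ArrayList<>();
    for (int n = 0; n < 1000; n++) {
        futures.add(c1.submit(new NoopCommand()));   // no join() per command
    }
    // wait for the whole batch; the leader is free to batch and pipeline the appends
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();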
tofflos
@tofflos
Apr 07 2017 20:49
Interesting. I'll play around with some threads. :)