These are chat archives for atomix/atomix
FOLLOWERS selection strategy, it will prefer to connect to and communicate with followers to scale reads
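A minimal sketch of what a follower-preferring selection strategy does, assuming illustrative `Member`/`Role` types rather than the actual Atomix API: given the known cluster members, it orders followers ahead of the leader so reads land on followers when any exist.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: prefer followers for read connections.
// Member and Role are illustrative names, not Atomix's real classes.
public class FollowerSelector {
    enum Role { LEADER, FOLLOWER }

    record Member(String address, Role role) {}

    // Returns the followers if any exist; otherwise falls back to
    // the full member list (which may contain only the leader).
    static List<Member> preferFollowers(List<Member> members) {
        List<Member> followers = members.stream()
            .filter(m -> m.role() == Role.FOLLOWER)
            .collect(Collectors.toList());
        return followers.isEmpty() ? members : followers;
    }

    public static void main(String[] args) {
        List<Member> cluster = List.of(
            new Member("node1:5000", Role.LEADER),
            new Member("node2:5000", Role.FOLLOWER),
            new Member("node3:5000", Role.FOLLOWER));
        // Only the two followers are candidates for reads.
        System.out.println(preferFollowers(cluster).size()); // prints 2
    }
}
```

Spreading reads across followers is what lets read throughput scale with cluster size, at the cost of potentially staler reads than leader-routed ones.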
AppendRequest ends up sending only a single entry, and that's far more overhead than batching alone.
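A back-of-the-envelope sketch of why single-entry requests cost so much, assuming a hypothetical fixed per-request overhead (the `HEADER_BYTES` constant and class names are illustrative, not Atomix internals): with a batch size of 1, every entry pays the full request overhead on its own.

```java
// Illustrative model of batching amortization, not Atomix's real code.
public class AppendBatcher {
    static final int HEADER_BYTES = 64; // assumed fixed per-request overhead

    // Total bytes on the wire for `entries` entries of `entryBytes` each,
    // grouped into requests of at most `batchSize` entries.
    static int bytesOnWire(int entries, int entryBytes, int batchSize) {
        int requests = (entries + batchSize - 1) / batchSize; // ceiling division
        return requests * HEADER_BYTES + entries * entryBytes;
    }

    public static void main(String[] args) {
        // 1000 entries of 100 bytes each:
        System.out.println(bytesOnWire(1000, 100, 1));   // 164000 (one request per entry)
        System.out.println(bytesOnWire(1000, 100, 100)); // 100640 (batches of 100)
    }
}
```

The payload bytes are identical in both cases; batching only shrinks the number of requests, which is exactly the overhead the message is describing.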
x on the underlying object. Typically, there's no overhead to that type of thing, as Java's compiler will basically make it a direct method call
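A sketch of the delegation pattern being discussed, with made-up `Wrapper`/`Underlying` names: a thin wrapper that only forwards a call to the underlying object. After warmup, HotSpot's JIT typically inlines such monomorphic forwarding calls, so the extra hop costs essentially nothing.

```java
// Illustrative delegation: Wrapper.get() just forwards to the delegate.
public class Delegation {
    interface Counter { int get(); }

    static final class Underlying implements Counter {
        private final int value;
        Underlying(int value) { this.value = value; }
        public int get() { return value; }
    }

    // Pure forwarding call: a prime candidate for JIT inlining.
    static final class Wrapper implements Counter {
        private final Counter delegate;
        Wrapper(Counter delegate) { this.delegate = delegate; }
        public int get() { return delegate.get(); }
    }

    public static void main(String[] args) {
        Counter c = new Wrapper(new Underlying(42));
        System.out.println(c.get()); // prints 42
    }
}
```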
DISK logs. I also experimented a bit with an alternative approach - buffering writes externally and flushing only the committed writes on commit - but that proved slower than just flushing
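A minimal sketch of the alternative the message describes trying (and finding slower): appended entries sit in an external buffer, and on commit only the committed prefix is flushed to the log. All names here (`CommitBuffer`, the in-memory `disk` list standing in for the log file) are illustrative, not Atomix's storage implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative buffer-then-flush-on-commit log, not Atomix's real storage.
public class CommitBuffer {
    private final List<String> buffer = new ArrayList<>(); // uncommitted appends
    private final List<String> disk = new ArrayList<>();   // stands in for the log file

    void append(String entry) {
        buffer.add(entry); // no flush yet; entry is only buffered
    }

    // On commit, flush only the first `count` (committed) entries.
    void commit(int count) {
        List<String> committed = buffer.subList(0, count);
        disk.addAll(committed);
        committed.clear(); // subList view: removes the flushed prefix from buffer
    }

    int persisted() { return disk.size(); }
    int pending() { return buffer.size(); }

    public static void main(String[] args) {
        CommitBuffer log = new CommitBuffer();
        log.append("a");
        log.append("b");
        log.append("c");
        log.commit(2); // "a" and "b" are committed and flushed; "c" still pending
        System.out.println(log.persisted() + " " + log.pending()); // prints "2 1"
    }
}
```

The appeal of the scheme is that uncommitted entries never touch disk; the chat's finding is that in practice the extra copying made it slower than simply flushing the DISK log directly.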
MAPPED segments in my tests. Pipelining, too, proved fruitless: I was unable to get pipelining to outperform straight batches because of the significantly reduced batch sizes in pipelining.