These are chat archives for atomix/atomix

18th Nov 2016
Mikhail
@middlesphere
Nov 18 2016 19:44
Hi, there! If I run two Atomix instances on the same host but with different ports, can they share the same folder? (The .withDirectory parameter has the same value.)
Jordan Halterman
@kuujo
Nov 18 2016 20:08
No, they can't. This probably needs to be better documented, but the storage directory has to be totally unique to a server. If you run a cluster on one machine and all the nodes use the same folder, they'll just start using each other's state and you'll see some strange behavior. When the first node starts it will be fine, but when the second node starts it will think it's recovering from disk and use the first node's configuration. So, if you did that with two separate clusters, they'd probably just end up getting merged into one, since the nodes rely on the configuration in that folder to know where their peers are.
Hmm... you may be able to do it by assigning different names to each cluster. I don't recall for sure; let me check. I think the name may be used in log/configuration file names. But they'll never actually use the same log files.
So, yeah. What you can do is assign different names to servers in each cluster so they use different prefixes if you really want to use the same directory, but there's not any real advantage to that aside from vanity.
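The safe setup described above is one storage directory per server. A minimal sketch of deriving a unique directory per co-hosted instance, keyed by port; the `storageDir` helper and the `atomix-data` base path are just illustrations, not part of the Atomix API (the actual builder call is the `.withDirectory` configuration mentioned in the question):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class StorageDirs {
    // Derive a unique storage directory per server from its port,
    // so two instances on the same host never share log files.
    static Path storageDir(String baseDir, int port) {
        return Paths.get(baseDir, "node-" + port);
    }

    public static void main(String[] args) throws IOException {
        Path dirA = storageDir("atomix-data", 5000);
        Path dirB = storageDir("atomix-data", 5001);
        Files.createDirectories(dirA);
        Files.createDirectories(dirB);
        // Each server would then be configured with its own directory,
        // e.g. via the .withDirectory(...) storage setting.
        System.out.println(dirA);
        System.out.println(dirB);
    }
}
```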
Jordan Halterman
@kuujo
Nov 18 2016 20:14
The name defaults to copycat, so you'll get copycat-1-1.log and copycat-12345678.snapshot and things like that. Setting the name just changes the prefix. Atomix uses that to set the prefix to atomix.
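The naming scheme above can be sketched with a hypothetical helper; the exact segment/version/timestamp fields are assumptions based only on the example file names quoted in the chat:

```java
public class LogNames {
    // Hypothetical helpers mirroring the prefix scheme described above:
    // <name>-<segment>-<version>.log and <name>-<timestamp>.snapshot
    static String segmentFile(String name, long segment, long version) {
        return name + "-" + segment + "-" + version + ".log";
    }

    static String snapshotFile(String name, long timestamp) {
        return name + "-" + timestamp + ".snapshot";
    }

    public static void main(String[] args) {
        // Default prefix "copycat" vs. the "atomix" prefix set by Atomix.
        System.out.println(segmentFile("copycat", 1, 1));       // copycat-1-1.log
        System.out.println(snapshotFile("atomix", 12345678L));  // atomix-12345678.snapshot
    }
}
```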
Mikhail
@middlesphere
Nov 18 2016 20:36
@kuujo Thanks, Jordan! I will put the second instance in another folder.
Is there any info about using Atomix in production? I really like Atomix, so I'm introducing one of my services, based on Atomix, to my managers on Monday. And I think they will ask me about production success stories.
Jordan Halterman
@kuujo
Nov 18 2016 20:48
Atomix is used in ONOS and has been used for monitoring in some very large Hadoop clusters, but I don't like to oversell software and TBH we're currently still working through some issues that have been exposed by those large use cases. Those issues haven't been inside the Raft implementation, which seems to be doing great. They're related to the more complex aspects of recovering state on clients after network partitions cause sessions to expire. Those issues pertain to more complex resources that maintain client-side state. We're also still working on improving Jepsen tests for those more complex use cases.
Mikhail
@middlesphere
Nov 18 2016 20:59
@kuujo Thanks again. I hope those issues don't apply to a small cluster (3 to 5 nodes).