These are chat archives for atomix/atomix

29th Jul 2018
Jordan Halterman
@kuujo
Jul 29 2018 01:12
@wanghhao did you ever try my suggestion?
Johno Crawford
@johnou
Jul 29 2018 09:10
@kuujo looks like a bug with Java 10 and 11
PrimaryBackupDistributedMultisetTest>DistributedMultisetTest.testMultisetViews:83 expected:<3> but was:<6>
[ERROR]  RaftDistributedMultisetTest>DistributedMultisetTest.testMultisetViews:83 expected:<3> but was:<6>
Jordan Halterman
@kuujo
Jul 29 2018 09:46
strange
those are stream counts
is that line 83 or 84?
Jordan Halterman
@kuujo
Jul 29 2018 10:09
@johnou
man adding that Raft storage lock really exposed some issues with tests
Johno Crawford
@johnou
Jul 29 2018 10:43
@kuujo yeah just saw your follow up
also what's the probability that the tests across different builds are joining the same cluster with multicast?
Jordan Halterman
@kuujo
Jul 29 2018 10:44
they should be run in containers or something right?
Johno Crawford
@johnou
Jul 29 2018 10:44
yeah that's what i'm thinking..
Jordan Halterman
@kuujo
Jul 29 2018 10:48
I was actually just mentioning you to ask about that Java 10 issue though ^^^
Johno Crawford
@johnou
Jul 29 2018 10:49
ah yeah that's what i was doing
Jordan Halterman
@kuujo
Jul 29 2018 10:49
gonna take a nap soon
[ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 23.289 s <<< FAILURE! - in io.atomix.core.multiset.RaftDistributedMultisetTest
[ERROR] testMultisetViews(io.atomix.core.multiset.RaftDistributedMultisetTest) Time elapsed: 2.459 s <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<6>
83
the first assert
assertEquals(3, multiset.entrySet().stream().count());
Jordan Halterman
@kuujo
Jul 29 2018 10:50
damn
Johno Crawford
@johnou
Jul 29 2018 10:51
it's possible there's a bug in the jdk
that would suck
ah atomix doesn't build on java 11
import sun.misc.Unsafe;
Jordan Halterman
@kuujo
Jul 29 2018 10:59
yeah
Unsafe is actually not used any more so we can remove it
Johno Crawford
@johnou
Jul 29 2018 11:01
the new byte buffer cleaner uses it but supports Java 11
it tries an indirect reference first, then falls back to the cleaner method added directly in later JREs
so you want to remove all the memory utils?
you don't use PooledDirectAllocator / PooledHeapAllocator in ONOS?
Jordan Halterman
@kuujo
Jul 29 2018 17:25
No
Johno Crawford
@johnou
Jul 29 2018 17:55
Cool, I removed all the unused utility classes that rely on Unsafe and found some bugs along the way
Now the entire Atomix project compiles on Java 11
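For reference, the version-adaptive cleaner pattern discussed above can be sketched roughly like this. This is a hypothetical stand-in, not Atomix's actual cleaner class: it reflects on `sun.misc.Unsafe.invokeCleaner` (added in Java 9) and falls back to the old `DirectBuffer.cleaner()` path on Java 8.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

// Toy sketch of a version-adaptive direct-buffer cleaner (NOT Atomix code):
// prefer Unsafe.invokeCleaner (Java 9+), fall back to the legacy
// DirectBuffer.cleaner().clean() path on Java 8.
public class BufferCleaner {
  private static final Object UNSAFE;
  private static final Method INVOKE_CLEANER;

  static {
    Object unsafe = null;
    Method invokeCleaner = null;
    try {
      Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
      Field f = unsafeClass.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      unsafe = f.get(null);
      // This method only exists on Java 9 and later.
      invokeCleaner = unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
    } catch (ReflectiveOperationException e) {
      // Java 8: no invokeCleaner; free() uses the old cleaner path instead.
    }
    UNSAFE = unsafe;
    INVOKE_CLEANER = invokeCleaner;
  }

  /** Frees a direct buffer's native memory; returns false for heap buffers. */
  public static boolean free(ByteBuffer buffer) throws Exception {
    if (!buffer.isDirect()) {
      return false; // heap buffers are reclaimed by the GC
    }
    if (INVOKE_CLEANER != null) {
      INVOKE_CLEANER.invoke(UNSAFE, buffer);       // Java 9+ path
      return true;
    }
    // Java 8 path: DirectByteBuffer.cleaner().clean() via reflection.
    Method cleanerMethod = buffer.getClass().getMethod("cleaner");
    cleanerMethod.setAccessible(true);
    Object cleaner = cleanerMethod.invoke(buffer);
    cleaner.getClass().getMethod("clean").invoke(cleaner);
    return true;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(free(ByteBuffer.allocateDirect(1024))); // true
  }
}
```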
Alexander Richter
@alex-richter
Jul 29 2018 20:04
Hey guys, I have a question regarding the storage mechanism of replicated logs. In my current projects, I will have an Atomix cluster running for months without restarting, collecting plenty of data every day. I need this data to be 100% consistent across all nodes (which is why I plan to use Raft as the replication mechanism). However, I only need to access the data of the last 24 hours. So I was wondering: is there a way for Atomix to get rid of logs that are older than that, to save drive space and memory? Because if I ever need to restart a node, the startup would take a long time, since all log entries (most of which are not even needed anymore) would have to be replayed.
Johno Crawford
@johnou
Jul 29 2018 21:16
Sounds like you might need to tweak the log compaction properties
Alexander Richter
@alex-richter
Jul 29 2018 21:33
Any tips on where to start there?
Jordan Halterman
@kuujo
Jul 29 2018 21:35
The logs are always compacted based on size rather than time. Snapshots are taken of all primitives in a partition and old logs are deleted once the log rolls over to a new segment as long as the partition is not under high load (the Raft servers track the frequency of writes and avoid the costly compaction operation if a lot is happening). So, all you have to do is configure the Raft partition group’s segment size.
In the RaftPartitionGroup builder that is
Default is 64MB
Jordan Halterman
@kuujo
Jul 29 2018 21:40
In general, Raft logs will stay between 1x and 2x the segment size
Unless you’re running benchmarks or something
The rest of the disk usage will be the snapshots of the primitive state machines, in which case size is dependent on the amount of data stored
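As a toy illustration of the size-based scheme described above (this is a simplified model, not Atomix code): entries accumulate in fixed-size segments, and on rollover everything older than the most recently sealed segment is folded into a snapshot, so the live log stays between 1x and 2x the segment size while the state machine's state survives intact. In the real 3.x API the knob would be the RaftPartitionGroup builder's segment-size setting (`withSegmentSize`, if I read the builder right).

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of size-based log compaction (NOT Atomix code): the log keeps the
// current segment plus the most recently sealed one; older segments are folded
// into a cumulative snapshot. Live log size therefore stays between 1x and 2x
// the segment size, while the overall state is preserved.
public class CompactingLog {
  final int segmentSize;                            // entries per segment
  final List<List<Integer>> segments = new ArrayList<>();
  long snapshot = 0;                                // state compacted away so far

  CompactingLog(int segmentSize) {
    this.segmentSize = segmentSize;
    segments.add(new ArrayList<>());
  }

  void append(int entry) {
    List<Integer> current = segments.get(segments.size() - 1);
    if (current.size() == segmentSize) {            // roll over to a new segment
      segments.add(current = new ArrayList<>());
      while (segments.size() > 2) {                 // compact all but the newest
        for (int e : segments.remove(0)) {          // sealed segment into the
          snapshot += e;                            // snapshot
        }
      }
    }
    current.add(entry);
  }

  long liveEntries() {                              // entries still on "disk"
    return segments.stream().mapToLong(List::size).sum();
  }

  long state() {                                    // snapshot + replayed entries
    long s = snapshot;
    for (List<Integer> seg : segments) {
      for (int e : seg) s += e;
    }
    return s;
  }

  public static void main(String[] args) {
    CompactingLog log = new CompactingLog(64);      // "segment size" of 64 entries
    long expected = 0;
    for (int i = 1; i <= 1000; i++) {
      log.append(i);
      expected += i;
    }
    System.out.println(log.liveEntries() <= 2 * 64);  // true: bounded by 2x
    System.out.println(log.state() == expected);      // true: nothing lost
  }
}
```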
Alexander Richter
@alex-richter
Jul 29 2018 21:59
Sounds good... so if i get that right, Atomix will - once the log files exceed the segment size - take the logs, figure out the end state that it is in after all the entries have been applied in sequential order, and then start a new segment with the end state of the previous one and throw the old segment away?
Jordan Halterman
@kuujo
Jul 29 2018 23:19
Correct.
And replaces the segment with .snapshot files, which will be used to restore the end state of the segment before replaying logs
And actually it’s the end state of all logs since the beginning of time. Takes a snapshot after segment 1 representing the state in segment 1. Then takes a snapshot after segment 2 representing the state of segments 1 and 2, and so on.
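A miniature sketch of that restart path (toy code, not Atomix): each snapshot is cumulative, covering every segment sealed so far, so recovery loads the latest snapshot and replays only the segments written after it, and the result matches a full replay from the beginning of the log.

```java
import java.util.ArrayList;
import java.util.List;

// Toy restart sketch (NOT Atomix code): snapshot i represents the state of
// segments 1..i, so recovery = latest snapshot + replay of later segments.
public class SnapshotRecovery {
  // Apply one log entry to the state machine; here state is a running sum.
  static long apply(long state, int entry) {
    return state + entry;
  }

  public static void main(String[] args) {
    int segmentSize = 4;
    List<List<Integer>> allSegments = new ArrayList<>();
    long snapshot = 0;       // latest ".snapshot" (cumulative state)
    int snapshotIndex = 0;   // number of segments the snapshot covers

    // Write three sealed segments plus a partial fourth, taking a cumulative
    // snapshot each time a segment is sealed.
    int value = 1;
    for (int seg = 0; seg < 4; seg++) {
      List<Integer> segment = new ArrayList<>();
      int len = (seg < 3) ? segmentSize : 2;        // last segment still open
      for (int i = 0; i < len; i++) segment.add(value++);
      allSegments.add(segment);
      if (len == segmentSize) {                     // sealed: snapshot covers 1..seg
        for (int e : segment) snapshot = apply(snapshot, e);
        snapshotIndex = seg + 1;
      }
    }

    // Full replay from the beginning of time.
    long fullReplay = 0;
    for (List<Integer> seg : allSegments)
      for (int e : seg) fullReplay = apply(fullReplay, e);

    // Restart path: latest snapshot, then replay only the segments after it.
    long restored = snapshot;
    for (int seg = snapshotIndex; seg < allSegments.size(); seg++)
      for (int e : allSegments.get(seg)) restored = apply(restored, e);

    System.out.println(restored == fullReplay);     // true
  }
}
```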
Johno Crawford
@johnou
Jul 29 2018 23:27
@kuujo shall I merge that cleaner branch in or you still need to go over it again?
Jordan Halterman
@kuujo
Jul 29 2018 23:44
Go for it