These are chat archives for atomix/atomix

16th Nov 2018
william.z
@zwillim
Nov 16 2018 07:43

I re-run the election when a LeaderElection's state changes, like this:

        election.addStateChangeListener(state -> {
            LOGGER.warn("election state changed, current state is: " + state);
            if (state == PrimitiveState.EXPIRED) {
                election.run(atomix.getMembershipService().getLocalMember().id());
                return;
            }
            if (state == PrimitiveState.SUSPENDED) {
                election.run(atomix.getMembershipService().getLocalMember().id());
                return;
            }
            if (state == PrimitiveState.CONNECTED) {
                election.run(atomix.getMembershipService().getLocalMember().id());
                return;
            }
        });

I'm not sure if this is OK, because I get an error like this:

2018-11-14 18:50:26 [ERROR] [io.atomix.utils.concurrent.ThreadPoolContext.lambda$new$0(ThreadPoolContext.java:83) raft-partition-group-data-5] __|An uncaught exception occurred
io.atomix.primitive.PrimitiveException$Timeout
        at io.atomix.core.election.impl.BlockingLeaderElection.complete(BlockingLeaderElection.java:109)
        at io.atomix.core.election.impl.BlockingLeaderElection.run(BlockingLeaderElection.java:49)
        at cn.ac.iie.di.ban.data.exchange.runner.server.DERunnerServer.lambda$initGroup$2(DERunnerServer.java:270)
        at io.atomix.primitive.proxy.impl.DefaultProxyClient.lambda$onStateChange$8(DefaultProxyClient.java:180)
        at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:891)
        at java.util.concurrent.CopyOnWriteArraySet.forEach(CopyOnWriteArraySet.java:404)
        at io.atomix.primitive.proxy.impl.DefaultProxyClient.onStateChange(DefaultProxyClient.java:180)
        at io.atomix.primitive.proxy.impl.DefaultProxyClient.lambda$null$0(DefaultProxyClient.java:75)
        at io.atomix.primitive.session.impl.BlockingAwareSessionClient.lambda$null$0(BlockingAwareSessionClient.java:50)
        at io.atomix.utils.concurrent.ThreadPoolContext.lambda$new$0(ThreadPoolContext.java:81)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

The frame at cn.ac.iie.di.ban.data.exchange.runner.server.DERunnerServer.lambda$initGroup$2(DERunnerServer.java:270) is just here:

    if (state == PrimitiveState.SUSPENDED) {
        election.run(atomix.getMembershipService().getLocalMember().id());  // <- DERunnerServer.java:270
        return;
    }
I'm using version 3.0.7
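
For reference: the PrimitiveException$Timeout comes from the blocking run(...) call, which (per the stack trace) is made on the primitive's state-change thread, likely while the underlying session is still SUSPENDED/EXPIRED and cannot complete the operation in time. A minimal sketch of one alternative, assuming the async counterpart exposed by election.async() (the exact signature is an assumption, not confirmed against 3.0.7), which never blocks the listener thread and only re-enters the election once the proxy reports CONNECTED:

    import io.atomix.cluster.MemberId;
    import io.atomix.core.Atomix;
    import io.atomix.core.election.LeaderElection;
    import io.atomix.primitive.PrimitiveState;

    // Hypothetical helper, not from the chat: re-enter the election without blocking
    // the state-change callback. Re-running is deferred until the proxy reports
    // CONNECTED again; run(...) returning a CompletableFuture is assumed here.
    public final class ElectionRecovery {
      public static void register(Atomix atomix, LeaderElection<MemberId> election) {
        MemberId localId = atomix.getMembershipService().getLocalMember().id();
        election.addStateChangeListener(state -> {
          if (state == PrimitiveState.CONNECTED) {
            election.async().run(localId).whenComplete((leadership, error) -> {
              if (error != null) {
                // best effort: the next CONNECTED transition will try again
                System.err.println("re-running election failed: " + error);
              }
            });
          }
        });
      }
    }

Whether SUSPENDED and EXPIRED need their own handling depends on how the session recovers; reacting only to CONNECTED keeps the run() call against a live session, which is what the blocking variant appears to time out waiting for.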
Wayne Hunter
@incogniro
Nov 16 2018 08:53
We're using Guice in the project too. I'll see what happens once it's removed. Thanks
Wayne Hunter
@incogniro
Nov 16 2018 12:40
I updated my JDK and enabled trace. My issue appears to be with Netty during the atomix.start() call.
2018-11-16 12:14:49 DEBUG NettyMessagingService:262 - Failed to initialize native (epoll) transport. Reason: failed to load the required native library. Proceeding with nio.
2018-11-16 12:14:49 DEBUG NioEventLoop:76 - -Dio.netty.noKeySetOptimization: false
2018-11-16 12:14:49 DEBUG NioEventLoop:76 - -Dio.netty.selectorAutoRebuildThreshold: 512
2018-11-16 12:14:49 DEBUG PlatformDependent:71 - Platform: MacOS
2018-11-16 12:14:49 DEBUG PlatformDependent0:76 - -Dio.netty.noUnsafe: false
2018-11-16 12:14:49 DEBUG PlatformDependent0:76 - Java version: 11
2018-11-16 12:14:49 DEBUG PlatformDependent0:71 - sun.misc.Unsafe.theUnsafe: available
2018-11-16 12:14:49 DEBUG PlatformDependent0:71 - sun.misc.Unsafe.copyMemory: available
2018-11-16 12:14:49 DEBUG PlatformDependent0:71 - java.nio.Buffer.address: available
2018-11-16 12:14:49 DEBUG PlatformDependent0:91 - direct buffer constructor: unavailable
java.lang.UnsupportedOperationException: Reflective setAccessible(true) disabled
    at io.netty.util.internal.ReflectionUtil.trySetAccessible(ReflectionUtil.java:31)
    at io.netty.util.internal.PlatformDependent0$4.run(PlatformDependent0.java:224)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:218)
    at io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:208)
    at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:79)
    at io.netty.channel.nio.NioEventLoop.newTaskQueue(NioEventLoop.java:258)
    at io.netty.util.concurrent.SingleThreadEventExecutor.<init>(SingleThreadEventExecutor.java:165)
    at io.netty.channel.SingleThreadEventLoop.<init>(SingleThreadEventLoop.java:58)
    at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:141)
    at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:127)
    at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:36)
    at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84)
    at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58)
    at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:47)
    at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:77)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:72)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:59)
    at io.atomix.cluster.messaging.impl.NettyMessagingService.initEventLoopGroup(NettyMessagingService.java:265)
    at io.atomix.cluster.messaging.impl.NettyMessagingService.start(NettyMessagingService.java:166)
    at io.atomix.cluster.AtomixCluster.startServices(AtomixCluster.java:300)
    at io.atomix.core.Atomix.startServices(Atomix.java:852)
    at io.atomix.cluster.AtomixCluster.start(AtomixCluster.java:293)
    at io.atomix.core.Atomix.start(Atomix.java:840)
2018-11-16 12:14:49 DEBUG PlatformDependent0:81 - java.nio.Bits.unaligned: unavailable true
java.lang.UnsupportedOperationException: Reflective setAccessible(true) disabled
    at io.netty.util.internal.ReflectionUtil.trySetAccessible(ReflectionUtil.java:31)
    at io.netty.util.internal.PlatformDependent0$5.run(PlatformDependent0.java:273)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:266)
...
Continued
2018-11-16 12:14:49 DEBUG ResourceLeakDetector:81 - -Dio.netty.leakDetection.level: simple
2018-11-16 12:14:49 DEBUG ResourceLeakDetector:81 - -Dio.netty.leakDetection.targetRecords: 4
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.numHeapArenas: 16
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.numDirectArenas: 16
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.pageSize: 8192
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.maxOrder: 11
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.chunkSize: 16777216
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.tinyCacheSize: 512
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.smallCacheSize: 256
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.normalCacheSize: 64
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.cacheTrimInterval: 8192
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.useCacheForAllThreads: true
2018-11-16 12:14:49 DEBUG InternalThreadLocalMap:76 - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2018-11-16 12:14:49 DEBUG InternalThreadLocalMap:76 - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2018-11-16 12:14:49 DEBUG DefaultChannelId:76 - -Dio.netty.processId: 10112 (auto-detected)
2018-11-16 12:14:49 DEBUG NetUtil:76 - -Djava.net.preferIPv4Stack: false
2018-11-16 12:14:49 DEBUG NetUtil:76 - -Djava.net.preferIPv6Addresses: false
2018-11-16 12:14:49 DEBUG NetUtil:86 - Loopback interface: lo0 (lo0, 0:0:0:0:0:0:0:1%lo0)
2018-11-16 12:14:49 DEBUG NetUtil:81 - Failed to get SOMAXCONN from sysctl and file /proc/sys/net/core/somaxconn. Default: 128
2018-11-16 12:14:49 DEBUG DefaultChannelId:76 - -Dio.netty.machineId: ac:de:48:ff:fe:00:11:22 (auto-detected)
2018-11-16 12:14:49 DEBUG ByteBufUtil:76 - -Dio.netty.allocator.type: pooled
2018-11-16 12:14:49 DEBUG ByteBufUtil:76 - -Dio.netty.threadLocalDirectBufferSize: 0
2018-11-16 12:14:49 DEBUG ByteBufUtil:76 - -Dio.netty.maxThreadLocalCharBufferSize: 16384
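
For reference: these Netty entries are DEBUG-level platform probing on Java 11. The messaging service falls back to NIO ("Proceeding with nio"), and the setAccessible exceptions are caught and logged by Netty itself, so they are usually harmless and not necessarily what makes start() fail. A minimal sketch, assuming the Atomix 3.0.x builder API and that io.netty.tryReflectionSetAccessible is the property behind the "Reflective setAccessible(true) disabled" message (its Java 9+ default appears to be false); member id and address are placeholders, not values from the chat:

    import io.atomix.core.Atomix;

    // Hypothetical startup sketch, assuming the 3.0.x builder API.
    public final class StartNode {
      public static void main(String[] args) {
        // Assumed to gate Netty's reflective setAccessible calls on Java 9+ (see
        // ReflectionUtil.trySetAccessible in the trace); must be set before any
        // Netty class is initialized, i.e. before the Atomix instance is built.
        System.setProperty("io.netty.tryReflectionSetAccessible", "true");

        Atomix atomix = Atomix.builder()
            .withMemberId("member-1")        // placeholder values
            .withAddress("localhost:5679")
            .build();

        // This is the call that reaches NettyMessagingService.start() in the trace above.
        atomix.start().join();
      }
    }

If start() still fails after this, the root cause is more likely further down in the trace than in these platform-probe messages.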