Wayne Hunter
@incogniro
Continued
2018-11-16 12:14:49 DEBUG ResourceLeakDetector:81 - -Dio.netty.leakDetection.level: simple
2018-11-16 12:14:49 DEBUG ResourceLeakDetector:81 - -Dio.netty.leakDetection.targetRecords: 4
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.numHeapArenas: 16
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.numDirectArenas: 16
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.pageSize: 8192
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.maxOrder: 11
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.chunkSize: 16777216
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.tinyCacheSize: 512
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.smallCacheSize: 256
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.normalCacheSize: 64
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.cacheTrimInterval: 8192
2018-11-16 12:14:49 DEBUG PooledByteBufAllocator:76 - -Dio.netty.allocator.useCacheForAllThreads: true
2018-11-16 12:14:49 DEBUG InternalThreadLocalMap:76 - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
2018-11-16 12:14:49 DEBUG InternalThreadLocalMap:76 - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
2018-11-16 12:14:49 DEBUG DefaultChannelId:76 - -Dio.netty.processId: 10112 (auto-detected)
2018-11-16 12:14:49 DEBUG NetUtil:76 - -Djava.net.preferIPv4Stack: false
2018-11-16 12:14:49 DEBUG NetUtil:76 - -Djava.net.preferIPv6Addresses: false
2018-11-16 12:14:49 DEBUG NetUtil:86 - Loopback interface: lo0 (lo0, 0:0:0:0:0:0:0:1%lo0)
2018-11-16 12:14:49 DEBUG NetUtil:81 - Failed to get SOMAXCONN from sysctl and file /proc/sys/net/core/somaxconn. Default: 128
2018-11-16 12:14:49 DEBUG DefaultChannelId:76 - -Dio.netty.machineId: ac:de:48:ff:fe:00:11:22 (auto-detected)
2018-11-16 12:14:49 DEBUG ByteBufUtil:76 - -Dio.netty.allocator.type: pooled
2018-11-16 12:14:49 DEBUG ByteBufUtil:76 - -Dio.netty.threadLocalDirectBufferSize: 0
2018-11-16 12:14:49 DEBUG ByteBufUtil:76 - -Dio.netty.maxThreadLocalCharBufferSize: 16384
azthec
@azthec
Hello, when using a Raft partition group with an AtomicMap, I sometimes get an io.atomix.primitive.PrimitiveException$Timeout error when adding values to the map while I manually kill nodes. Would someone be willing to help me with this? I don't think it warrants a GitHub issue.
LQ-Gov
@LQ-Gov
Hi, how can I listen for leader node change events? I can't find it on the website. My Atomix version is 3.0.8.
There is a way to get Raft partition leaders, but those are just informative methods. The (strongly) recommended way to do any sort of leader election is to use the leader election primitives, which provide events, support for multiple leaders, support for leader transfer, prioritization, etc.
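For illustration, a minimal sketch of listening for leadership changes via the LeaderElection primitive (the election name, candidate value, and protocol settings are assumptions for the example, not from this thread):

import io.atomix.core.election.LeaderElection;
import io.atomix.protocols.raft.MultiRaftProtocol;

// Build a leader election primitive backed by the Raft partition group.
LeaderElection<String> election = atomix.<String>leaderElectionBuilder("my-election")
    .withProtocol(MultiRaftProtocol.builder().build())
    .build();

// Enter the election with this node's member ID as the candidate.
election.run(atomix.getMembershipService().getLocalMember().id().id());

// React to every leadership change as it happens.
election.addListener(event -> System.out.println("New leadership: " + event.newLeadership()));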
Junbo Ruan
@aruanruan
@kuujo when multicast is not configured, the 'group' property in MulticastConfig may be null in some environments,
and NettyBroadcastService.Builder will then throw a NullPointerException.
Jordan Halterman
@kuujo
Got a fix for it? Submit a PR?
miniarak
@miniarak
Hi! I am trying to build an Atomix cluster with 3 nodes. Two of them start fine, but the last one raises an exception: io.atomix.storage.StorageException: Failed to acquire storage lock; ensure each Raft server is configured with a distinct storage directory. Can someone explain what causes this error?
Jordan Halterman
@kuujo
@miniarak according to the message, they're all trying to store data in the same folder on the same node. If you're running them on the same node, then you have to configure the storageDirectory for Raft partitions to be different for each node.
Jordan Halterman
@kuujo
In code it’s .withStorageDirectory on RaftPartitionGroup builders, and in configuration it’s storage.directory
Otherwise they’re all trying to get a lock on the same directory to write the same log files. Of course, in a real cluster with multiple nodes you don’t have this problem.
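For example, a minimal sketch (the member IDs and path scheme are illustrative assumptions):

import io.atomix.protocols.raft.partition.RaftPartitionGroup;
import java.io.File;

// Give each node its own storage path, e.g. keyed by its member ID,
// so the Raft servers never contend for the same lock file.
String localId = "member1"; // assumed: this process's unique member ID
RaftPartitionGroup group = RaftPartitionGroup.builder("raft")
    .withMembers("member1", "member2", "member3")
    .withNumPartitions(1)
    .withStorageDirectory(new File("/var/lib/atomix/" + localId))
    .build();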
Also, please join the Slack workspace! Soon I will just start responding to questions with a link to Slack to try to force these threads to move over there.
miniarak
@miniarak
@kuujo Thank you!...And how do I join the Slack workspace?
Jordan Halterman
@kuujo
There’s a link above or in the README
LQ-Gov
@LQ-Gov
@kuujo sir, I feel the custom primitive support is somewhat over-designed. Config, builder, proxy, sync, async... there are too many interfaces that need to be implemented...
Lukasz Antoniak
@lukasz-antoniak

Hi Team! I am trying to replace ZooKeeper with Atomix. We use ZooKeeper as a strongly consistent store to persist cluster state and notify nodes when peers are joining or leaving the cluster. Strong consistency implies usage of Raft. For development and unit-testing purposes, I have tried to set up a single-node Raft cluster. Unfortunately, atomix.start().join() never completes.

cluster {
  clusterId: "atomix"
  node {
    id: member1
    address: "localhost:5001"
  }
  multicast {
    enabled: true
  }
  discovery {
    type: multicast
  }
}

managementGroup {
  type: raft
  partitions: 1
  members: [member1]
  storage {
    directory: "/tmp/atomix/mgmt"
    # memory or disk
    level: memory
  }
}

partitionGroups.raft {
  type: raft
  partitions: 1
  members: [member1]
  storage {
    directory: "/tmp/atomix/pg"
    # memory or disk
    level: memory
  }
}

Any hints?

Lukasz Antoniak
@lukasz-antoniak

Actually, one time it was hanging, and now I receive a message about port binding. I have verified with lsof that nothing listens on 5001. Even after changing the port to a random value, the issue persists.

[2018-12-16 09:04:21,204] INFO RaftServer{raft-partition-1}{role=CANDIDATE} - Starting election (io.atomix.protocols.raft.roles.CandidateRole:165)
[2018-12-16 09:04:21,205] INFO RaftServer{raft-partition-1} - Transitioning to LEADER (io.atomix.protocols.raft.impl.RaftContext:170)
[2018-12-16 09:04:21,206] INFO RaftServer{raft-partition-1} - Found leader member1 (io.atomix.protocols.raft.impl.RaftContext:170)
[2018-12-16 09:04:21,209] INFO Started (io.atomix.protocols.raft.partition.RaftPartitionGroup:210)
[2018-12-16 09:04:21,209] INFO Started (io.atomix.primitive.partition.impl.DefaultPartitionService:196)
[2018-12-16 09:04:21,559] INFO Started (io.atomix.core.impl.CoreTransactionService:384)
[2018-12-16 09:04:21,559] INFO Started (io.atomix.core.impl.CorePrimitivesService:360)
[2018-12-16 09:04:22,512] INFO 3.0.8 (revision 5b38cc built on 2018-11-13 15:47:34)
(io.atomix.core.Atomix:824)
[2018-12-16 09:04:22,521] WARN Failed to bind TCP server to port 0.0.0.0:5001 due to {} (io.atomix.cluster.messaging.impl.NettyMessagingService:558)
java.net.BindException: Address already in use

I have tried versions 3.0.6, 3.0.8 and 3.1.0-beta2.

Lukasz Antoniak
@lukasz-antoniak
Argh, many apologies. I had hooked up Atomix in the place where I create the ZK client, and it turned out that happens twice in the unit tests. All works fine!
Lukasz Antoniak
@lukasz-antoniak
Hi team! Any plans to support ephemeral entries in AtomicDocumentTree?
jose igancio hernandez velasco
@hjoseigancio_gitlab
Hi, I'm trying to connect an ONOS node to an Atomix cluster by changing cluster.json without restarting the ONOS service. ONOS detects the change in the file but does not make the new connection.
Is it possible to do this without having to stop the ONOS service?
Jordan Halterman
@kuujo
Nope it’s not possible. In past releases we detected the configuration change and restarted the container, but that proved to be pretty buggy. There’s not really a difference between how that was done and just stopping, configuring, and restarting the node though.

Please join Slack!

We are no longer monitoring this channel, which is why nobody’s getting responses. Gitter has never been very easy to monitor, so we moved to Slack. The following link is a permanent invite to the Slack workspace:
https://join.slack.com/t/atomixio/shared_invite/enQtNDgzNjA5MjMyMDUxLTVmMThjZDcxZDE3ZmU4ZGYwZTc2MGJiYjVjMjFkOWMyNmVjYTc5YjExYTZiOWFjODlkYmE2MjNjYzZhNjU2MjY
gianluca
@gianluca.aguzzi_gitlab
Hey! I'm trying to start up a cluster with two computers on the same LAN. On a single machine the cluster works, but across two computers nothing seems to start; the console prints Connection timeout multiple times. What can I do?
Vikram G Palakurthi
@mrtinkz
Hello everyone, first, thanks for the great API. I am trying to read the docs, and all of the links point to http, which is blocked since I am behind a corporate proxy. Can the website be updated to use HTTPS? Thanks.
sladezhang
@sladezhang
Hello everyone. I'm working on a project that needs to replicate a map to several hosts while maintaining a serializable consistency level. Atomix seems to be a perfect tool for this. In our scenario, reads are frequent (20k QPS on average on a 3-node cluster) and writes are rare (1 QPS at maximum). Are there any benchmarks of Atomix's performance with regard to read/write QPS, or any suggestions on whether I should use Atomix in this scenario?
Srivalle
@Srivalle

we want to deploy an Atomix cluster in Kubernetes. We tried with Helm, but the Atomix pods are failing:

k8s-admin@k8s-master:~/atomix-helm$ kubectl describe pod test2-atomix-0
Name:               test2-atomix-0
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=test2-atomix
                    controller-revision-hash=test2-atomix-65cf449cf4
                    statefulset.kubernetes.io/pod-name=test2-atomix-0
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      StatefulSet/test2-atomix
Init Containers:
  configure:
    Image:      ubuntu:16.04
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      /scripts/create-config.sh --nodes=$ATOMIX_NODES > /config/atomix.properties
    Environment:
      ATOMIX_NODES:  3
    Mounts:
      /config from system-config (rw)
      /scripts from init-scripts (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-498sp (ro)
Containers:
  atomix:
    Image:       atomix/atomix:3.0.6
    Ports:       5678/TCP, 5679/TCP
    Host Ports:  0/TCP, 0/TCP
    Args:
      --config
      /etc/atomix/system/atomix.properties
      /etc/atomix/user/atomix.conf
      --ignore-resources
      --data-dir=/var/lib/atomix/data
      --log-level=INFO
      --file-log-level=OFF
      --console-log-level=INFO
    Requests:
      cpu:     500m
      memory:  512Mi
    Liveness:   http-get http://:5678/v1/status delay=60s timeout=10s period=10s #success=1 #failure=3
    Readiness:  http-get http://:5678/v1/status delay=10s timeout=10s period=10s #success=1 #failure=6
    Environment:
      JAVA_OPTS:  -Xmx2G
    Mounts:
      /etc/atomix/system from system-config (rw)
      /etc/atomix/user from user-config (rw)
      /var/lib/atomix from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-498sp (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-test2-atomix-0
    ReadOnly:   false
  init-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test2-atomix-init-scripts
    Optional:  false
  user-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test2-atomix-config
    Optional:  false
  system-config:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  default-token-498sp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-498sp
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ---                 ----               -------
  Warning  FailedScheduling  82s (x30 over 41m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims

But the PersistentVolumeClaim is in Pending state:

k8s-admin@k8s-master:~/atomix-helm$ kubectl get pvc data-test2-atomix-0
NAME                  STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-test2-atomix-0   Pending

Srivalle
@Srivalle
Hi
Johno Crawford
@johnou

Please join Slack!

We are no longer monitoring this channel, which is why nobody’s getting responses. Gitter has never been very easy to monitor, so we moved to Slack. The following link is a permanent invite to the Slack workspace:
https://join.slack.com/t/atomixio/shared_invite/enQtNDgzNjA5MjMyMDUxLTVmMThjZDcxZDE3ZmU4ZGYwZTc2MGJiYjVjMjFkOWMyNmVjYTc5YjExYTZiOWFjODlkYmE2MjNjYzZhNjU2MjY
banik2promit
@banik2promit

Hi,

In order to form an ONOS cluster, we must create an Atomix cluster first. All three Atomix nodes and all three ONOS nodes are on the same 3 physical hosts.

I am facing problems running the Atomix cluster. The first Atomix node is running correctly and displays the following output:

13:39:55.911 [main] INFO io.atomix.core.Atomix - 3.0.7 (revision 9e8e73 built on 2018-10-11 18:07:26)

13:39:56.241 [netty-messaging-event-epoll-server-0] INFO i.a.c.m.impl.NettyMessagingService - TCP server listening for connections on 0.0.0.0:5679
13:39:56.243 [netty-messaging-event-epoll-server-0] INFO i.a.c.m.impl.NettyMessagingService - Started
13:39:56.306 [atomix-bootstrap-heartbeat-receiver] INFO i.a.c.d.BootstrapDiscoveryProvider - Joined
13:39:56.306 [atomix-bootstrap-heartbeat-receiver] INFO i.a.c.i.DefaultClusterMembershipService - atomix-1 - Member activated: Member{id=atomix-1, address=192.168.0.211:5679, properties={}}
13:39:56.308 [atomix-bootstrap-heartbeat-receiver] INFO i.a.c.i.DefaultClusterMembershipService - Started
13:39:56.309 [atomix-cluster-0] INFO i.a.c.m.i.DefaultClusterCommunicationService - Started
13:39:56.311 [atomix-cluster-0] INFO i.a.c.m.i.DefaultClusterEventService - Started
13:39:56.318 [atomix-0] INFO i.a.p.p.i.DefaultPartitionGroupMembershipService - Started
13:39:56.337 [atomix-0] INFO i.a.p.p.i.HashBasedPrimaryElectionService - Started
13:39:56.371 [atomix-0] INFO i.a.p.r.p.impl.RaftPartitionServer - Starting server for partition PartitionId{id=1, group=system}
13:39:56.577 [raft-server-system-partition-1] INFO i.a.protocols.raft.impl.RaftContext - RaftServer{system-partition-1} - Transitioning to FOLLOWER
13:40:00.157 [raft-server-system-partition-1] WARN i.a.p.raft.roles.FollowerRole - RaftServer{system-partition-1}{role=FOLLOWER} - java.net.ConnectException
13:40:00.158 [raft-server-system-partition-1] WARN i.a.p.raft.roles.FollowerRole - RaftServer{system-partition-1}{role=FOLLOWER} - java.net.ConnectException

The problem occurs while running the second and third Atomix nodes. While running those two nodes, the Atomix log displays "Failed to acquire storage lock; ensure each Raft server is configured with a distinct storage directory". Full logs are given below:

13:54:13.743 [main] INFO io.atomix.core.Atomix - 3.0.7 (revision 9e8e73 built on 2018-10-11 18:07:26)

13:54:13.984 [netty-messaging-event-epoll-server-0] INFO i.a.c.m.impl.NettyMessagingService - TCP server listening for connections on 0.0.0.0:5679
13:54:13.985 [netty-messaging-event-epoll-server-0] INFO i.a.c.m.impl.NettyMessagingService - Started
13:54:14.342 [atomix-bootstrap-heartbeat-receiver] INFO i.a.c.d.BootstrapDiscoveryProvider - Joined
13:54:14.343 [atomix-bootstrap-heartbeat-receiver] INFO i.a.c.i.DefaultClusterMembershipService - atomix-2 - Member activated: Member{id=atomix-2, address=192.168.0.213:5679, properties={}}
13:54:14.345 [atomix-bootstrap-heartbeat-receiver] INFO i.a.c.i.DefaultClusterMembershipService - Started
13:54:14.345 [atomix-cluster-0] INFO i.a.c.m.i.DefaultClusterCommunicationService - Started
13:54:14.348 [atomix-cluster-0] INFO i.a.c.m.i.DefaultClusterEventService - Started
13:54:14.464 [atomix-cluster-heartbeat-sender] INFO i.a.c.i.DefaultClusterMembershipService - atomix-1 - Member updated: Member{id=atomix-1, address=192.168.0.211:5679, properties={}}
13:54:14.622 [atomix-partition-group-membership-service-0] INFO i.a.p.p.i.DefaultPartitionGroupMembershipService - Started
13:54:14.638 [atomix-partition-group-membership-service-0] INFO i.a.p.p.i.HashBasedPrimaryElectionService - Started
13:54:14.673 [atomix-partition-group-membership-service-0] INFO i.a.p.r.p.impl.RaftPartitionServer - Starting server for partition PartitionId{id=1, group=system}
Exception in thread "main" java.util.concurrent.CompletionException: io.atomix.storage.StorageException: Failed to acquire storage lock; ensure each Raft server is configured with a distinct storage directory

Emil Kirschner
@entzik
Joined Slack, but not getting any replies either…
Basanth Gowda
@basanth_gitlab
Hello - this is Basanth. I am new to Atomix, though I have been following it for a little more than a year.
I was wondering if there is a way to run Atomix on a single JVM for a distributed Map. We will run with multiple JVMs in production, but we should be able to get it running on desktops.
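For reference, a minimal single-JVM sketch (the member ID, port, paths, and group names are illustrative assumptions, and it assumes a version where the map builder yields a java.util.Map):

import io.atomix.cluster.Node;
import io.atomix.cluster.discovery.BootstrapDiscoveryProvider;
import io.atomix.core.Atomix;
import io.atomix.protocols.raft.partition.RaftPartitionGroup;
import java.io.File;
import java.util.Map;

// A single-member cluster: the node bootstraps against itself.
Atomix atomix = Atomix.builder()
    .withMemberId("member1")
    .withAddress("localhost:5000")
    .withMembershipProvider(BootstrapDiscoveryProvider.builder()
        .withNodes(Node.builder().withId("member1").withAddress("localhost:5000").build())
        .build())
    .withManagementGroup(RaftPartitionGroup.builder("system")
        .withMembers("member1")
        .withNumPartitions(1)
        .withStorageDirectory(new File("/tmp/atomix/system"))
        .build())
    .withPartitionGroups(RaftPartitionGroup.builder("raft")
        .withMembers("member1")
        .withNumPartitions(1)
        .withStorageDirectory(new File("/tmp/atomix/raft"))
        .build())
    .build();
atomix.start().join();

// The distributed map can then be used like an ordinary java.util.Map.
Map<String, String> map = atomix.<String, String>mapBuilder("my-map").build();
map.put("key", "value");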
Matthew Burghoffer
@mjburghoffer
@johnou the Slack invite link is no longer valid - is it possible to make a new one (and put the link somewhere accessible for others)?
Marc Sernatinger
@msernatinger
Is there info on the slack server somewhere?
Came by this chat via https://atomix.io/community/ and didn't see any mention of a Slack
santhoshkumar
@santhoshTpixler
Hello, @kuujo, I have been playing around with Atomix for a week and it is amazing; clearly a lot of effort went into it. Thank you for providing such great stuff.
raushan47
@raushan47
Hi All, I am trying to create a distributed map:

MultiRaftProtocol protocol = MultiRaftProtocol.builder()
    .withReadConsistency(ReadConsistency.LINEARIZABLE)
    .build();
Map<String, String> map = atomix.<String, String>mapBuilder("my-map")
    .withProtocol(protocol)
    .withKeyType(String.class)
    .withValueType(String.class)
    .build();

but I always get a NullPointerException in the build step. Any idea what is missing?
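A hedged guess, not confirmed in this thread: MultiRaftProtocol has to resolve to a Raft partition group configured on the Atomix instance, so one thing to check is that such a group exists and, if it has a non-default name, that the protocol names it explicitly (the group name "raft" below is an assumption):

// Assumption: the Atomix instance was built with a Raft partition group named "raft".
MultiRaftProtocol protocol = MultiRaftProtocol.builder("raft")
    .withReadConsistency(ReadConsistency.LINEARIZABLE)
    .build();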
raushan47
@raushan47
@damianoneill @santhoshTpixler - do you have a Slack invitation link?
@kuujo - could you please share the Slack invitation link, as the old link is not active?
raushan47
@raushan47
@jhalterman - do you have an invitation link to the Slack community for Atomix?
I just got this link from Slack so it should work
raushan47
@raushan47
@kuujo - thanks.
codealways
@DarshanMurthy
hey all!
hirik
@hirik
Hi, I'm new to Atomix. Is rolling upgrade supported in an Atomix cluster?
Xun Liu
@liuxunorg
hi @kuujo
I developed a service, atomix-java-3.0-server, with the atomix-java-3.0 version.
Can I use atomix-go-client to connect to atomix-java-3.0-server?