This isn't a heavily used comms channel for Gluster, is it?
Hello, does anyone know the GlusterFS upgrade constraints? I mean, the Gluster Docs have the same steps for all upgrade versions (I checked from 3.7 onwards). Can I upgrade from 3.7 to 3.13 (I tried already and failed), or do I need to upgrade through some intermediary versions?
So, I don't recall that large an upgrade being completed by anyone. @atinmu what's a good intermediate step?
@rapkeru technically there shouldn't be any limitations on moving from 3.7 to 3.13. FWIW, it'd be good if you could reach out to the gluster-users ML stating the exact failures and other details.
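For reference, the generic per-server procedure from the Gluster upgrade docs looks roughly like the sketch below. This is a hedged outline, not a tested recipe for a 3.7→3.13 jump: the package name assumes a yum-based distro with the target-version repo already enabled, and `<volname>` is a placeholder.

```shell
# Hedged sketch of the generic offline upgrade, run on one server at a time.

# 1. Stop Gluster services on this node
systemctl stop glusterd
killall glusterfsd glusterfs

# 2. Upgrade the packages (target-version repo must be enabled first;
#    package name assumes a yum-based distro)
yum update glusterfs-server

# 3. Restart the management daemon
systemctl start glusterd

# 4. Before moving to the next server, confirm peers reconnect and heal completes
gluster peer status
gluster volume heal <volname> info   # <volname> is a placeholder
```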
Is there a way to configure GlusterFS with Docker Swarm?
Not too familiar with Swarm, but the basic logic should be similar to how Gluster is set up/configured with k8s. So the answer to your question @buts101 is a yes.
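One common pattern (an assumption on my part, not an official recipe) is to mount the Gluster volume with the FUSE client on every Swarm node and then bind-mount that path into the service; `server1` and `gv0` below are placeholder hostname/volume names.

```shell
# Sketch: consume a Gluster volume from Docker Swarm via a host FUSE mount.
# "server1" and "gv0" are placeholders for your Gluster server and volume.

# On every Swarm node, mount the volume with the Gluster FUSE client
mkdir -p /mnt/gluster
mount -t glusterfs server1:/gv0 /mnt/gluster

# Then bind-mount that host path into a Swarm service
docker service create --name web \
  --mount type=bind,src=/mnt/gluster,dst=/usr/share/nginx/html \
  nginx
```

Since every node mounts the same replicated volume, the service sees the same data wherever Swarm schedules it.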
@buts101 I found a few good posts about Docker and Gluster; it is really easy to set up distributed storage. Ping me and I can forward the posts to you.
Can we revive this room?
for public usage?
Hey All, we are starting the documentation hackathon now. Let us hack our way to documentation glory! :)
Ah! Have we already started? I thought there were another 20 mins left.
Are we writing down what we are going to work on, to avoid duplicating effort?
Added geo-replication feature documentation to glusterd2 repo #hackathon gluster/glusterd2#1044
@harigowtham that's a good thought!
@aravindavk thank you, have updated the tracking document with your contribution
Sent a patch for setting a standard for commit messages.
Please do take a look, and comment on it if there are concerns.
Hey all. Does anyone know if there's a Kubernetes distribution which supports Gluster 4.0 yet?
@mweichert not yet.
Hey guys. I have a volume which has snapshots on three nodes. The third node has been removed from the volume and no longer exists, but now I can't mount the volume: gluster still wants to talk to node 3. I can't peer detach because there are snapshots associated with node 3, and I can't seem to delete my snapshots either. Any ideas of how to force-remove snapshots or peers?
@mweichert how did you remove the node in the first place? Peer detach shouldn't go through if you have an active volume hosted on the same node.
Atin, the node went down and we're just trying to get it up. So we removed the brick from each volume.
Sorry, I should clarify. gluster-3 went down to a filesystem crash. In an effort to quickly get our gluster cluster back up and running, we just want to get things up and running without gluster-3.
So we've deleted the virtual server entirely and removed the gluster-3 bricks from each volume
Whenever I attempt to delete a snapshot, I get "snapshot [name] might not be in an usable state".
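For anyone hitting the same wall, the snapshot/peer CLI steps look roughly like this. `snap1` and `gluster-3` are placeholders, and the on-disk metadata cleanup is an assumption some users resort to when the referenced node is gone, not a documented procedure.

```shell
# Sketch of cleaning up snapshots that block a peer detach.
# "snap1" and "gluster-3" are placeholder snapshot/peer names.

gluster snapshot list                 # see which snapshots still exist
gluster snapshot status snap1         # check which bricks/nodes it references
gluster snapshot delete snap1         # normal deletion path

# If deletion keeps failing because the node no longer exists, some users
# clear the stale snapshot metadata under /var/lib/glusterd/snaps/ on the
# surviving nodes and restart glusterd. Take a backup first; last resort only.

gluster peer detach gluster-3 force   # then force-detach the dead peer
```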
Hi, anybody there? :smiley:
hi @nicolas-goudry we're here
Maybe you can help me!
I’m trying to setup GlusterFS on Kubernetes
So I found Heketi and tried to follow their docs, but I’m kinda stuck on requirements…
I updated my cluster with kops to add 3 new nodes.
Everything went well, but now some of my deployments also use these 3 new nodes…
Do you know how to "reserve" those 3 nodes for GlusterFS?
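A common approach (sketched below, with assumed node names and label key) is to label the storage nodes so the GlusterFS DaemonSet's nodeSelector targets them, and taint them so ordinary workloads stay off; `storagenode=glusterfs` matches the label the gluster-kubernetes DaemonSet commonly selects on, but check your manifests.

```shell
# Sketch: dedicate three nodes to GlusterFS with a label plus a taint.
# Node names and the "storagenode=glusterfs" key are assumptions.

for n in node-4 node-5 node-6; do
  # Label so the GlusterFS DaemonSet's nodeSelector can target these nodes
  kubectl label node "$n" storagenode=glusterfs
  # Taint so ordinary deployments are not scheduled onto them
  kubectl taint node "$n" storagenode=glusterfs:NoSchedule
done

# The GlusterFS pods then need a matching toleration in their pod spec:
# tolerations with key "storagenode", value "glusterfs", effect "NoSchedule".
```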