    Atin Mukherjee
    @atinmu
    @rapkeru technically there shouldn't be any limitations in moving from 3.7 to 3.13. FWIW, it'd be good if you could reach out to the gluster-users ML stating the exact failures and other details.
    ashish
    @buts101
    Is there a way to configure glusterfs with docker-swarm?
    sankarshan
    @sankarshanmukhopadhyay
    Not too familiar with Swarm, but the basic logic should be similar to how Gluster is set up/configured with k8s. So the answer to your question @buts101 is a yes.
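    A minimal sketch of that approach, assuming an existing Gluster volume named gv0 served from gluster1.example.com (both placeholder names) and the Gluster FUSE client installed on every Swarm node:

    # on every Swarm node: mount the existing Gluster volume via the FUSE client
    mkdir -p /mnt/gv0
    mount -t glusterfs gluster1.example.com:/gv0 /mnt/gv0
    # then bind-mount the shared path into a Swarm service
    docker service create --name web \
      --mount type=bind,source=/mnt/gv0,target=/usr/share/nginx/html \
      nginx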
    Matjaž Mav
    @matjazmav
    @buts101 I found a few good posts about Docker and Gluster; it is really easy to set up distributed storage. Ping me and I can forward these posts to you.
    Amar Tumballi
    @amarts
    Can we revive this room?
    for public usage?
    Vijay Bellur
    @vbellur
    +1
    Vijay Bellur
    @vbellur
    o/
    Vijay Bellur
    @vbellur
    Hey All, we are starting the documentation hackathon now. Let us hack our way to documentation glory! :)
    Aravinda Vishwanathapura
    @aravindavk
    yay!
    Amar Tumballi
    @amarts
    Ah! Have we already started? I thought there were another 20 minutes left
    :-)
    hari gowtham
    @harigowtham
    Are we writing down what we are going to work on to avoid duplicating effort?
    Aravinda Vishwanathapura
    @aravindavk
    Added geo-replication feature documentation to glusterd2 repo #hackathon gluster/glusterd2#1044
    Vijay Bellur
    @vbellur
    @harigowtham that's a good thought!
    @aravindavk thank you, have updated the tracking document with your contribution
    Amar Tumballi
    @amarts
    Sent a patch for setting a standard for commit messages
    Please do take a look, and comment on it if there are concerns
    Michael Weichert
    @mweichert
    Hey all. Does anyone know if there's a Kubernetes distribution which supports Gluster 4.0 yet?
    Vijay Bellur
    @vbellur
    @mweichert not yet.
    Michael Weichert
    @mweichert
    Hey guys. I have a volume which has snapshots on three nodes. The third node has been removed from the volume and no longer exists. But now I can't mount the volume - gluster still wants to talk to node 3. I can't peer detach because there are snapshots associated with node 3. I can't seem to delete my snapshots either. Any ideas on how to force remove snapshots or peers?
    Atin Mukherjee
    @atinmu
    @mweichert how did you remove the node in the first place? Peer detach shouldn’t go through if you have an active volume hosted by the same node
    Michael Weichert
    @mweichert
    Atin, the node went down and we're just trying to get it back up. So we removed the brick from each volume.
    Sorry, I should clarify. gluster-3 went down due to a filesystem crash. In an effort to quickly get our gluster cluster back up and running, we just want to get things going without gluster-3.
    So we've deleted the virtual server entirely and removed the gluster-3 bricks from each volume
    Michael Weichert
    @mweichert
    Whenever I attempt to delete a snapshot, I get "snapshot [name] might not be in an usable state"
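    A hedged sketch of the cleanup commands usually involved here (snapshot and host names are placeholders, and the force variants are destructive, so use them with care):

    # list snapshots and inspect the problematic one
    gluster snapshot list
    gluster snapshot info snap1
    # deactivate before deleting, then remove it
    gluster snapshot deactivate snap1
    gluster snapshot delete snap1
    # once nothing references the dead node any more, detach it
    gluster peer detach gluster-3 force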
    Nicolas Goudry
    @nicolas-goudry
    Hi, anybody there? :smiley:
    Ju Lim (Red Hat)
    @julienlim
    hi @nicolas-goudry we're here
    Nicolas Goudry
    @nicolas-goudry
    Maybe you can help me!
    I’m trying to set up GlusterFS on Kubernetes
    So I found Heketi and tried to follow their docs, but I’m kinda stuck on requirements…
    I updated my cluster with kops to add 3 new nodes.
    Everything went well, but now some of my deployments also use these 3 new nodes…
    Do you know how to « reserve » those 3 nodes for GlusterFS?
    Also, how do I attach raw block devices to nodes? There’s no mention of this in the K8S docs on nodes… but there is some mention in the K8S volumes documentation.
    I raised an issue on gluster-kubernetes (gluster/gluster-kubernetes#515) about this
    you might also try the #heketi channel on IRC (freenode)
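    On the « reserve » part, a common approach is to label and taint those three nodes so that only the GlusterFS pods (given a matching nodeSelector and toleration) get scheduled there; a rough sketch with placeholder node names:

    # label the nodes so the GlusterFS daemonset can select them
    kubectl label nodes node-4 node-5 node-6 storagenode=glusterfs
    # taint them so ordinary workloads stay off
    kubectl taint nodes node-4 node-5 node-6 dedicated=glusterfs:NoSchedule
    # the GlusterFS pods then need nodeSelector storagenode=glusterfs
    # plus a toleration for dedicated=glusterfs:NoSchedule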
    Nicolas Goudry
    @nicolas-goudry
    Yeah I read all blog posts about GFS/K8S… But none explains how to create those 3 nodes, and they don’t explain how to configure them :cry:
    All tutorials start AFTER node creation… I don’t understand why…
    Will try #heketi on IRC also! Thanks
    Atin Mukherjee
    @atinmu
    @humblec ^^ can you please help with the pointers?
    Michael Weichert
    @mweichert
    Is Gluster 4.x a stable or preview release?
    sdeepugd
    @sdeepugd
    Hi. We use the Gluster file system along with Docker containers, and these setups are used by users. At any point in time, more than 1000 members will read and write data simultaneously. What kind of volume should we go for?
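    For that many concurrent readers and writers, a distributed-replicated volume is the usual starting point; a hedged example with placeholder hosts and brick paths (six bricks, replica 3, giving a 2x3 layout):

    gluster volume create shared replica 3 \
      server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 \
      server4:/bricks/b2 server5:/bricks/b2 server6:/bricks/b2
    gluster volume start shared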
    AlexSyS777
    @AlexSyS777

    Hello everyone,

    I see an issue on my cluster when I tried to create a block volume via heketi:

    [root@heketi-75dcfb7d44-c5bqc /]# heketi-cli blockvolume create --ha 3 --size 1
    Error: Failed to allocate new block volume: Block Hosting Volume Creation is disabled. Create a Block hosting volume and try again.
    [root@heketi-75dcfb7d44-c5bqc /]# heketi-cli cluster list
    Clusters:
    Id:c0775ad2dfcd0957fc3a761fb00e06f9 [file][block]
    [root@heketi-75dcfb7d44-c5bqc /]# exit
    kubernetes@k8s-master-dev-1:~$ ksys exec -ti glusterfs-qczp9 bash
    [root@k8s-node-dev-1 /]# systemctl status gluster-blockd
    ● gluster-blockd.service - Gluster block storage utility
    Loaded: loaded (/usr/lib/systemd/system/gluster-blockd.service; enabled; vendor preset: disabled)
    Active: inactive (dead)

    Oct 17 20:23:10 k8s-node-dev-1 systemd[1]: Dependency failed for Gluster block storage utility.
    Oct 17 20:23:10 k8s-node-dev-1 systemd[1]: Job gluster-blockd.service/start failed with result ‘dependency’.
    [root@k8s-node-dev-1 /]#

    What's wrong?
    I'm using heketi on k8s
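    The "Dependency failed" line suggests checking which unit gluster-blockd is waiting on inside that pod; a hedged way to inspect it (gluster-block-target and tcmu-runner are the usual suspects, but verify against your build):

    systemctl list-dependencies gluster-blockd
    systemctl status gluster-block-target tcmu-runner rpcbind
    journalctl -u gluster-blockd -u tcmu-runner --no-pager | tail -n 50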

    AlexSyS777
    @AlexSyS777
    Just for info, the issue above is related to https://bugzilla.redhat.com/show_bug.cgi?id=1462792 and has been solved :)
    RockychengJson
    @RockychengJson
    Hi, all. Can I achieve a conditional mount using glusterfs? Here is my situation: I have several groups, and each group has the same directory structure but different group data. Can I pass a parameter such as a group name, so that glusterfs can determine which group's files to use? For example, I have a directory named "root_dir", and inside root_dir I have three sub-directories, namely "group_one", "group_two", and "group_three". I mount "root_dir" into a Kubernetes pod, and the current group is group one, so I want the files in "group_one" to be available with "rwx", but the other two sub-directories to be unavailable with "---".
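    One way to approximate this, assuming Gluster 3.12 or later where sub-directory mounts are supported (volume, subnet, and path names below are placeholders):

    # optionally restrict which clients may mount each sub-directory
    gluster volume set root_vol auth.allow "/group_one(10.0.1.*)"
    # mount only that group's sub-directory instead of the whole volume
    mount -t glusterfs server1:/root_vol/group_one /mnt/group_one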