    Aravinda Vishwanathapura
    @aravindavk
    Added geo-replication feature documentation to glusterd2 repo #hackathon gluster/glusterd2#1044
    Vijay Bellur
    @vbellur
    @harigowtham that's a good thought!
    @aravindavk thank you, have updated the tracking document with your contribution
    Amar Tumballi
    @amarts
    Sent a patch for setting a standard on commit messages
    do take a look please, and comment on it if there are concerns
    Michael Weichert
    @mweichert
    Hey all. Does anyone know if there's a Kubernetes distribution which supports Gluster 4.0 yet?
    Vijay Bellur
    @vbellur
    @mweichert not yet.
    Michael Weichert
    @mweichert
    Hey guys. I have a volume with snapshots on three nodes. The third node has been removed from the volume and no longer exists, but now I can't mount the volume: gluster still wants to talk to node 3. I can't peer detach because there are snapshots associated with node 3, and I can't seem to delete my snapshots either. Any ideas on how to force-remove snapshots or peers?
    Atin Mukherjee
    @atinmu
    @mweichert how did you remove the node in the first place? Peer detach shouldn’t go through if you have an active volume hosted on the same node
    Michael Weichert
    @mweichert
    Atin, the node went down and we're just trying to get it back up. So we removed its brick from each volume.
    Sorry, I should clarify: gluster-3 went down due to a filesystem crash. In an effort to quickly get our gluster cluster back up and running, we just want to get things working without gluster-3.
    So we've deleted the virtual server entirely and removed the gluster-3 bricks from each volume
    Michael Weichert
    @mweichert
    Whenever I attempt to delete a snapshot, I get "snapshot [name] might not be in an usable state"
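    For reference, the commands usually suggested for cleaning up snapshots and a dead peer look roughly like the following (the snapshot name here is hypothetical); deactivating a snapshot before deleting it sometimes gets past the "usable state" error, and a peer can only be detached once no snapshots or bricks reference it:

    # list snapshots, deactivate and delete the stale one, then drop the dead peer
    gluster snapshot list
    gluster snapshot deactivate snap1
    gluster snapshot delete snap1
    gluster peer detach gluster-3 force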
    Nicolas Goudry
    @nicolas-goudry
    Hi, anybody there? :smiley:
    Ju Lim (Red Hat)
    @julienlim
    hi @nicolas-goudry we're here
    Nicolas Goudry
    @nicolas-goudry
    Maybe you can help me!
    I’m trying to set up GlusterFS on Kubernetes
    So I found Heketi and tried to follow their docs, but I’m kinda stuck on requirements…
    I updated my cluster with kops to add 3 new nodes.
    Everything went well, but now some of my deployments also use these 3 new nodes…
    Do you know how to "reserve" those 3 nodes for GlusterFS?
    Also, how do I attach raw block devices to nodes? There's no mention of this in the K8S docs on nodes… but there is some mention in the K8S volumes documentation.
    I raised an issue on gluster-kubernetes (gluster/gluster-kubernetes#515) about this
    you might also try the #heketi channel on IRC (freenode)
    Nicolas Goudry
    @nicolas-goudry
    Yeah I read all blog posts about GFS/K8S… But none explains how to create those 3 nodes, and they don’t explain how to configure them :cry:
    All tutorials start AFTER node creation… I don’t understand why…
    Will try #heketi on IRC also! Thanks
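    For anyone else wondering about reserving nodes: the usual approach is Kubernetes taints plus labels, so ordinary workloads avoid the storage nodes and the GlusterFS DaemonSet selects them via a nodeSelector (and a matching toleration). A minimal sketch, assuming hypothetical node names; storagenode=glusterfs is the label the gluster-kubernetes deploy scripts use, as far as I recall:

    # keep ordinary pods off the three storage nodes
    kubectl taint nodes node-a node-b node-c storagenode=glusterfs:NoSchedule
    # label them so the GlusterFS DaemonSet's nodeSelector can target them
    kubectl label nodes node-a node-b node-c storagenode=glusterfs

    Attaching raw block devices is generally done outside Kubernetes itself, e.g. via the cloud provider or kops instance configuration.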
    Atin Mukherjee
    @atinmu
    @humblec ^^ can you please help with the pointers?
    Michael Weichert
    @mweichert
    Is Gluster 4.x a stable or preview release?
    sdeepugd
    @sdeepugd
    Hi. We use the Gluster file system along with Docker containers, and these setups are used by end users, so at any point in time more than 1000 people will be reading and writing data simultaneously. What kind of volume should we go for?
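    Not a full answer, but for many clients reading and writing concurrently the usual starting point is a replica 3 (or distributed-replicated) volume; a minimal sketch with hypothetical server and brick names:

    # three-way replicated volume across three servers
    gluster volume create shared replica 3 srv1:/bricks/b1 srv2:/bricks/b1 srv3:/bricks/b1
    gluster volume start shared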
    AlexSyS777
    @AlexSyS777

    Hello everyone

    I see an issue on my cluster when trying to create a block volume via heketi

    [root@heketi-75dcfb7d44-c5bqc /]# heketi-cli blockvolume create --ha 3 --size 1
    Error: Failed to allocate new block volume: Block Hosting Volume Creation is disabled. Create a Block hosting volume and try again.
    [root@heketi-75dcfb7d44-c5bqc /]# heketi-cli cluster list
    Clusters:
    Id:c0775ad2dfcd0957fc3a761fb00e06f9 [file][block]
    [root@heketi-75dcfb7d44-c5bqc /]# exit
    kubernetes@k8s-master-dev-1:~$ ksys exec -ti glusterfs-qczp9 bash
    [root@k8s-node-dev-1 /]# systemctl status gluster-blockd
    ● gluster-blockd.service - Gluster block storage utility
    Loaded: loaded (/usr/lib/systemd/system/gluster-blockd.service; enabled; vendor preset: disabled)
    Active: inactive (dead)

    Oct 17 20:23:10 k8s-node-dev-1 systemd[1]: Dependency failed for Gluster block storage utility.
    Oct 17 20:23:10 k8s-node-dev-1 systemd[1]: Job gluster-blockd.service/start failed with result ‘dependency’.
    [root@k8s-node-dev-1 /]#

    What's wrong?
    I'm using heketi on k8s

    AlexSyS777
    @AlexSyS777
    Just for info, the issue above is related to https://bugzilla.redhat.com/show_bug.cgi?id=1462792 and is solved :)
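    For anyone hitting the same error: heketi refuses to create block volumes until a block-hosting volume exists, and it can create one automatically when that is enabled in heketi.json. A sketch of the relevant keys, as I recall them from recent heketi releases (verify against your version's sample config):

    "glusterfs": {
        "auto_create_block_hosting_volume": true,
        "block_hosting_volume_size": 500
    }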
    RockychengJson
    @RockychengJson
    Hi all. Can I achieve a conditional mount using GlusterFS? Here is my situation: I have several groups, and each group has the same directory structure but different group data. Can I pass a parameter such as the group name, so that GlusterFS can determine which group's files to use? For example, I have a directory named "root_dir", and inside root_dir I have three sub-directories: "group_one", "group_two" and "group_three". I mount root_dir into a Kubernetes pod, and the current group is group one, so I want the files in "group_one" to be available with "rwx" while the other two sub-directories are unavailable with "---".
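    One approach that might fit, assuming GlusterFS 3.12 or newer and hypothetical volume/server names: keep a single volume, but mount only the sub-directory for the current group into each pod, so the other groups' directories are never visible to it:

    # mount just the group_one sub-directory instead of the whole volume
    # (the sub-directory must already exist on the volume)
    mount -t glusterfs server1:/root_vol/group_one /mnt/group_one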
    miguel de anda
    @mdeanda
    Hi all. My home gluster setup seems to have gotten messed up again after a reboot. I have multiple bricks, all in a 2+1 replica with an arbiter. After the reboot, commands run from the arbiter server are painfully slow and time out (peer status, volume info). When the arbiter is down it functions normally. The interesting thing is that all three hosts also locally mount the bricks, and the data is available on all of them. I'm not sure how to troubleshoot this.
    rmartinez3
    @rmartinez3
    Hello everyone, I have a geo-replication setup running and have noticed that the changelogs/htime file on the gluster brick keeps growing in size. I have been reading that it is used by geo-replication. Is there a way I can either archive or discard some of the data? I have seen the HTIME file go up to 200 MB. If there's a way I can safely shorten or archive the file, that would be helpful. Thanks
    ssuio
    @ssuio
    Branch problem: why does merging v3.7.13 into v3.7.15 produce 300+ conflicts? Is there any way to merge to v3.7.15 without conflicts?
    sdeepugd
    @sdeepugd
    Hi everyone. I am getting "Error : Request timed out" while doing a rebalance. I have added new bricks to my replicated volume, i.e. it was first a 1x3 volume and I added three more bricks to make it a distributed-replicated (2x3) volume. What should I do about the timeout error?
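    One thing worth checking, assuming a volume name of myvol: a CLI timeout does not necessarily mean the rebalance stopped, since it may still be running in the background; its progress can be checked with:

    gluster volume rebalance myvol status
    gluster volume status myvol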
    DamonBlais
    @DamonBlais
    Hey there! I am having zero luck building any of the branches on Ubuntu 18.04 -- anyone have ideas? Without tirpc it complains that libgfrpc has no reference to log2; with tirpc, 'release-5' fails at https://gist.github.com/DamonBlais/f1b409e83de7d7e9cba6a1d236c8f738
    DamonBlais
    @DamonBlais
    'release-6' and 'master' fail at the same spot with libtirpc-dev installed from the Ubuntu 18.04 repository. With the library removed from the system, they fail at libgfrpc instead. https://gist.github.com/DamonBlais/2b1fb66e7b9eba577c6034cef407617d
    DamonBlais
    @DamonBlais
    script / lines used to compile (from the documentation, with a little more verbosity) https://gist.github.com/DamonBlais/bc44b719950451c3dd7235d8b795f500
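    For comparison, the build sequence I would expect from the developer docs is roughly the following; the package list is from memory and may well be incomplete for 18.04:

    # build dependencies (partial), then the standard autotools build
    sudo apt-get install build-essential autoconf automake libtool pkg-config flex bison libssl-dev libtirpc-dev
    ./autogen.sh
    ./configure --enable-debug
    make -j4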
    vans163
    @vans163
    hello guys can anyone help me with file-snapshot?
    I tried to set features.file-snapshot on my volume using gluster 5.3
    volume set: failed: option : features.file-snapshot does not exist
    I'm trying to accomplish file-level snapshots
    onewings
    @onewings
    Hi guys, I have a question.
    I have a distributed-replicated 2x2 configuration. How can I add two more bricks on a new node in such a way that all the bricks are well balanced across the three nodes? In my tests, the 2 bricks added (from the same node) contain the same data after a rebalance, which isn't the best for fault tolerance (if that node goes down the data is lost).
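    A sketch of what usually avoids that, with hypothetical volume and brick names: when expanding a replica 2 volume, the two bricks passed to one add-brick call become a replica pair, so they should sit on two different nodes; then rebalance:

    # the two new bricks form one replica set, so place them on different nodes
    gluster volume add-brick myvol node3:/bricks/b3 node1:/bricks/b4
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status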
    Sudheer Singh
    @sudheerit11
    Hi Guys,
    Is there any Docker plugin for GlusterFS?
    sancroth
    @sancroth
    Hey guys quick question.
    I want certain directories replicated between 2 servers. For example, I want /home/user/dir1, /home/user/dir2 and /home/user/dir3/dir1 to be common on both servers, but the names cannot change since they are already used by the existing setup.
    Question 1: Is it a killer to create and mount 3 different replica volumes, one for each?
    Question 2: Can it be done with a single volume somehow and still keep the replication?
    My problem right now is that a single mount would expose the contents of every directory. Note that each of the directories given has several more directories at the same level, so turning that level into a brick is impossible as far as I understand (it would also replicate all the other dirs at the same level).
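    On question 2, one pattern that might work, assuming GlusterFS 3.12+ and hypothetical names: create a single replica 2 volume, keep one directory per shared path inside it, and mount each sub-directory onto the existing paths so the names the setup expects stay the same:

    # one replicated volume for everything
    gluster volume create shared replica 2 server1:/bricks/shared server2:/bricks/shared
    gluster volume start shared
    # mount only the relevant sub-directories onto the paths the setup already uses
    # (create the sub-directories on the volume first via a regular mount)
    mount -t glusterfs server1:/shared/dir1 /home/user/dir1
    mount -t glusterfs server1:/shared/dir2 /home/user/dir2
    mount -t glusterfs server1:/shared/dir3_dir1 /home/user/dir3/dir1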