    Ju Lim (Red Hat)
    @julienlim
    you might also try the #heketi channel on IRC (freenode)
    Nicolas Goudry
    @nicolas-goudry
    Yeah, I read all the blog posts about GFS/K8S… but none of them explains how to create those 3 nodes, nor how to configure them :cry:
    All the tutorials start AFTER node creation… I don't understand why…
    Will try the #heketi channel on IRC as well! Thanks
    Atin Mukherjee
    @atinmu
    @humblec ^^ can you please help with the pointers?
    Michael Weichert
    @mweichert
    Is Gluster 4.x a stable or a preview release?
    sdeepugd
    @sdeepugd
    Hi. We use the Gluster file system along with Docker containers, and these setups are used by our users, so at any point in time more than 1000 users may be reading and writing data simultaneously. What kind of volume should we go for?
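    For this kind of many-clients workload, a distributed-replicated volume is the usual starting point; a minimal sketch, where the hostnames and brick paths are illustrative:

        # two replica-3 sets spread over three nodes: distribution adds
        # throughput, replica 3 keeps data available if a node dies
        gluster volume create shared replica 3 \
          node1:/data/brick1 node2:/data/brick1 node3:/data/brick1 \
          node1:/data/brick2 node2:/data/brick2 node3:/data/brick2
        gluster volume start shared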
    AlexSyS777
    @AlexSyS777

    Hello everyone,

    I see an issue on my cluster when trying to create a block volume via heketi:

    [root@heketi-75dcfb7d44-c5bqc /]# heketi-cli blockvolume create --ha 3 --size 1
    Error: Failed to allocate new block volume: Block Hosting Volume Creation is disabled. Create a Block hosting volume and try again.
    [root@heketi-75dcfb7d44-c5bqc /]# heketi-cli cluster list
    Clusters:
    Id:c0775ad2dfcd0957fc3a761fb00e06f9 [file][block]
    [root@heketi-75dcfb7d44-c5bqc /]# exit
    kubernetes@k8s-master-dev-1:~$ ksys exec -ti glusterfs-qczp9 bash
    [root@k8s-node-dev-1 /]# systemctl status gluster-blockd
    ● gluster-blockd.service - Gluster block storage utility
    Loaded: loaded (/usr/lib/systemd/system/gluster-blockd.service; enabled; vendor preset: disabled)
    Active: inactive (dead)

    Oct 17 20:23:10 k8s-node-dev-1 systemd[1]: Dependency failed for Gluster block storage utility.
    Oct 17 20:23:10 k8s-node-dev-1 systemd[1]: Job gluster-blockd.service/start failed with result ‘dependency’.
    [root@k8s-node-dev-1 /]#

    What's wrong?
    I'm using heketi on k8s.

    AlexSyS777
    @AlexSyS777
    Just for info, the issue above is related to https://bugzilla.redhat.com/show_bug.cgi?id=1462792 and has been solved :)
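    For reference, the error above is typically cleared by letting heketi create block-hosting volumes on demand; a minimal sketch of the relevant heketi.json fragment (the size, in GB, is illustrative):

        "glusterfs": {
          "auto_create_block_hosting_volume": true,
          "block_hosting_volume_size": 500
        }

    On Kubernetes deployments the same settings are usually exposed through the HEKETI_AUTO_CREATE_BLOCK_HOSTING_VOLUME and HEKETI_BLOCK_HOSTING_VOLUME_SIZE environment variables.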
    RockychengJson
    @RockychengJson
    Hi all. Can I achieve conditional mounts using GlusterFS? Here is my situation: I have several groups, and each group has the same directory structure but different group data. Can I pass a parameter such as a group name so that GlusterFS can determine which group's files to use? For example, I have a directory named "root_dir", and inside root_dir I have three sub-directories, namely "group_one", "group_two", and "group_three". I mount root_dir into a Kubernetes pod, and the current group is group one, so I want the files in "group_one" to be available with "rwx" while the other two sub-directories are unavailable with "---".
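    Recent Gluster (3.12+) can mount a subdirectory of a volume directly, which may get close to this; a minimal sketch reusing the names from the question (the auth.allow networks are illustrative):

        # expose only one group's subtree to a given client
        mount -t glusterfs server:/root_dir/group_one /mnt/group_one
        # optionally restrict which clients may mount which subdirectory
        gluster volume set root_dir auth.allow "/group_one(10.0.0.*),/group_two(10.0.1.*)"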
    miguel de anda
    @mdeanda
    Hi all. My home Gluster setup seems to have gotten messed up again after a reboot. I have multiple bricks, all in a 2+1 replica with an arbiter. Since the reboot, running commands from the arbiter server is painfully slow and times out (peer status, volume info). When the arbiter is down, everything works normally. The interesting thing is that all three hosts also locally mount the bricks, and the data is available on all of them. I'm not sure how to troubleshoot this.
    rmartinez3
    @rmartinez3
    Hello everyone, I have a geo-replication setup running and have noticed that the gluster brick's changelogs/htime file keeps growing in size. I have been reading that it is used by geo-replication. Is there a way I can either archive or discard some of the data? I have seen the HTIME file grow to 200 MB. If there's a way I can safely shorten or archive the file, that would be helpful. Thanks
    ssuio
    @ssuio
    Branch problem: why does merging v3.7.13 into v3.7.15 produce 300+ conflicts? Is there any way to merge to v3.7.15 without conflicts?
    sdeepugd
    @sdeepugd
    Hi everyone. I am getting "Error: Request timed out" while doing a rebalance. I added new bricks to my replicated volume, i.e. first it was a 1x3 volume and I added three more bricks to make it a distributed-replicated volume (2x3). What should I do about the timeout error?
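    Worth noting: "Error: Request timed out" is often just the gluster CLI giving up (its default timeout is around 120 seconds) while the operation keeps running server-side; a minimal sketch of checking progress rather than re-issuing the command:

        # see whether the rebalance is actually still progressing
        gluster volume rebalance <volname> status
        gluster volume status <volname>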
    DamonBlais
    @DamonBlais
    Hey there! I am having zero luck building any of the branches on Ubuntu 18.04 -- anyone have ideas? Without tirpc it complains that libgfrpc has no reference to log2; with tirpc, 'release-5' fails at https://gist.github.com/DamonBlais/f1b409e83de7d7e9cba6a1d236c8f738
    DamonBlais
    @DamonBlais
    'release-6' and 'master' fail at the same spot with libtirpc-dev installed from the Ubuntu 18.04 repository. With the library removed from the system, they fail at libgfrpc instead. https://gist.github.com/DamonBlais/2b1fb66e7b9eba577c6034cef407617d
    DamonBlais
    @DamonBlais
    The script / lines used to compile (from the documentation, with a little more verbosity): https://gist.github.com/DamonBlais/bc44b719950451c3dd7235d8b795f500
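    For comparison, the canonical build steps from the docs look like this; treat the --without-libtirpc flag as an assumption for building against the glibc RPC implementation instead of tirpc:

        ./autogen.sh
        ./configure --without-libtirpc
        make -j$(nproc)
        sudo make install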
    vans163
    @vans163
    Hello guys, can anyone help me with file-snapshot?
    I tried to set features.file-snapshot on my volume using Gluster 5.3 and got:
    volume set: failed: option : features.file-snapshot does not exist
    I'm trying to accomplish file-level snapshots.
    onewings
    @onewings
    Hi guys, I have a question.
    I have a distributed-replicated 2x2 configuration. How can I add two more bricks on a new node in a way that all the bricks are well balanced across the three nodes? In my tests, the 2 bricks added (from the same node) contain the same data after a rebalance, which isn't the best for fault tolerance (if that node goes down, the data is lost).
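    For context: bricks listed consecutively on the add-brick command line form a replica set, so adding both new bricks from the same node puts both copies of that set on one machine. A minimal sketch that avoids this (brick paths are illustrative):

        # pair the new node's brick with a brick on an existing node so
        # no replica set has both of its copies on a single machine
        gluster volume add-brick myvol replica 2 node3:/data/brick3 node1:/data/brick4
        gluster volume rebalance myvol start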
    Sudheer Singh
    @sudheerit11
    Hi guys,
    Is there any Docker plugin for GlusterFS?
    sancroth
    @sancroth
    Hey guys, quick question.
    I want certain directories replicated between 2 servers. For example, I want /home/user/dir1, /home/user/dir2, and /home/user/dir3/dir1 to be common on both servers, but the naming cannot change since it's used by the existing setup.
    Question 1: Is it a killer to create and mount 3 different replica volumes, one for each?
    Question 2: Can it be done with a single volume somehow while still keeping the replication?
    My problem right now is that a single mount will keep the contents of every directory. Note that each of the given directories has many more directories at the same level, so making a brick at that level is impossible as far as I understand (it would also replicate all the other dirs at the same level).
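    One possible answer to Question 2, assuming Gluster 3.12+ subdirectory mounts: back everything with a single replica volume and mount each subtree at its fixed path. Hostnames, brick paths, and directory names inside the volume are illustrative:

        # one replica-2 volume backing all the shared directories
        gluster volume create shared replica 2 srv1:/data/shared srv2:/data/shared
        gluster volume start shared
        # (create dir1, dir2, dir3-dir1 inside the mounted volume first)
        # then mount each exported subtree where the setup expects it
        mount -t glusterfs srv1:/shared/dir1 /home/user/dir1
        mount -t glusterfs srv1:/shared/dir2 /home/user/dir2
        mount -t glusterfs srv1:/shared/dir3-dir1 /home/user/dir3/dir1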
    Gowtham Shanmugasundaram
    @GowthamShanmugam
    Hi all, in tendrl we fetch the volume profile status from the gluster get-state output using the attribute "volume{index_no}.profile_enabled", but I can see this field downstream and not upstream. Was it removed recently? I am using upstream 3.12.15.
    Junsong Li
    @lijunsong
    @GowthamShanmugam Just out of curiosity, I checked the 3.6 and 3.12 code, and I don't see the attribute "profile_enabled" used anywhere in the codebase.
    jay vyas
    @jayunit100
    Is this the new IRC? Hope so :)
    Walker
    @WalkerGriggs
    I'm in the process of deploying gluster to provide persistent storage for jupyter notebooks. I'm using heketi as a rest endpoint, which creates a block for each persistent volume claim. Is there an upper limit to the block ports (49152-), or do I need to open 800+ ports for 800+ blocks?
    sancroth
    @sancroth
    May I ask how the performance of GlusterFS is overall? I want to sync a media catalog for a store, ~120 GB, mainly images. I tried to set it up, but I don't even know if I configured the thing correctly, and the drives were not SSDs, so the performance was really bad. Also, the fs was ext3 or 4, I don't remember right now; it was on an old demo server.
    The sync should basically happen between 2 servers.
    Just trying to move a 10 GB folder in a brick to be copied to the other server took over an hour.
    monoxane
    @monoxane
    I've got a remove-brick task "pending" even though it completed successfully and the bricks are removed. This is stopping me from rebalancing, but I can't stop the task because the bricks don't exist anymore. How can I get around this?
    Andrea Posarelli
    @andreaposarelli
    Hi there... I'm a new Gluster user, looking for some advice...
    I have 3 Dell nodes with 6 SAS disks each. I set up a Proxmox cluster with one disk for Proxmox and the other 5 dedicated to Gluster.
    It seems to work well, but if I create a replicated volume I lose 2/3 of the available space.
    Is there another way to handle disk space with Gluster?
    Distributed seems not to be a good choice because if I have a failure I lose everything, right?
    How about creating a distributed Gluster volume of 3 big bricks, each built on top of RAID 5 on every node? Is that a bad idea?
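    A middle ground here is an arbiter volume: the third brick in each set stores only metadata, so the raw-space overhead is close to replica 2 while quorum behaves like replica 3. A minimal sketch (names and paths are illustrative):

        # every third brick is a metadata-only arbiter
        gluster volume create vmstore replica 3 arbiter 1 \
          node1:/data/brick1 node2:/data/brick1 node3:/data/arbiter1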
    Andrea Posarelli
    @andreaposarelli
    Or maybe it's enough to do replica 2 across the 3 nodes instead of replica 3?
    dlebee
    @dlebee
    Hello, what's better: 2 bricks + 1 arbiter, or 3 bricks? :|
    Wei Wu
    @WeiBanjo
    Anyone having issues downloading Gluster packages?
    2019-09-04 16:24:27 (890 KB/s) - Connection closed at byte 65137. Retrying.
    This is what I am getting, using: wget https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.8/Debian/9/amd64/apt/pool/main/g/glusterfs/glusterfs-client_4.1.8-1_amd64.deb
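    A possible workaround while the mirror is flaky: let wget retry and resume the partial file instead of restarting, e.g.

        wget -c --tries=20 https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.8/Debian/9/amd64/apt/pool/main/g/glusterfs/glusterfs-client_4.1.8-1_amd64.deb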
    matrixbot
    @matrixbot
    Jesse: Hi! I'm looking to move my ZFS to GlusterFS at some point. For encryption I'm thinking ecryptfs; do you guys think that is fine? My nodes will be very low-powered ARM devices, and I don't know how LUKS would perform on them, and Gluster's encryption doesn't encrypt file names, from what I understand.
    Paul
    @paulm17
    Hi all. I'm having issues connecting a macOS client to a GlusterFS volume. Is this possible? I couldn't find anything recent with Google.

    If I do:

    sudo mount -t glusterfs server:myvol1 /mnt/filestore

    I get:

    mount: exec /Library/Filesystems/glusterfs.fs/Contents/Resources/mount_glusterfs for /mnt/filestore: No such file or directory

    if I do:

    sudo mount server:myvol1 /mnt/filestore

    I get:

    mount_nfs: can't mount myvol1 from server onto /mnt/filestore: Permission denied
    I can mount it fine on a remote Linux machine.
    I'm guessing there's no Mac client?
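    There is indeed no maintained native macOS client; NFS is the usual route. A minimal sketch, assuming the volume's gNFS export is enabled; the "Permission denied" above is typically the missing reserved source port:

        # on a gluster node: enable the volume's NFS export
        gluster volume set myvol1 nfs.disable off
        # on the mac: NFSv3 with a reserved source port
        sudo mount -t nfs -o vers=3,resvport,nolock server:/myvol1 /mnt/filestore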
    Johannes Schüth
    @Jotschi
    Hi, I'm just getting started with Gluster and testing it. I regularly lose a brick daemon on my host. In the brick logs I see aio_read_error() on /gluster/G2/data/.glusterfs/health_check, and after that the glusterfsd is killed with signum 15. Is this expected behavior? I have no idea what causes the aio_read_error or why the daemon is not restarted.
    I'm using glusterfs 7.2-1
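    For reference, those messages come from the posix health check, which kills the brick process when reads on the backing store fail, so the brick's underlying device is worth checking first. The check interval is tunable (in seconds; 0 disables it, which only masks the symptom); volume name is illustrative:

        gluster volume set myvol storage.health-check-interval 60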
    chrisbecke
    @chrisbecke
    literally dead.