    sdeepugd
    @sdeepugd
    Hi everyone. I am getting "Error: Request timed out" while doing a rebalance. I have added new bricks to my replicated volume, i.e. first it was a 1x3 volume and I added three more bricks to make it a distributed-replicated volume (2x3). What should I do about the timeout error?
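    A hedged sketch of how to check on this (the volume name "myvol" is an assumption): a CLI "Request timed out" often only means the command outlived the CLI timeout, not that the rebalance failed, so polling its real state is the usual first step.
    ```
    # Check whether the rebalance is actually running/completed despite the CLI timeout
    gluster volume rebalance myvol status

    # If it never started, kick it off again explicitly
    gluster volume rebalance myvol start
    ```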
    DamonBlais
    @DamonBlais
    Hey there! I am having zero luck building any of the branches on Ubuntu 18.04 -- anyone have ideas? Without tirpc it complains that libgfrpc has no reference to log2; with tirpc, 'release-5' fails at https://gist.github.com/DamonBlais/f1b409e83de7d7e9cba6a1d236c8f738
    DamonBlais
    @DamonBlais
    'release-6' and 'master' fail at the same spot with libtirpc-dev installed from the Ubuntu 18.04 repository. With the library removed from the system, they fail at libgfrpc instead. https://gist.github.com/DamonBlais/2b1fb66e7b9eba577c6034cef407617d
    DamonBlais
    @DamonBlais
    script / lines used to compile (from the documentation, with a little more verbosity) https://gist.github.com/DamonBlais/bc44b719950451c3dd7235d8b795f500
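    For context, a sketch of the usual source-build sequence from the GlusterFS docs (the Ubuntu 18.04 package list here is an assumption; adjust to the branch being built):
    ```
    # Assumed build dependencies for Ubuntu 18.04, including libtirpc-dev
    sudo apt-get install build-essential autoconf automake libtool flex bison \
        pkg-config libssl-dev libxml2-dev libaio-dev libacl1-dev liburcu-dev \
        libsqlite3-dev libtirpc-dev

    # Standard autotools build from a source checkout
    ./autogen.sh
    ./configure
    make -j"$(nproc)"
    sudo make install
    ```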
    vans163
    @vans163
    Hello guys, can anyone help me with file-snapshot?
    I tried to set features.file-snapshot on my volume using gluster 5.3:
    volume set: failed: option : features.file-snapshot does not exist
    I'm trying to accomplish file-level snapshots.
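    Presumably the command tried was something like the following (volume name is an assumption); listing the options the installed release actually knows about can confirm whether the feature still exists in 5.x:
    ```
    # The set command that produces "option : features.file-snapshot does not exist"
    gluster volume set myvol features.file-snapshot on

    # See which snapshot-related options this build actually supports
    gluster volume get myvol all | grep -i snapshot
    ```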
    onewings
    @onewings
    Hi guys, I have a question.
    I have a distributed-replicated 2x2 configuration. How can I add two more bricks on a new node in such a way that all the bricks are well balanced across the three nodes? In my tests, the two bricks added (from the same node) contain the same data after a rebalance, which isn't the best for fault tolerance (if that node goes down, the data is lost).
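    A sketch of what is likely happening (all names here are assumptions): in a replica-2 volume, consecutive bricks in the brick list form a replica pair, so two bricks added from the same node end up replicating each other.
    ```
    # Adding both new bricks from the same node pairs them together
    # (consecutive bricks form a replica pair) -- a single point of failure:
    #   gluster volume add-brick myvol node3:/bricks/b5 node3:/bricks/b6
    #
    # One possible approach instead (a sketch, not the only option): first move
    # an existing brick to the new node, then add the freed path plus a new brick
    gluster volume replace-brick myvol node1:/bricks/b2 node3:/bricks/b5 commit force
    gluster volume add-brick myvol node1:/bricks/b2new node3:/bricks/b6
    gluster volume rebalance myvol start
    ```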
    Sudheer Singh
    @sudheerit11
    Hi Guys,
    Is there a Docker plugin for GlusterFS?
    sancroth
    @sancroth
    Hey guys, quick question.
    I want certain directories replicated between 2 servers. For example, I want /home/user/dir1, /home/user/dir2 and /home/user/dir3/dir1 to be common on both servers. But the naming cannot change, since it's already used by the existing setup.
    Question 1: Is it a killer to create and mount 3 different replica volumes, one for each?
    Question 2: Can it be done with a single volume somehow and still keep the replication?
    My problem right now is that a single mount would hold the contents of every directory. Note that each directory given has many more sibling directories at the same level, so making that level a brick is impossible as far as I understand (it would also replicate all the other dirs at the same level).
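    For question 1, a minimal sketch (volume names and brick paths are assumptions): one small replica volume per directory, each mounted at its fixed path, keeps the existing names without touching the sibling directories.
    ```
    # One replica-2 volume per shared directory
    gluster volume create dir1 replica 2 srv1:/bricks/dir1 srv2:/bricks/dir1
    gluster volume start dir1
    mount -t glusterfs srv1:/dir1 /home/user/dir1
    # ...repeat for /home/user/dir2 and /home/user/dir3/dir1
    ```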
    Gowtham Shanmugasundaram
    @GowthamShanmugam
    Hi all, in Tendrl we fetch the volume profile status from the gluster get-state output using the attribute "volume{index_no}.profile_enabled", but I can see this field downstream and not upstream. Was it removed recently? I am using upstream 3.12.15.
    Junsong Li
    @lijunsong
    @GowthamShanmugam Just out of curiosity, I checked the 3.6 and 3.12 code; I don't see the attribute "profile_enabled" used anywhere in the codebase.
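    One way to check what a given build actually emits (output paths are assumptions):
    ```
    # Dump glusterd state to a file and search it for the attribute
    gluster get-state glusterd odir /tmp file gstate.txt
    grep -n 'profile_enabled' /tmp/gstate.txt
    ```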
    jay vyas
    @jayunit100
    Is this the new IRC? Hope so :)
    Walker
    @WalkerGriggs
    I'm in the process of deploying Gluster to provide persistent storage for Jupyter notebooks. I'm using Heketi as a REST endpoint, which creates a block for each persistent volume claim. Is there an upper limit to the block ports (49152-), or do I need to open 800+ ports for 800+ blocks?
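    If opening a range is unavoidable, a firewalld sketch (the exact range is an assumption and depends on how many bricks/blocks each node hosts):
    ```
    # Open a contiguous brick/block port range on each storage node
    firewall-cmd --permanent --add-port=49152-49999/tcp
    firewall-cmd --reload
    ```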
    sancroth
    @sancroth
    May I ask how the performance of GlusterFS is overall? I want to sync a store's media catalog, ~120 GB, mainly images. I tried to set it up, but I don't even know if I configured the thing correctly, and the drives were not SSDs, so the performance was really bad. Also, the fs was ext3 or 4, I don't remember right now; it was on an old demo server.
    The sync should basically happen between 2 servers.
    Just trying to move a 10 GB folder in a brick to be copied to the other server took over an hour.
    monoxane
    @monoxane
    I've got a remove-brick task "pending" even though it was successful and the bricks are removed. This is stopping me from rebalancing, but I can't stop the task because the bricks don't exist anymore. How can I get around this?
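    A hedged sketch of how such a task is normally finalized (volume and brick names are assumptions; whether commit succeeds against already-deleted bricks is untested):
    ```
    # See what glusterd thinks the task state is
    gluster volume remove-brick myvol node1:/bricks/b3 status

    # A completed remove-brick usually has to be committed before
    # other operations such as rebalance are allowed again
    gluster volume remove-brick myvol node1:/bricks/b3 commit
    ```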
    Andrea Posarelli
    @andreaposarelli
    Hi there... I'm a new gluster user, looking for some advice...
    I have 3 Dell nodes with 6 SAS disks each. I set up a Proxmox cluster with one disk for Proxmox and the other 5 dedicated to Gluster.
    It seems to work well, but if I create a replicated volume I lose 2/3 of the available space.
    Is there another way to handle disk space with gluster?
    Distributed seems not to be a good choice, because if I have a failure I lose everything, right?
    How about creating a gluster distributed volume of 3 big bricks, each built on top of RAID 5 on every node? Is that a bad idea?
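    One space-efficient alternative worth naming here is a dispersed (erasure-coded) volume; a minimal sketch, with the names and the 2+1 layout as assumptions:
    ```
    # disperse 3 / redundancy 1: usable capacity is 2/3 of raw,
    # versus 1/3 with replica 3, and any one node can still fail
    gluster volume create edvol disperse 3 redundancy 1 \
        node1:/bricks/b1 node2:/bricks/b2 node3:/bricks/b3
    gluster volume start edvol
    ```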
    Andrea Posarelli
    @andreaposarelli
    Or maybe it's enough to do replica 2 across the 3 nodes instead of replica 3?
    dlebee
    @dlebee
    Hello, what's better: 2 bricks + 1 arbiter, or 3 full bricks? :|
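    For reference, a sketch of the arbiter variant (names are assumptions): the arbiter brick stores metadata only, so it prevents replica-2 split-brain without the storage cost of a third full copy.
    ```
    # replica 3 arbiter 1: the third brick holds metadata, not file data
    gluster volume create myvol replica 3 arbiter 1 \
        srv1:/bricks/b1 srv2:/bricks/b2 srv3:/bricks/arbiter
    ```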
    Wei Wu
    @WeiBanjo
    Anyone having issues downloading Gluster packages?
    2019-09-04 16:24:27 (890 KB/s) - Connection closed at byte 65137. Retrying.
    This is what I am getting, using wget https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.8/Debian/9/amd64/apt/pool/main/g/glusterfs/glusterfs-client_4.1.8-1_amd64.deb
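    A possible workaround (an assumption, not a confirmed fix): let wget resume the partial file instead of restarting from byte 0.
    ```
    # -c continues a partially downloaded file across retries
    wget -c https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.8/Debian/9/amd64/apt/pool/main/g/glusterfs/glusterfs-client_4.1.8-1_amd64.deb
    ```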
    matrixbot
    @matrixbot
    Jesse: Hi! I'm looking to move my ZFS to GlusterFS at some point. For encryption I'm thinking eCryptfs; do you guys think that is fine? My nodes will be very low-powered ARM devices, and I don't know how LUKS would perform on them, and Gluster's encryption doesn't encrypt file names, from what I understand.
    Paul
    @paulm17
    Hi all. I'm having issues connecting a Mac OS X client to a glusterfs drive. Is this possible? I couldn't find anything recent on Google.

    If I do:

    sudo mount -t glusterfs server:myvol1 /mnt/filestore

    I get:

    mount: exec /Library/Filesystems/glusterfs.fs/Contents/Resources/mount_glusterfs for /mnt/filestore: No such file or directory

    if I do:

    sudo mount server:myvol1 /mnt/filestore

    I get:

    mount_nfs: can't mount myvol1 from server onto /mnt/filestore: Permission denied
    I can mount it fine on a remote Linux machine.
    I'm guessing there's no Mac client?
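    A possible fallback, sketched under assumptions (the volume's NFS export is enabled server-side, and macOS mounts it over NFSv3):
    ```
    # Mount over NFS instead of the native FUSE client; resvport is often
    # needed because many NFS servers require a reserved source port
    sudo mount -t nfs -o vers=3,nolocks,resvport server:/myvol1 /mnt/filestore
    ```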
    Johannes Schüth
    @Jotschi
    Hi, I'm just getting started with gluster and testing it. I regularly lose a brick daemon on my host. In the brick logs I see aio_read_error() on /gluster/G2/data/.glusterfs/health_check returned; after that, the glusterfsd is killed with signum 15. Is this expected behavior? I have no idea what causes the aio_read_error or why the daemon is not restarted.
    I'm using glusterfs 7.2-1
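    That path suggests the brick's periodic health check is failing and the brick is being shut down deliberately. A sketch of the knob involved (volume name is an assumption; disabling the check only masks the underlying I/O error):
    ```
    # Show the current health-check interval (seconds; 0 disables the check)
    gluster volume get myvol storage.health-check-interval

    # Debugging only: stop the health check from killing the brick while
    # the underlying aio read failure on the disk is investigated
    gluster volume set myvol storage.health-check-interval 0
    ```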
    chrisbecke
    @chrisbecke
    literally dead.
    soonnyblack
    @soonnyblack
    Hi everyone, can someone help me fix this error: read failed on gfid:.....Bad file descriptor
    Fanyuanli
    @Fanyuanli
    Hello everyone. I ported the latest version of glusterfs 9dev to the 32-bit ARM platform. I created a disperse volume on six nodes with two redundant nodes, then mounted it on a directory. I used the tool fio to compare the read/write rate of a single disk with that of the glusterfs volume. I thought the read/write rate of a dispersed volume should be about four times that of a single disk. As it turns out, the read/write rate of the dispersed volume is not even as high as that of a single disk. I can see that when the disks are written, they are written concurrently, not sequentially. Why is that?
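    A sketch of the kind of fio comparison described (parameters are assumptions): on a 4+2 disperse volume every write is chunked and encoded across all six bricks, and that encoding plus the network round trips usually keeps a single client well below 4x single-disk throughput.
    ```
    # Baseline: sequential write directly on one brick disk
    fio --name=disk --rw=write --bs=1M --size=2G --directory=/bricks/b1 --direct=1

    # Same job against the FUSE mount of the disperse volume
    fio --name=vol --rw=write --bs=1M --size=2G --directory=/mnt/edvol --direct=1
    ```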
    Ghost
    @ghost~5ea0f34ed73408ce4fe165e3

    Hi, can GlusterFS be used to "bind" multiple block storages into one mount point?
    Say I have:

    /mnt/BlockStorage1
    /mnt/BlockStorage2
    /mnt/BlockStorage3
    /mnt/BlockStorage4

    Can it work as one mount?
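    That is essentially what a plain distributed volume does; a minimal sketch (the hostname and the use of a subdirectory on each mount as the brick are assumptions):
    ```
    # One brick per block storage; a distributed volume aggregates their capacity
    gluster volume create bigvol \
        srv1:/mnt/BlockStorage1/brick srv1:/mnt/BlockStorage2/brick \
        srv1:/mnt/BlockStorage3/brick srv1:/mnt/BlockStorage4/brick
    gluster volume start bigvol
    mount -t glusterfs srv1:/bigvol /mnt/all
    # Note: no redundancy -- losing one block storage loses the files placed on it
    ```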

    salamani
    @salamani

    I built GlusterFS from source. When I ran the whole test suite using ./run-tests.sh, some tests failed. I tried to recheck a failed one using prove -vf <failed test>, and the test appears to hang. Any input on this would be useful.
    Example: prove -vf ./tests/bugs/snapshot/bug-1166197.t
    ```
    ./tests/bugs/snapshot/bug-1166197.t ..
    1..27
    lvremove VG|LV|Tag|Select ...

    vgremove VG|Tag|Select ...

    ok 1 [ 3114/ 40] < 13> 'verify_lvm_version'
    ok 2 [ 15/ 2183] < 14> 'glusterd'
    ok 3 [ 17/ 16] < 15> 'pidof glusterd'
    ok 4 [ 36/ 2440] < 17> 'setup_lvm 1'
    ok 5 [ 21/ 185] < 19> 'gluster --mode=script --wignore volume create patchy petard1.fyre.ibm.com:/d/backends/patchy_snap_mnt'
    ok 6 [ 15/ 247] < 20> 'gluster --mode=script --wignore volume set patchy nfs.disable false'
    ok 7 [ 24/ 1517] < 21> 'gluster --mode=script --wignore volume start patchy'
    ok 8 [ 15/ 164] < 22> 'gluster --mode=script --wignore snapshot config activate-on-create enable'
    ok 9 [ 26/ 2203] < 23> 'gluster --mode=script --wignore volume set patchy features.uss enable'
    ok 10 [ 61/ 105] < 25> 'Started volinfo_field patchy Status'
    ok 11 [ 20/ 40] < 26> '1 is_nfs_export_available'
    ok 12 [ 31/ 41] < 27> 'mount_nfs petard1.fyre.ibm.com:/patchy /mnt/nfs/0 nolock'
    ok 13 [ 54/ 16] < 28> 'mkdir /mnt/nfs/0/testdir'
    ok 14 [ 22/ 1694] < 30> 'gluster --mode=script --wignore snapshot create snap1 patchy no-timestamp'
    ok 15 [ 22/ 1803] < 31> 'gluster --mode=script --wignore snapshot create snap2 patchy no-timestamp'
    ok 16 [ 26/ 25] < 33> '0 STAT /mnt/nfs/0/testdir/.snaps'
    ok 17 [ 18/ 2] < 35> 'cd /mnt/nfs/0/testdir'
    ok 18 [ 46/ 6] < 36> 'cd .snaps'
    ```
    The test is not getting past test 18 now; any input on this?