    sancroth
    @sancroth
    The sync should basically happen between 2 servers
Just trying to move a 10 GB folder in a brick to be copied to the other server took over an hour
    monoxane
    @monoxane
I've got a remove-brick task "pending" even though it was successful and the bricks are removed. This is stopping me from rebalancing, but I can't stop the task because the bricks don't exist anymore. How can I get around this?
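For reference, a remove-brick task is normally inspected and finished with commands like the following; the volume and brick names here are placeholders:

```
# Check the state of the remove-brick task
gluster volume remove-brick myvol server1:/bricks/brick1 status

# If it completed, commit it to clear the task...
gluster volume remove-brick myvol server1:/bricks/brick1 commit

# ...or stop it to abort the migration
gluster volume remove-brick myvol server1:/bricks/brick1 stop
```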
    Andrea Posarelli
    @andreaposarelli
Hi there... I'm a new Gluster user, looking for some advice...
I have 3 Dell nodes with 6 SAS disks each. I set up a Proxmox cluster with one disk for Proxmox and the other 5 dedicated to Gluster.
It seems to work well, but if I create a replicated volume I lose 2/3 of the available space.
Is there another way to handle disk space with Gluster?
Distributed doesn't seem to be a good choice because if I have a failure I lose everything, right?
How about creating a Gluster distributed volume of 3 big bricks built on top of RAID5 on every node? Is that a bad idea?
    Andrea Posarelli
    @andreaposarelli
Or maybe it's enough to do replica 2 across the 3 nodes instead of replica 3?
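A dispersed (erasure-coded) volume is the usual middle ground between full replication and plain distribution: it survives brick failures while keeping more of the raw space usable. A minimal sketch for three nodes, assuming one brick per node (host and path names are placeholders):

```
# 3 bricks, tolerating the loss of any 1: roughly 2/3 of raw space stays usable
gluster volume create myvol disperse 3 redundancy 1 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1
gluster volume start myvol
```

As a side note, plain replica 2 is generally discouraged because of split-brain risk; replica 3, or replica 2 plus an arbiter, is the usual recommendation.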
    dlebee
    @dlebee
Hello, what's better: 2 bricks + 1 arbiter, or 3 bricks? :|
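For comparison, the two layouts are created like this; the arbiter brick stores only file metadata, so it needs far less space than a full third copy (host and path names are placeholders):

```
# Three full copies of the data
gluster volume create vol1 replica 3 \
    s1:/bricks/b1 s2:/bricks/b1 s3:/bricks/b1

# Two full copies plus a metadata-only arbiter brick for quorum
gluster volume create vol2 replica 3 arbiter 1 \
    s1:/bricks/b1 s2:/bricks/b1 s3:/arbiter/b1
```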
    Wei Wu
    @WeiBanjo
Anyone having issues downloading Gluster packages?
    2019-09-04 16:24:27 (890 KB/s) - Connection closed at byte 65137. Retrying.
    This is what I am getting. Using wget https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.8/Debian/9/amd64/apt/pool/main/g/glusterfs/glusterfs-client_4.1.8-1_amd64.deb
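When a download keeps dying mid-transfer like this, wget can resume from the partial file rather than starting over; a minimal sketch using the same URL:

```
# -c resumes a partial download; --tries raises the retry limit
wget -c --tries=20 \
    https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.8/Debian/9/amd64/apt/pool/main/g/glusterfs/glusterfs-client_4.1.8-1_amd64.deb
```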
    matrixbot
    @matrixbot
Jesse Hi! I'm looking to move my ZFS to GlusterFS at some point. For encryption I'm thinking ecryptfs; do you guys think that is fine? My nodes will be very low-powered ARM devices, and I don't know how LUKS would perform on them; Gluster's encryption doesn't encrypt file names, from what I understand.
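One way to settle the LUKS question empirically on those ARM boards is cryptsetup's built-in cipher benchmark, which measures raw encryption throughput on the actual CPU with no disk I/O involved; a minimal sketch:

```
# Benchmark the common ciphers in memory
cryptsetup benchmark

# Or a specific cipher and key size, e.g. AES-XTS
cryptsetup benchmark --cipher aes-xts-plain64 --key-size 512
```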
    Paul
    @paulm17
Hi all. I'm having issues connecting a Mac OS X client to a GlusterFS volume. Is this possible? I couldn't find anything recent with Google.

    If I do:

    sudo mount -t glusterfs server:myvol1 /mnt/filestore

    I get:

    mount: exec /Library/Filesystems/glusterfs.fs/Contents/Resources/mount_glusterfs for /mnt/filestore: No such file or directory

    if I do:

    sudo mount server:myvol1 /mnt/filestore

    I get:

    mount_nfs: can't mount myvol1 from server onto /mnt/filestore: Permission denied
I can mount it fine on a remote Linux machine
I'm guessing there's no Mac client?
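As far as I know there is indeed no current native macOS client, so NFS is the usual route. The "Permission denied" above typically comes from macOS mounting from a non-reserved source port; a sketch, assuming NFS access is enabled for the volume on the server side:

```
# resvport forces a reserved source port, which NFS servers usually require;
# vers=3 and nolock match Gluster's built-in NFS server
sudo mount -t nfs -o vers=3,nolock,resvport server:/myvol1 /mnt/filestore
```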
    Johannes Schüth
    @Jotschi
Hi, I'm just getting started with Gluster and testing it. I regularly lose a brick daemon on my host. In the brick logs I see that aio_read_error() on /gluster/G2/data/.glusterfs/health_check returned; after that, the glusterfsd is killed with signum 15. Is this expected behavior? I have no idea what causes the aio_read_error or why the daemon is not restarted.
    I'm using glusterfs 7.2-1
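The kill with signum 15 appears to be the posix health checker deliberately taking the brick down after the read on the health_check file failed, which usually points at trouble in the underlying filesystem or disk. The check interval is a volume option; a sketch, with the volume name as a placeholder:

```
# Interval in seconds between health checks; 0 disables the check
# (useful only for debugging, not as a real fix)
gluster volume set myvol storage.health-check-interval 30
```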
    chrisbecke
    @chrisbecke
    literally dead.
    soonnyblack
    @soonnyblack
Hi everyone, can someone help me fix this error: read failed on gfid:.....Bad file descriptor
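A first step with gfid-level read errors is usually to check whether the affected files still need healing; a sketch, with the volume name as a placeholder:

```
# List entries pending heal, and any in split-brain
gluster volume heal myvol info
gluster volume heal myvol info split-brain
```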
    Fanyuanli
    @Fanyuanli
Hello everyone. I ported the latest version of GlusterFS 9dev to the ARM 32-bit platform. I created a disperse volume on six nodes with two redundant nodes, then mounted it to a directory. I used the tool fio to compare the read/write rate of a single disk with that of a GlusterFS volume. I thought the read/write rate of the dispersed volume should be about four times that of a single disk. As it turns out, the read/write rate of the dispersed volume is not even as high as that of a single disk. I can see that when I write, the disks are written concurrently, not sequentially. Why is that?
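One thing worth ruling out on the measurement side is single-threaded, small-block I/O, which is where erasure-coding overhead hurts most: every write has to be chunked, encoded, and sent to all six bricks by the client. A hedged fio sketch with parallel jobs and large blocks (the mount path is a placeholder):

```
# Large sequential writes across several jobs; disperse volumes need
# concurrency and big block sizes to amortize the encoding cost
fio --name=ecwrite --directory=/mnt/glustervol --rw=write \
    --bs=1M --size=1G --numjobs=4 --group_reporting
```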
    Ghost
    @ghost~5ea0f34ed73408ce4fe165e3

    Hi, can GlusterFS be used to "bind" multiple block storages into one mount point?
    Say I have:

    /mnt/BlockStorage1
    /mnt/BlockStorage2
    /mnt/BlockStorage3
    /mnt/BlockStorage4

    Can it work as one mount?
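That is essentially what a plain distributed volume does: files are spread across the bricks and exposed as a single namespace. A sketch using the paths above, assuming a single node and a brick subdirectory on each block storage (names are placeholders):

```
# Use a subdirectory on each mount as the brick, not the mount point itself
gluster volume create bigvol \
    node1:/mnt/BlockStorage1/brick node1:/mnt/BlockStorage2/brick \
    node1:/mnt/BlockStorage3/brick node1:/mnt/BlockStorage4/brick
gluster volume start bigvol
mount -t glusterfs node1:/bigvol /mnt/big
```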

    salamani
    @salamani

I built GlusterFS from source. When I ran the whole test suite using ./run-tests.sh, some tests failed. I tried to re-run a failed test using prove -vf <failed test>, but the tests hang. Any input on this would be useful.
    Example: prove -vf ./tests/bugs/snapshot/bug-1166197.t
    ```
    ./tests/bugs/snapshot/bug-1166197.t ..
    1..27
    lvremove VG|LV|Tag|Select ...

    vgremove VG|Tag|Select ...

    ok 1 [ 3114/ 40] < 13> 'verify_lvm_version'
    ok 2 [ 15/ 2183] < 14> 'glusterd'
    ok 3 [ 17/ 16] < 15> 'pidof glusterd'
    ok 4 [ 36/ 2440] < 17> 'setup_lvm 1'
    ok 5 [ 21/ 185] < 19> 'gluster --mode=script --wignore volume create patchy petard1.fyre.ibm.com:/d/backends/patchy_snap_mnt'
    ok 6 [ 15/ 247] < 20> 'gluster --mode=script --wignore volume set patchy nfs.disable false'
    ok 7 [ 24/ 1517] < 21> 'gluster --mode=script --wignore volume start patchy'
    ok 8 [ 15/ 164] < 22> 'gluster --mode=script --wignore snapshot config activate-on-create enable'
    ok 9 [ 26/ 2203] < 23> 'gluster --mode=script --wignore volume set patchy features.uss enable'
    ok 10 [ 61/ 105] < 25> 'Started volinfo_field patchy Status'
    ok 11 [ 20/ 40] < 26> '1 is_nfs_export_available'
    ok 12 [ 31/ 41] < 27> 'mount_nfs petard1.fyre.ibm.com:/patchy /mnt/nfs/0 nolock'
    ok 13 [ 54/ 16] < 28> 'mkdir /mnt/nfs/0/testdir'
    ok 14 [ 22/ 1694] < 30> 'gluster --mode=script --wignore snapshot create snap1 patchy no-timestamp'
    ok 15 [ 22/ 1803] < 31> 'gluster --mode=script --wignore snapshot create snap2 patchy no-timestamp'
    ok 16 [ 26/ 25] < 33> '0 STAT /mnt/nfs/0/testdir/.snaps'
    ok 17 [ 18/ 2] < 35> 'cd /mnt/nfs/0/testdir'
    ok 18 [ 46/ 6] < 36> 'cd .snaps'
```
The test doesn't get past test 18 now; any input on this?