    Goktug Yildirim
    @gkyildirim

    Here is my leo_storage config.

    # ======================================================================
    #  LeoFS - Storage Configuration
    #
    #  See: http://leo-project.net/leofs/docs/configuration/configuration_2.html
    # ======================================================================

    # --------------------------------------------------------------------
    #  SASL
    # --------------------------------------------------------------------
    #  See: http://www.erlang.org/doc/man/sasl_app.html
    #
    #  The following configuration parameters are defined for
    #  the SASL application. See app(4) for more information
    #  about configuration parameters

    # SASL error log path
    sasl.sasl_error_log = /var/log/leo_storage/sasl/sasl-error.log

    # Restricts the error logging performed by the specified sasl_error_logger
    # to error reports, progress reports, or both.
    # errlog_type = [error | progress | all]
    sasl.errlog_type = error

    # Specifies in which directory the files are stored.
    # If this parameter is undefined or false, the error_logger_mf_h is not installed.
    sasl.error_logger_mf_dir = /var/log/leo_storage/sasl

    # Specifies how large each individual file can be.
    # If this parameter is undefined, the error_logger_mf_h is not installed.
    sasl.error_logger_mf_maxbytes = 10485760

    # Specifies how many files are used.
    # If this parameter is undefined, the error_logger_mf_h is not installed.
    sasl.error_logger_mf_maxfiles = 5

    # --------------------------------------------------------------------
    #  Manager's Node(s)
    # --------------------------------------------------------------------
    # Name of Manager node(s)
    managers = [manager0@127.0.0.1, manager1@127.0.0.1]

    # --------------------------------------------------------------------
    #  STORAGE
    # --------------------------------------------------------------------
    # Object container
    obj_containers.path = [/var/db/leo_storage/avs]
    obj_containers.num_of_containers = [8]

    # e.g. Case of plural paths
    # obj_containers.path = [/var/leofs/avs/1, /var/leofs/avs/2]
    # obj_containers.num_of_containers = [32, 64]

    # Metadata Storage: [bitcask, leveldb] - default:leveldb
    obj_containers.metadata_storage = leveldb

    # A number of virtual-nodes for the redundant-manager
    num_of_vnodes = 168

    # Enable strict check between checksum of a metadata and checksum of an object
    # - default:false
    object_storage.is_strict_check = false

    # Threshold of slow processing (msec) - default:1000(msec)
    object_storage.threshold_of_slow_processing = 1000

    # Timeout of seeking metadatas per a metadata - default:10(msec)
    seeking_timeout_per_metadata = 10

    # Maximum number of processes for both write and read operation
    # since v1.2.20
    max_num_of_procs = 3000

    # Total number of obj-storage-read processes per object-container, AVS
    # Range: [1..100]
    # since v1.2.20
    num_of_obj_storage_read_procs = 3

    # --------------------------------------------------------------------
    #  STORAGE - Watchdog
    # --------------------------------------------------------------------
    #
    #  Watchdog.REX(RPC)
    #
    # Is rex-watchdog enabled - default:false
    watchdog.rex.is_enabled = true

    # rex - watch interval - default:5sec
    watchdog.rex.interval = 10

    # Threshold memory capacity of binary for rex(rpc) - default:32MB
    watchdog.rex.threshold_mem_capacity = 33554432

    #
    #  Watchdog.CPU
    #
    # Is cpu-watchdog enabled - default:false
    watchdog.cpu.is_enabled = true

    # cpu - raised error times
    watchdog.cpu.raised_error_times = 5

    # cpu - watch interval - default:5sec
    watchdog.cpu.interval = 10

    # Threshold CPU load avg for 1min/5min - default:5.0
    watchdog.cpu.threshold_cpu_load_avg = 5.0

    # Threshold CPU load util - default:100 = "100%"
    watchdog.cpu.threshold_cpu_util = 100

    #
    #  Watchdog.IO
    #
    # Is io-watchdog enabled - default:false
    watchdog.io.is_enabled = true

    # io - watch interval - default:1sec
    watchdog.io.interval = 1

    # Threshold input size/sec - default:134217728(B) - 128MB/sec
    watchdog.io.threshold_input_per_sec = 134217728

    # Threshold output size/sec - default:134217728(B) - 128MB/sec

    Sorry for the ugly paste. Here is the proper one: https://gist.github.com/gkyildirim/ee87c75358c962867dfeceaf53032eb3
    Yosuke Hara
    @yosukehara

    @gkyildirim here is a diff file: https://gist.github.com/yosukehara/cbd0c785643f563c5017fbf63791306c
    I’ve noticed that watchdog.io was removed with 1.2.2x - https://gist.github.com/yosukehara/cbd0c785643f563c5017fbf63791306c#file-leo_storage-conf-diff-L28-L42

    If you can find an error log in erlang.log.* or app/error.20160*.*.*, please let me know.
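
    For reference, those logs live under leo_storage's log directory; a quick way to scan them is below (the directory path is a placeholder, it depends on how LeoFS was installed):

    $ cd /path/to/leo_storage/log          # placeholder - depends on your install
    $ tail -n 50 erlang.log.1              # Erlang console logs: erlang.log.*
    $ ls app/ && tail -n 50 app/error.*    # application error logs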

    Goktug Yildirim
    @gkyildirim
    @yosukehara thanks for your findings. I’ve commented out the watchdog.io lines, but the config error is still there.
    Here is the output of the requested log files (storage): https://gist.github.com/gkyildirim/bda64be806fe20664de6ae298a725178
    Yosuke Hara
    @yosukehara
    @gkyildirim Thanks for your reply. I have some questions as below:
    1. Which version of LeoFS do you use?
    2. Could you share leo_manager's configuration file?
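    (For reference, both of those are easy to pull from the manager node; a sketch, assuming leofs-adm is on the PATH:)

    $ leofs-adm version     # version of leo_manager
    $ leofs-adm status      # system configuration plus the state of each storage/gateway node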
    Yosuke Hara
    @yosukehara
    leofs-v2-future-plan-public.20161020.001.jpeg
    We've been working on LeoFS' future plan and recently reached a consensus on it, shown in the attached file.
    In order to include your ideas in the plan, we'd like to hear about LeoFS issues, suggestions and requests from you.
    If you have any ideas, please don't hesitate to contact me/us.
    Thanks,
    Yosuke Hara
    @yosukehara

    We found a serious issue, which occurs from the combination of large-object handling and data-compaction.
    There is a possibility of data loss when executing data-compaction if LeoFS' Gateway (leo_gateway) is configured as below:

    "large_object.reading_chunked_obj_len" > "large_object.chunked_obj_len"

    We already filed this issue on GitHub, leo-project/leofs#531

    If you set up your LeoFS' Gateway with "large_object.reading_chunked_obj_len" > "large_object.chunked_obj_len", please let us know.
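
    In other words, the affected case is a leo_gateway.conf where the read chunk size exceeds the write chunk size. A sketch of what to look for (the byte values below are only illustrative; what matters is the inequality):

    # leo_gateway.conf (excerpt)
    large_object.chunked_obj_len         = 5242880     # write-side chunk size
    large_object.reading_chunked_obj_len = 10485760    # larger than chunked_obj_len => affected by leo-project/leofs#531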

    Yosuke Hara
    @yosukehara
    LeoFS v1.3.1 has been released: https://github.com/leo-project/leofs/releases/tag/1.3.1
    Fixed bugs in data-compaction and read-repair, and improved large-object handling
    Tony Pourchier
    @tonyp13
    Hi, I made a little piece of software 5 years ago called dragondisk; I know some people used it to access LeoFS. This S3 client is obsolete now, but I still have tens of thousands of users. I am making a new version; it should be ready in March...
    I am interested in leofs. I think the project is not well known, and I would like to add something on my website or in the software to help. If you are OK with that, and if you have any suggestions, don't hesitate to tell me (before the next release). Thank you.
    Yosuke Hara
    @yosukehara
    It's OK for us. We're planning to improve LeoFS' documentation starting this month. If you run into issues with it, let me know. Thanks.
    Timofey Isakov
    @tisOO
    Hello,
    I've installed leofs 1.3.1 following "Quick Start: Building a cluster". I tried to use s3cmd with signature v4 support, but I got "403 Access Denied". When I enable signature_v2, it works. But your site says that leofs has supported "AWS Signature v4" since 1.3.0. Maybe I need to change something in the configuration?
    Yosuke Hara
    @yosukehara

    Hi, in my .s3cfg, I added a configuration as below:

    signature_v2 = True

    I'm going to check AWS signature v4 support of s3cmd.

    Thanks
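
    For reference, the v2 workaround can be applied either in ~/.s3cfg or per invocation; recent s3cmd releases also accept a --signature-v2 flag:

    $ grep signature_v2 ~/.s3cfg
    signature_v2 = True

    # or, without editing .s3cfg:
    $ s3cmd --signature-v2 ls s3://test/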

    Yosuke Hara
    @yosukehara
    @tisOO I’ve checked that, and I could not reproduce the same situation using s3cmd v1.6.1, as below:
    $ s3cmd --version
    s3cmd version 1.6.1
    
    $ grep signature_v2 ~/.s3cfg
    signature_v2 = False
    
    $ s3cmd mb s3://test
    Bucket 's3://test/' created
    
    $ s3cmd sync deps/leo_commons s3://test/
    upload: 'deps/leo_commons/.git/HEAD' -> 's3://test/leo_commons/.git/HEAD'  [1 of 62]
     41 of 41   100% in    0s     5.11 kB/s  done
    upload: 'deps/leo_commons/.git/config' -> 's3://test/leo_commons/.git/config'  [2 of 62]
     273 of 273   100% in    0s    98.56 kB/s  done
    ...
    Michał Buczko
    @mbuczko
    hi, I have just installed leofs (compiled directly from the master branch) and tried out its S3 capabilities with Transmit on my Mac. With 5 jpeg files (each around 2MB), almost every upload ends up with the error {reason,{case_clause,{error,timeout}}} on leo_gateway. The 3rd upload stops completely with the Transmit message: "Server said: Could not send request body: connection was closed by server". Here are more detailed logs from the gateway: https://gist.github.com/mbuczko/41bb73768d201442c49e6fd8edaa7314. Could you please give me a hint about what's wrong here? Maybe I haven't configured timeouts or something?
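    (If it does turn out to be timeout-related, the gateway-side knobs would presumably be the storage-request timeouts in leo_gateway.conf; this is a sketch under that assumption, with the parameter names and default values as I recall them from the sample config, so please verify against your own leo_gateway.conf:)

    # leo_gateway.conf (excerpt) - timeouts (msec) for requests to storage, keyed by object size
    timeout.level_1 = 5000
    timeout.level_2 = 7000
    timeout.level_3 = 10000
    timeout.level_4 = 20000
    timeout.level_5 = 30000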
    Yosuke Hara
    @yosukehara
    @mbuczko I’ll check it
    Yosuke Hara
    @yosukehara

    @mbuczko I’ve just briefly tested LeoFS v1.3.2 on Mac OS as below:

    I want to check the following:
    • LeoFS’ version
    • LeoFS’ configuration [leo_gateway.conf, leo_storage.conf and leo_manager_0.conf]
    Michał Buczko
    @mbuczko
    @yosukehara thanks for checking it out. The LeoFS version and *.conf files are taken directly from the LeoFS master branch; I didn't touch them at all. The only thing that differs from the installation guide, as far as I can tell at the moment, is the Erlang/OTP version I have installed: Erlang/OTP 18 [erts-7.3.1.2] [source] [64-bit] [smp:8:8] [async-threads:10] [kernel-poll:false]
    I wonder if that's the root cause of my problems. Erlang was compiled from source with the following configuration: https://gist.github.com/mbuczko/f1ac403a51f6776f1a48922bd0a0b9df
    Yosuke Hara
    @yosukehara
    @mbuczko you can use an Erlang package for Mac OS - https://www.erlang-solutions.com/resources/download.html
    I recommend you use the Erlang/OTP 18.3 package for Mac OS
    Андреев Павел
    @Argentum88
    Hello! What is the best place to ask a question about the details of LeoFS' internal operation?
    Yosuke Hara
    @yosukehara
    Michał Buczko
    @mbuczko
    hi. I was experimenting a bit with leofs on alternative Linux distros and recently I tried to run it on Alpine. It almost worked ;) The only problem I have is the gateway, which fails with a cryptic error log that I can't really understand. I'd be grateful for a hint on where to start looking with this problem. https://gist.github.com/mbuczko/ba753e69cba86b1fb5e003a43f6361e7
    Yosuke Hara
    @yosukehara
    hi,
    Let me know your Erlang version. I’ve found a related issue: ninenines/ranch#145
    Michał Buczko
    @mbuczko
    sure, here it is:
    erlang-18.3.2-r0
    Erlang/OTP 18 [erts-7.3.1] [source] [64-bit] [smp:2:2] [async-threads:10] [kernel-poll:false]
    Yosuke Hara
    @yosukehara
    thanks, can you upgrade Erlang’s version to v18.3 or later? because of http://erlang.org/pipermail/erlang-questions/2016-April/088969.html
    Yosuke Hara
    @yosukehara
    can you upgrade Erlang’s version to v18.3 or later? because of http://erlang.org/pipermail/erlang-questions/2016-April/088969.html
    A correction to the above:
    • correct: Erlang/OTP 18.3.4.4 or later
    • incorrect: Erlang/OTP 18.3 or later
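    A quick way to confirm which OTP patch release is actually installed (the releases path below varies per install, so treat it as an example):

    $ erl -noshell -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().'
    18
    $ cat /usr/lib/erlang/releases/18/OTP_VERSION    # exact path depends on where Erlang is installed
    18.3.4.4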
    Michał Buczko
    @mbuczko
    @yosukehara thanks for the hint, it works perfectly now. One more question regarding gateways - is it possible to have an S3 and a REST gateway running simultaneously? From a technical point of view it's not a problem to run 2 instances, but is it something that leo can handle well?
    Yosuke Hara
    @yosukehara

    regarding gateways - is it possible to have an S3 and a REST gateway running simultaneously? From a technical point of view it's not a problem to run 2 instances, but is it something that leo can handle well?

    If that means one gateway is REST and the other gateway is S3 (i.e. there are multiple protocols in one LeoFS system), then yes, LeoFS can handle that.
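
    Concretely, that means running two leo_gateway instances, each with its own protocol setting in leo_gateway.conf. A sketch, assuming the stock parameter names; the node labels and ports here are illustrative:

    # gateway_s3's leo_gateway.conf
    protocol = s3
    http.port = 8080

    # gateway_rest's leo_gateway.conf
    protocol = rest
    http.port = 8081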

    Michał Buczko
    @mbuczko
    great, thanks for your help :)
    Yosuke Hara
    @yosukehara
    Published v1.3.5's official packages: https://leo-project.net/leofs/download.html
    Yosuke Hara
    @yosukehara
    LeoFS v1.3.6 has been released:
    Jasper Siepkes
    @siepkes
    hi all! I was doing some experimenting on a lab setup and ended up with the following state (by doing some stupid things):
     [State of Node(s)]
    -------+--------------------------------------+--------------+----------------+----------------+----------------------------
     type  |                 node                 |    state     |  current ring  |   prev ring    |          updated at         
    -------+--------------------------------------+--------------+----------------+----------------+----------------------------
      S    | leofs-storage-1@10.100.2.199         | stop         | -1             | -1             | 2017-09-28 13:01:42 +0200
      S    | leofs-storage-2@10.100.2.201         | stop         | -1             | -1             | 2017-09-28 13:01:36 +0200
      G    | leofs-gateway-s3-1@10.100.2.220      | running      | 8f824bb0       | 0dc4658a       | 2017-09-27 15:47:21 +0200
    -------+--------------------------------------+--------------+----------------+----------------+----------------------------
    I could just blow the cluster away and start over again, but I'm interested in fixing it as an exercise.
    I can't seem to delete the 2 stopped storage nodes.
    It gives the following error:
    # leofs-adm detach leofs-storage-1@10.100.2.199
    [ERROR] Could not get node-status
    Does anyone have any suggestions?
    (I would expect to always be able to remove a storage node.)
    Yosuke Hara
    @yosukehara
    @siepkes Thank you for your report. I’ve confirmed that leofs-adm detach fails when all storage nodes are stopped, as below:
    • leo-project/leofs#855
    I’ll fix this issue in v1.4.0.
    Jasper Siepkes
    @siepkes
    @yosukehara Thanks for the feedback!
    Jasper Siepkes
    @siepkes
    On a related note, I'm currently struggling a bit with the (buzzword alert) "cloud-nativeness" of LeoFS. What I mean is how tolerant LeoFS is to VMs being blown away and recreated. For example, leo-project/leofs#514 kinda rains on my cloud-native self-healing parade ;-). I realize that kind of manual intervention to reactivate a master node after it went down is normal in the "classic" scenario where you have a server / VM which you always keep around (update LeoFS in the VM, etc.). However, if you use some form of orchestration tooling like Terraform or Kubernetes, this is different. Doing an upgrade usually means deploying a new (immutable) container image in a VM and destroying the old one. My question is: is this a use case LeoFS wants to support at some point?
    Yosuke Hara
    @yosukehara
    @siepkes We do not have a plan to "dockerize" LeoFS yet, but we're planning to implement persistent volumes for K8s in v1.5 or v1.6:
    https://kubernetes.io/docs/concepts/storage/persistent-volumes/