    Joe Mordica
    @jmordica
    Hi there! A couple of questions if you don't mind. I recently came across bedrock via a TechCrunch article and it's very intriguing. We run our own telephony platform in a multi-cluster GKE env (multiple regions) and this seems to be a good fit.
    1. Can bedrockdb assign a priority value dynamically?
    2. What needs to happen if nodes need to be added to the cluster at a later point in time?
    3. Is the data dir designed to be ephemeral? Or should you always keep the data dir as fresh as possible and not rely on backups when/if a reboot of a node happens?
    David Barrett
    @quinthar
    Currently bedrock is not designed to dynamically add/remove nodes. It likely wouldn't be that hard from a clustering perspective, but we operate on a fixed set of hardware with a 5 year lifetime, so adding/removing nodes is very rare.
    The process right now for adding a node is to incrementally restart each server with a description of the IP address and port that the new node will communicate on, and then one by one each node will connect to it.
    I'm not sure what you mean about the data being ephemeral. It's intended to be a stable enterprise database, so the data is permanent. We take one follower node down nightly to do a complete backup to s3 (which we do with a bunch of parallel block uploads, so it's pretty fast), and keep a few days in the journal.
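    To make the node-addition step concrete, here is a minimal sketch of such a restart; the flag names are taken from the public multizone docs (-nodeName, -nodeHost, -peerList, -priority, -db, -serverHost), and the hostnames, ports, and paths are illustrative only, so double-check everything against your build:

        # each existing node is restarted with the new node (node4) appended to -peerList
        bedrock -nodeName node1 -nodeHost 10.0.0.1:8889 \
                -peerList 10.0.0.2:8889,10.0.0.3:8889,10.0.0.4:8889 \
                -priority 103 -db /var/lib/bedrock/bedrock.db -serverHost 0.0.0.0:8888
        # once every node's peer list includes it, node4 is started the same way and syncs up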
    David Barrett
    @quinthar
    This design allows "bootstrapping" direct from s3 by doing a parallel download of blocks from s3, which are decrypted and reassembled with multiple threads. Then it gets and replays whatever transactions are missing from peers.
    So you can think of real-time synchronization happening via our private blockchain, but when we bootstrap we don't replay from scratch we just download a fresh backup and then reconnect and replay the most recent hours.
    @umbrellerde sorry for the slow response -- I think Connection: wait is currently not working. I believe we broke it when adding some multi-threaded optimizations, and it hasn't been rebuilt yet. Is that right @tylerkaraszewski ?
    Joe Mordica
    @jmordica
    Wow, a Saturday response! I think I understand the process of adding a new node. Like you said, I don't think we would do this either, as we would most likely run 2 nodes per data center with a total of 6 nodes in 3 regions.
    What I meant by the data being ephemeral is that in our Kubernetes environment I would hope to be able to completely lose the .db file for a particular node when/if a crash happens on an instance or a VM needs a reboot (and be able to easily restore from s3 using an initContainer before starting bedrockdb). And then I believe bedrock would catch itself up to the other nodes. Hopefully I'm understanding that lifecycle properly?
    The last question is about priority and whether or not one can be assigned dynamically on startup. So if I have a startup script that assigns a random integer as the priority, are there any drawbacks to that?
    Joe Mordica
    @jmordica
    Lastly, is there a config param that can be applied to the bedrock process when starting to specify ‘ASYNC’ as a default for writes?
    Joe Mordica
    @jmordica
    I'm noticing a few things that need to be added to the MySQL plugin that would make it more compatible with MySQL tools like TablePlus (e.g. SHOW TABLES). Would you guys accept pull requests for these types of enhancements? It would obviously have to touch other places in the codebase so just checking before putting in time here.
    David Barrett
    @quinthar

    we would most likely run 2 nodes per data center with a total of 6 nodes in 3 regions.

    This is exactly what we do and it works great.

    would hope to be able to completely lose the .db file for a particular node when/if a crash happens on an instance or a VM needs a reboot (and be able to easily restore from s3 using an initContainer before starting bedrockdb).

    Ah, we do have a method to bootstrap from an s3 backup and then catch up, but we haven't automated it to the level you are likely thinking of (i.e., it doesn't identify the most recent backup and download it automatically). I could imagine you adding this without too much effort, however. (I'm not 100% sure our backup code is open source, now that I think about it; we could look into this if you were seriously interested.)
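
    As a rough illustration of that initContainer idea, a hedged sketch assuming a hypothetical s3://my-bedrock-backups/ bucket holding plain .db snapshots; the real backup format, block layout, and encryption are not covered in this chat:

        # hypothetical initContainer step: restore the newest snapshot before bedrock starts
        LATEST=$(aws s3 ls s3://my-bedrock-backups/ | sort | tail -n 1 | awk '{print $4}')
        aws s3 cp "s3://my-bedrock-backups/${LATEST}" /var/lib/bedrock/bedrock.db
        # on startup, bedrock reconnects to its peers and replays the commits it is missing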

    The last question is about priority and whether or not one can be assigned dynamically on startup. So if I have a startup script that assigns a random integer as the priority, are there any drawbacks to that?

    Yes, that could likely work fine. In practice we find it makes the system easier to manage if the priorities are static, as then you have a better sense of which nodes are doing what without needing to consult the logs.
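
    For illustration, both approaches side by side; this assumes the -priority flag from the public docs (higher value wins leader election), so treat the exact flag name and values as unverified:

        # static priority: one distinct value baked into each node's startup config
        bedrock -nodeName node1 -priority 103 ...
        # dynamic priority: picked by a startup script; works, but harder to reason about later
        PRIORITY=$(( (RANDOM % 100) + 1 ))
        bedrock -nodeName "$(hostname)" -priority "$PRIORITY" ...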

    David Barrett
    @quinthar

    Lastly, is there a config param that can be applied to the bedrock process when starting to specify ‘ASYNC’ as a default for writes?

    Hm, I don't recall, but I imagine that would be easy to add.
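
    If per-command consistency is still supported over the plain port, a request might look like the sketch below; the writeConsistency header name is an assumption based on older versions of the code, and the foo table is hypothetical, so verify both against the current source:

        # NOTE: the writeConsistency header name is unverified; check BedrockServer's request handling first
        nc localhost 8888
        Query: INSERT INTO foo VALUES ( 42 );
        writeConsistency: ASYNC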

    Would you guys accept pull requests for these types of enhancements? It would obviously have to touch other places in the codebase so just checking before putting in time here.

    Yes, PRs very welcome!!
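
    For reference, the usual SQLite equivalent of MySQL's SHOW TABLES, which such a plugin enhancement would presumably translate to, shown here over Bedrock's plain query port rather than the MySQL port:

        nc localhost 8888
        Query: SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name;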

    Joe Mordica
    @jmordica
    Thanks for getting back with me!
    Joe Mordica
    @jmordica
    Are you guys still open to accepting PRs? If so, we will be submitting additional PRs related to enhancing the MySQL plugin, along with a Prometheus exporter, etc.
    burggraf
    @burggraf
    Is this room still active?
    BG Bruno
    @bgbruno:matrix.org

    I like #sqlite and it works pretty well with #strapi and #directus

    It still seems to me a better choice than using a "cloud db", but I need multiple instances on one file - so I searched and found this article https://sqlite.org/forum/info/339d237d6517783a which refers to you

    What I read is pretty great, https://bedrockdb.com is a very well-built project 😃👏

    Q) So is it a service process on top of #sqlite, connected through a port for sync?
    https://bedrockdb.com/multizone.html

    Q) Is that communication encrypted somehow?

    Q) How are collisions handled?

    Lauri Ojansivu
    @xet7
    About various databases:

    @Nevernown

    The problem with NoSQL, like MongoDB, is how to write those queries in JavaScript. I'm currently trying to do a query in JavaScript, but I have not gotten it working yet. I do know how to do it with SQL.

    For MySQL or PostgreSQL, if you someday get a lot of data, you may need to migrate to SQLite to keep costs manageable:
    https://news.ycombinator.com/item?id=27673359

    Many small queries are efficient in SQLite:
    https://sqlite.org/np1queryprob.html

    Using SQLite is 35% faster than using the filesystem:
    https://www.sqlite.org/fasterthanfs.html

    If you need global scale with SQLite, there is BedrockDB:
    https://bedrockdb.com
    https://twit.tv/shows/floss-weekly/episodes/456

    For GDPR encrypted stuff, there is Databunker:
    https://databunker.org

    If MySQL is not your preferred option, you could also look at whether the MySQL-compatible Noria is any better:

    If I remember correctly from an interview, encrypting network traffic could be done with a VPN. But please check the BedrockDB docs or source code.
    Lauri Ojansivu
    @xet7
    Huh, a VPN is mentioned in the multizone docs.
    AFAIK BedrockDB keeps data in RAM for fast queries, and persists it on disk.
    Lauri Ojansivu
    @xet7
    It's possible to stream a backup of SQLite with https://litestream.io/
    Anyway, I have not yet used BedrockDB. It's just that I remember something from an interview about BedrockDB and noticed some new activity on this chat channel.
    jfinity
    @jfinity
    Hi, I presume (/hope) this is the case, but is bedrock "safe" (if not fast) to run on "typical" cloud network-attached storage (as opposed to local physical disks) as long as there is only a single server attached to it?
    Footie Fives
    @footiefives_twitter
    Hi, I'm having problems connecting using the mysql client. The port seems open but the client just hangs.
    azureuser@Horseportal:~$ mysql -h 127.0.0.1
    ^C
    azureuser@Horseportal:~$ mysql -V
    mysql Ver 8.0.29-0ubuntu0.20.04.3 for Linux on x86_64 ((Ubuntu))
    azureuser@Horseportal:~$
    May 16 13:30:09 Horseportal bedrock: xxxxxx (main.cpp:377) main [main] [info] [performance] main poll loop timing: 10001 ms elapsed. 10001 ms in poll. 0 ms in postPoll.
    May 16 13:30:14 Horseportal bedrock: xxxxxx (BedrockServer.cpp:2127) _acceptSockets [main] [dbug] Accepting socket from '127.0.0.1:54778' on port 'localhost:3306'
    May 16 13:30:14 Horseportal bedrock: xxxxxx (BedrockServer.cpp:2260) handleSocket [socket2] [info] Socket thread starting
    May 16 13:30:16 Horseportal bedrock: xxxxxx (STCPManager.cpp:196) shutdown [socket2] [dbug] Shutting down socket '127.0.0.1:54778'
    May 16 13:30:16 Horseportal bedrock: xxxxxx (BedrockServer.cpp:2421) handleSocket [socket2] [info] Socket thread complete (0 remaining)
    running on Azure
    bedrock seems to be running fine

    nc localhost 8888
    Query: SELECT 1 AS foo, 2 AS bar;

    200 OK
    commitCount: 1
    nodeName: Horseportal
    peekTime: 163
    totalTime: 323
    unaccountedTime: 133
    Content-Length: 16

    foo | bar
    1 | 2

    Footie Fives
    @footiefives_twitter
    Has anyone been able to get Bedrock to compile on the Raspberry Pi?
    maxDBs(max(maxDBs, 1ul)) error
    g++-9 -g -std=c++17 -fpic -DSQLITE_ENABLE_NORMALIZE -O2 -Wall -Werror -Wformat-security -Wno-error=deprecated-declarations -I/home/pi/Bedrock -I/home/pi/Bedrock/mbedtls/include -MMD -MF .build/sqlitecluster/SQLitePool.d -MT .build/sqlitecluster/SQLitePool.o -o .build/sqlitecluster/SQLitePool.o -c sqlitecluster/SQLitePool.cpp
    sqlitecluster/SQLitePool.cpp: In constructor ‘SQLitePool::SQLitePool(size_t, const string&, int, int, int, const string&, int64_t, bool)’:
    sqlitecluster/SQLitePool.cpp:13:26: error: no matching function for call to ‘max(size_t&, long unsigned int)’
    : _maxDBs(max(maxDBs, 1ul)),
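    That error is the usual 32-bit ARM mismatch: on a 32-bit Raspberry Pi, size_t is unsigned int while 1ul is unsigned long, so std::max cannot deduce a single type. A minimal sketch of the workaround is below; applying the same static_cast in the SQLitePool.cpp initializer is a local patch, not an official fix:

        #include <algorithm>
        #include <cstddef>

        // On 32-bit ARM, size_t is unsigned int while 1ul is unsigned long, so
        // std::max(maxDBs, 1ul) cannot deduce a single template type; casting the
        // literal to size_t makes both arguments the same type.
        std::size_t clampToAtLeastOne(std::size_t maxDBs) {
            return std::max(maxDBs, static_cast<std::size_t>(1));
        }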
    Footie Fives
    @footiefives_twitter
    I got Bedrock compiled for the Raspberry Pi and working ok.
    Footie Fives
    @footiefives_twitter
    Quick question: I can't get any SQLite clients to work with the bedrock.db file. The sqlite CLI and sqlitebrowser both say it's not in SQLite file format.
    Footie Fives
    @footiefives_twitter
    is the file format of the db file incompatible with the sqlite clients?