    Brian Lee
    @lkm1321_gitlab
    I'm looking for something that supports SQL read/write queries across multiple machines. It's okay if the sync is a bit delayed, but I need immediate read/write at least to the local DB.
    I tinkered a bit with rqlite, but realised that it only allows writes to the leader/master node. In my application, I expect significant latency between machines, so this will be a problem.
    Am I correct in understanding that Bedrock allows both reads and writes to the local DB, and synchronisation happens in the background?
    Also, would it be possible to use Bedrock as a C/C++ library, similar to what sqlite3 provides?
    Thanks in advance for the help :)
    Jose V. Trigueros
    @jvtrigueros

    Hi all, I currently run a Discord bot with a SQLite backend database. It was working fine until I enabled sharding, meaning there are multiple threads writing to and reading from the same SQLite database. I've enabled WAL mode; however, I still get SQLITE_BUSY error codes.

    I stumbled upon Bedrock, so I figured I could give it a shot. I was able to build and run Bedrock successfully, pointing to my existing SQLite file. However, because there's no Java SDK for Bedrock, I decided to give the MySQL plugin a shot. This is where the horror story starts: I'm only able to make SQL queries by connecting to Bedrock via the MySQL CLI client, but not if I'm connected from my bot using the ORM.

    Am I out of luck here?
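    [Editor's note: for context on the SQLITE_BUSY side of this, WAL mode lets readers run alongside one writer, but two concurrent writers still conflict, and without a busy timeout SQLite returns SQLITE_BUSY immediately instead of waiting for the lock. A minimal sketch in Python's sqlite3 (the file and table names here are made up for illustration):]

    ```python
    import os
    import sqlite3
    import tempfile

    # WAL mode needs a real file, not :memory:, so use a throwaway path.
    path = os.path.join(tempfile.mkdtemp(), "bot.db")

    conn = sqlite3.connect(path, timeout=5.0)  # Python-side busy wait, in seconds
    conn.execute("PRAGMA journal_mode=WAL")    # readers no longer block the writer
    conn.execute("PRAGMA busy_timeout=5000")   # retry up to 5s instead of failing fast
    conn.execute("CREATE TABLE IF NOT EXISTS guilds (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO guilds (name) VALUES (?)", ("test",))
    conn.commit()

    rows = conn.execute("SELECT name FROM guilds").fetchall()
    print(rows)  # → [('test',)]
    ```

    [With a busy timeout set on every connection, concurrent writers queue briefly instead of erroring; heavy multi-writer load is still serialized, which is the limitation that motivates moving to something like Bedrock.]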

    Paul Bergeron
    @dinedal

    I hope this channel can help me: I just created Expensify/Bedrock#788

    The tests pass and it is running, but it doesn't seem to matter what I send on 8888, I get no response. Any advice?

    $ nc localhost 8888
    SELECT 1;
    Query: SELECT 1;
    Query:
    SELECT 1;

    All with no response
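    [Editor's note: Bedrock's plain-text protocol is HTTP-like, and messages are terminated by a blank line; the assumption here is that the missing blank line is why a bare "SELECT 1;" over nc gets no response. A sketch of building and sending such a request in Python (the helper names are made up; the exact wire format should be checked against the Bedrock docs):]

    ```python
    import socket

    def build_request(query: str) -> bytes:
        # A command line with its payload, terminated by a blank line
        # (i.e., two newlines), in the style of an HTTP header block.
        return f"Query: {query}\n\n".encode()

    def run_query(query: str, host: str = "localhost", port: int = 8888) -> bytes:
        # Requires a running Bedrock server listening on the given port.
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(build_request(query))
            return s.recv(65536)

    print(build_request("SELECT 1;"))  # → b'Query: SELECT 1;\n\n'
    ```

    [Equivalently, with nc you would type the `Query:` line and then press Enter on an empty line to terminate the message.]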

    Paul Bergeron
    @dinedal
    Additionally, the mysql client always responds with ERROR 1045 (28000): Access denied
    David Barrett
    @quinthar
    @teamtad_twitter Hey sorry for the slow response, can you share a link to your code, or copy/paste the relevant section here?
    Lmk the actual compilation errors
    David Barrett
    @quinthar
    @lkm1321_gitlab Sorry for the delay -- yes, all writes are (currently) escalated to the leader, but all reads are done locally. And yes, there's a C++ "Plugin" interface. Here's a really basic example: https://github.com/Expensify/Bedrock/blob/master/plugins/DB.cpp
    @dinedal Can you check /var/log/syslog to see if any errors are being output? Also, what is the exact command line you are running?
    Zachary Whitley
    @zacharywhitley
    Hi, I just started checking out bedrock and wanted to let you know that I didn't see any docs for installing on CentOS 7 but I did manage to get it built :)
    Zachary Whitley
    @zacharywhitley
    Here's a gist if you're interested in adding it. I haven't had a chance to go over it closely but it worked for me https://gist.github.com/zacharywhitley/3d159d3c78a49b54e2fbbe51b79b128a
    Thomas
    @symgryph
    Anyone got this working on alpine?
    I was trying to get it to work on an embedded system
    g++-9 -o bedrock .build/main.o -Lmbedtls/library -L/Bedrock -rdynamic -lbedrock -lstuff -ldl -lpcrecpp -lpthread -lmbedtls -lmbedx509 -lmbedcrypto -lz
    /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: /Bedrock/libstuff.a(SLog.o): in function `SLogStackTrace()': /Bedrock/libstuff/SLog.cpp:10: undefined reference to `backtrace'
    /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: /Bedrock/libstuff.a(libstuff.o): in function `SGetCallstack[abi:cxx11](int, void* const*)': /Bedrock/libstuff/libstuff.cpp:107: undefined reference to `backtrace_symbols'
    /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: /Bedrock/libstuff.a(libstuff.o): in function `SException::SException(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, SString, STableComp, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, SString> > > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)': /Bedrock/libstuff/libstuff.cpp:95: undefined reference to `backtrace'
    /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: /Bedrock/libstuff.a(SSignal.o): in function `_SSignal_StackTrace(int, siginfo_t*, void*)': /Bedrock/libstuff/SSignal.cpp:151: undefined reference to `backtrace'
    /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: /Bedrock/libstuff/SSignal.cpp:157: undefined reference to `backtrace_symbols_fd'
    collect2: error: ld returned 1 exit status
    make: *** [Makefile:114: bedrock] Error 1
    just testing or are we SOL for musl?
    David Barrett
    @quinthar_twitter
    Hm, I haven't done any embedded toolchain stuff in quite a while, but no effort has been made to make Bedrock compile there. I'm sure it's doable... just not done. Sorry!
    János Veres
    @jveres
    Hi! I created a super minimal Docker image for Bedrock, it's less than 6MB. If anyone's interested it's available at https://hub.docker.com/r/jveres/bedrock-xsim.
    trancephorm
    @trancephorm
    Hello everyone, I've been researching lately what decentralized database solutions exist, and I must say Bedrock seems pretty exciting to me. Still, I can't fully grasp how it relates to decentralization; put differently, how censorship-proof is it? I'm especially interested in how it compares to Chromia and Wip2p. What happens if one node is attacked and starts making malicious edits or deleting data?
    David Barrett
    @quinthar_twitter
    Hi @trancephorm, nice to meet you! I don't think it's really designed for that use case. BedrockDB is designed for all nodes to operate on servers that you control, for queries you write. It's not a database opened up to arbitrary parties. Now, with a plugin you could add an authentication layer and only expose stored procedures (rather than enabling arbitrary queries) -- this is what we do for Expensify. But still, all of the nodes of the cluster need to be running on your own servers, as it can't somehow differentiate between "good" and "bad" queries.
    trancephorm
    @trancephorm
    @quinthar_twitter Thanks for this clarification. And what exactly is Expensify? If the nodes are dispersed throughout the whole Internet, would the client connect to the nearest node? Let's say I trust my nodes; if there are many of them, so there is no single point of failure, may we say this is a pretty censorship-resistant system? Actually, Bedrock may be the best option for me because of such properties.
    David Barrett
    @quinthar_twitter
    Hm, BedrockDB is basically a replacement for MySQL: anything you'd consider using MySQL for, you can use Bedrock for. But you wouldn't use MySQL or Bedrock for, say, a distributed database where you don't control every node. If you are afraid of one of the nodes being compromised, this isn't a good platform, because any node can see and modify anything. On the other hand, if you are merely concerned about something being knocked offline and becoming inaccessible, then yes, Bedrock could be perfect because it's very fault tolerant.
    (You can check out https://expensify.com -- it's a mobile app for scanning receipts and getting reimbursed)
    trancephorm
    @trancephorm
    So there's no automatic discovery of the nearest node? I'm guessing load is balanced over the nodes via some DNS balancing, which doesn't consider which node is closest?
    David Barrett
    @quinthar_twitter
    Well that would happen at a different layer -- you'd use something like IP anycast for that, which would cause DNS to resolve to the nearest server.
    trancephorm
    @trancephorm
    What attracted me to Bedrock is SQL compatibility, yet I think solutions like https://wip2p.eth.link/ are more suitable for my use case. But I have no idea how I would make a relational database work on a key-value based "database", because the abstraction layer is so spartan I wouldn't even call it a database... Anyways, thanks for your responses!
    umbrellerde
    @umbrellerde

    Hello there, I am just starting with Bedrock Jobs and have some questions, I hope this is the right place?
    I want my consumers to wait up to 30s for any job whose name matches "hello.*", so I send this request:

    GetJob
    name: hello.*
    connection: wait
    timeout: 30000

    But this command immediately returns a "404 No job found". Shouldn't it wait 30s before returning?

    Joe Mordica
    @jmordica
    Hi there! A couple of questions if you don’t mind. I recently came across Bedrock via a TechCrunch article and it’s very intriguing. We run our own telephony platform in a multi-cluster GKE env (multiple regions) and this seems to be a good fit.
    1. Can BedrockDB assign a priority value dynamically?
    2. What needs to happen if nodes need to be added to the cluster at a later point in time?
    3. Is the data dir designed to be ephemeral? Or should you always keep the data dir as fresh as possible and not rely on backups when/if a reboot of a node happens?
    David Barrett
    @quinthar
    Currently bedrock is not designed to dynamically add/remove nodes. It likely wouldn't be that hard from a clustering perspective, but we operate on a fixed set of hardware with a 5 year lifetime, so adding/removing nodes is very rare.
    The process right now for adding a node is to incrementally restart the server with a description of the IP address and Port that the new node will communicate on, and then one by one each node will connect to it
    I'm not sure what you mean about the data being ephemeral. It's intended to be a stable enterprise database, so the data is permanent. We take one follower node down nightly to do a complete backup to s3 (which we do with a bunch of parallel block uploads, so it's pretty fast), and keep a few days in the journal.
    David Barrett
    @quinthar
    This design allows "bootstrapping" direct from s3 by doing a parallel download of blocks from s3, which are decrypted and reassembled with multiple threads. Then it gets and replays whatever transactions are missing from peers.
    So you can think of real-time synchronization happening via our private blockchain, but when we bootstrap we don't replay from scratch we just download a fresh backup and then reconnect and replay the most recent hours.
    @umbrellerde sorry for the slow response -- I think Connection: wait is currently not working. I believe we broke it when adding some multi-threaded optimizations, and it hasn't been rebuilt yet. Is that right @tylerkaraszewski ?
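    [Editor's note: until Connection: wait works again, a client-side polling loop gives roughly the same behavior as a blocking GetJob. A sketch, where get_job is a stand-in for whatever issues the GetJob request and returns None on "404 No job found" (not a real Bedrock API):]

    ```python
    import time

    def poll_for_job(get_job, timeout_ms=30000, interval_ms=500):
        """Poll get_job() until it returns a job or the timeout elapses."""
        deadline = time.monotonic() + timeout_ms / 1000.0
        while True:
            job = get_job()
            if job is not None:
                return job          # got a job before the deadline
            if time.monotonic() >= deadline:
                return None         # emulate the 30s timeout expiring
            time.sleep(interval_ms / 1000.0)

    # Example with a stub that "finds" a job on the third attempt:
    attempts = iter([None, None, {"name": "hello.world"}])
    print(poll_for_job(lambda: next(attempts), timeout_ms=2000, interval_ms=10))
    # → {'name': 'hello.world'}
    ```

    [The interval trades latency for load on the server; a real consumer would also handle connection errors between polls.]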
    Joe Mordica
    @jmordica
    Wow, Saturday response! I think I understand the process of adding a new node. Like you said, I don't think we would do this either, as we would most likely run 2 nodes per data center with a total of 6 nodes in 3 regions.
    What I meant by the data being ephemeral is that in our Kubernetes environment I would hope to be able to completely lose the .db file for a particular node when/if a crash happens on an instance or a VM needs a reboot (and be able to easily restore from S3 using an initContainer before starting bedrockdb). And then I believe Bedrock would catch itself up to the other nodes. Hopefully I’m understanding that lifecycle properly?
    The last question is about priority and whether or not one can be assigned dynamically on startup. So if I have a startup script that assigns a random integer as the priority, are there any drawbacks to that?
    Joe Mordica
    @jmordica
    Lastly, is there a config param that can be applied to the bedrock process when starting to specify ‘ASYNC’ as a default for writes?
    Joe Mordica
    @jmordica
    I'm noticing a few things that need to be added to the MySQL plugin that would make it more compatible with MySQL tools like TablePlus (SHOW TABLES, etc.). Would you guys accept pull requests for these types of enhancements? It would obviously have to touch other places in the codebase, so just checking before putting in time here.
    David Barrett
    @quinthar

    we would most likely run 2 nodes per data center with a total of 6 nodes in 3 regions.

    This is exactly what we do and it works great.

    would hope to be able to completely lose the .db file for a particular node when/if a crash happens on an instance or a VM needs a reboot (and be able to easily restore from S3 using an initContainer before starting bedrockdb).

    Ah, we do have a method to bootstrap from an S3 backup and then catch up, but we haven't automated it to the level you are likely thinking (i.e., it doesn't identify the most recent backup and download it automatically). I could imagine you adding this without too much effort, however. (I'm not 100% sure our backup code is open source, now that I think about it; we could look into this if you were seriously interested.)
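    [Editor's note: the bootstrap flow described earlier in the thread (parallel block download, reassemble, then replay missing transactions from peers) can be sketched roughly like this. fetch_block and replay_from_peers are illustrative stand-ins, not Expensify's actual backup code:]

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def bootstrap(fetch_block, num_blocks, replay_from_peers, workers=8):
        # 1. Download all backup blocks in parallel; map() preserves block order.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            blocks = list(pool.map(fetch_block, range(num_blocks)))
        # 2. Reassemble the database image from the ordered blocks.
        db_image = b"".join(blocks)
        # 3. Ask peers for whatever committed transactions the backup is missing.
        return replay_from_peers(db_image)

    # Example with stubbed-out block fetches and replay:
    fake_fetch = lambda i: f"block{i}|".encode()
    result = bootstrap(fake_fetch, 4, replay_from_peers=lambda img: img + b"+journal")
    print(result)  # → b'block0|block1|block2|block3|+journal'
    ```

    [In a Kubernetes setup, something like this could run in an initContainer: restore the latest image from S3, start bedrockdb, and let replication replay the most recent hours from peers.]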

    The last question is about priority and whether or not one can be assigned dynamically on startup. So if I have a startup script that assigns a random integer as the priority, are there any drawbacks to that?

    Yes, that could likely work fine. In practice we find it makes the system easier to manage if the priorities are static, as then you have a better sense of which nodes are doing which without needing to consult the logs.

    David Barrett
    @quinthar

    Lastly, is there a config param that can be applied to the bedrock process when starting to specify ‘ASYNC’ as a default for writes?

    Hm, I don't recall, but I imagine that would be easy to add.

    Would you guys accept pull requests for these types of enhancements? It would obviously have to touch other places in the codebase so just checking before putting in time here.

    Yes, PRs very welcome!!

    Joe Mordica
    @jmordica
    Thanks for getting back with me!