Ian Morris Nieves
@inieves
@mberhault I'm using docker
marc
@mberhault
I believe you can pass a --user <username> to docker run to set the username
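For reference, a minimal sketch of that flag (the image tag and UID are illustrative, not taken from this conversation):

```shell
# --user accepts a username known inside the image, or a raw UID[:GID];
# here the container's main process runs as UID 1000 instead of root.
docker run --user 1000:1000 cockroachdb/cockroach:v2.1.0 version
```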
Ian Morris Nieves
@inieves
@mberhault I'm actually using docker-compose, but thank you for that lead... that should be enough to figure it out... I'll research now
marc
@mberhault
Thanks for the link. Does solution 3 work for your use case?
Ian Morris Nieves
@inieves
@mberhault yes, solution 3 was the one I was looking for. I am used to this sort of thing being a setting in a config file of the app in question, but CockroachDB is super light on config, so I wasn't sure where else to look... I do prefer the light config, by far.
@mberhault thanks for your support!
Daniel
@dansouza
@bladefist it sounds like cron should be fine for what you're doing (or dkron.io, if you want something highly available with a web UI) - I personally don't think that scheduling queries belongs in CRDB's administration panel (for the same reason a CMS wouldn't come bundled with a web server). In the UNIX world, it's customary for tools to do one thing and do it well, with separation of concerns. So I guess you would write a shell-script wrapper that runs a query you pass as an argument and, in case of any failure, does a ROLLBACK. You can use "expect" together with a command-line SQL client for that (run the query, "expect" the right return, etc.). https://linux.die.net/man/1/expect
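A rough sketch of such a wrapper (the connection flags are placeholders for illustration; wrapping the query in an explicit transaction means a failed statement aborts the whole batch, which is the rollback behavior being described):

```shell
#!/bin/sh
# run-query.sh: run the query passed as $1 inside an explicit transaction.
# If any statement fails, the transaction aborts and nothing is committed.
# --insecure and the implied local host are assumptions, not a recommendation.
query="$1"
if ! cockroach sql --insecure -e "BEGIN; $query; COMMIT;"; then
    echo "query failed; transaction was not committed" >&2
    exit 1
fi
```

A cron entry would then just call this script with the query as its argument.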
Xorth
@bladefist
So you're tellin me wordpress won't be bundled in the next release of cockroach? :)
Cool, dkron.io looks useful. I'll go down this path. Thanks!
Roko
@rkruze
@bladefist on your question about IMPORT TABLE vs cockroach sql < dump.sql: the IMPORT command will make use of all the nodes in the cluster to convert the CSV file. The cockroach sql command has some ability to run a few of the commands in parallel, but much less so than the IMPORT TABLE command.
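For anyone comparing the two, a rough sketch of the commands in question (paths are placeholders, and the flags assume an insecure local cluster):

```shell
# Single-session replay: one SQL client streams the dump's statements
# through one node, so most work is serialized.
cockroach sql --insecure < dump.sql

# Distributed ingest: every node participates in converting the file.
cockroach sql --insecure -e "IMPORT PGDUMP ('nodelocal:///dump.sql');"
```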
Xorth
@bladefist
Right, but if you're restoring a dump (Backup), you're restricted to sql. I'll be enterprise soon so that won't matter for long but still
Roko
@rkruze
@bladefist true, RESTORE is much faster
Roko
@rkruze
@bladefist Were you running cockroach sql < file.sql or IMPORT PGDUMP?
Xorth
@bladefist
cockroach sql < file.sql @rkruze
Tim Aksu
@timaksu_twitter
Hello, we are using cockroach with Nakama on an AWS instance in Sydney. Nakama keeps crashing - once a day or so. We have the identical setup on an AWS instance in Frankfurt and it has been going for months without a crash. We aren't sure why the Sydney one is so unstable.
Happy to provide logs of the sydney instance if someone wants to walk me through how :)
marc
@mberhault
@timaksu_twitter: is there anything pointing towards CockroachDB having issues, or just Nakama? If the latter, you may want to ask on their channel first: https://gitter.im/heroiclabs/nakama
Tim Aksu
@timaksu_twitter
Just came from there
We never have to reset Nakama
At random times we won't be able to log in to our application anymore - Nakama will still be up but cockroach will be dead, so we just have to restart it manually (don't even need to restart Nakama) for it to start working again
marc
@mberhault
Ah, sorry, I thought you said Nakama keeps crashing.
Anyway, do you see anything of interest at the end of the logs of the crashed nodes?
If it's a fatal error, you should see a log line starting with F101015 ... (the date and time) followed by a giant stack trace.
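A quick way to surface such lines (the sample log content below is fabricated purely to show the shape of a fatal entry):

```shell
# Fatal entries begin with 'F' followed by the date/time stamp, so a grep
# on the leading character finds them. The log contents here are made up.
cat > /tmp/sample.log <<'EOF'
I181015 11:58:00.000000 ordinary info line
F181015 12:07:00.000000 fatal error: giant stack trace follows
EOF
grep '^F' /tmp/sample.log
```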
Tim Aksu
@timaksu_twitter
I did say Nakama keeps crashing whoops! Typo! :)
marc
@mberhault
No worries. We're on the same page now.
marc
@mberhault
Any luck looking through the logs?
Adi Kancherla
@etherpan
Question on building release binaries - my current approach is running 'make buildoss' once on Mac and once on Debian
the binary built on Debian doesn't work on CentOS
looks like this issue is fixed in cockroachdb/cockroach#12694
so to make a binary that works across Linux distros - do I have to build it in a docker image based on ubuntu xenial?
irfan sharif
@irfansharif
random drive-by comment: the DB selector on https://www.cockroachlabs.com/docs/stable/cockroachdb-in-comparison.html could be made more obvious, or drawn attention to
David
@lopezator
Already integrated your latest changes into my project, works perfectly. Thank you @dt !! :)
Adi Kancherla
@etherpan
I deleted my cockroach-data dir with the process still running. Then I used the sql client to create a table and inserted some rows.
Where is this data stored now on disk?
The sql client also prints out the clusterId
Where is it reading all this data from?
marc
@mberhault
@etherpan: we have a builder image for exactly this purpose. You can use it to build for various platforms, e.g.: build/builder.sh mkrelease <linux|darwin|windows> (see build/mkrelease.sh for more details)
For your data directory, file descriptors still function properly after a file is deleted, so any open files are still fine. However, the moment the node tries to open a file that's supposed to exist, it will crash.
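The file-descriptor point can be sketched without CockroachDB at all; the open descriptor below stands in for RocksDB's open files (this is generic POSIX behavior, not CockroachDB-specific code):

```shell
# An open file descriptor keeps working after the file is unlinked, which is
# why deleting cockroach-data doesn't kill the node immediately.
tmp=$(mktemp)
printf 'still here\n' > "$tmp"
exec 3< "$tmp"        # keep a descriptor open, like the node's open files
rm "$tmp"             # delete the path, like removing the data directory
content=$(cat <&3)    # reads fine through the existing descriptor
exec 3<&-
echo "$content"       # prints: still here
```

Only when a process tries to open the now-missing path does it notice anything is wrong, matching the crash behavior described above.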
Adi Kancherla
@etherpan
Thanks Marc - I used the builder image. Worked flawlessly.
So if I insert a large amount of data using the sql client, the node will try to create a new sst file in cockroach-data dir, but since the dir doesn't exist anymore, it will crash. Is my assumption correct?
marc
@mberhault
That sounds about right. rocksdb continuously writes new files, but there's no easy way to tell when.
Adi Kancherla
@etherpan
Gotcha
Jesse Seldess
@jseldess
@irfansharif, can you open a docs issue with your feedback on the comparison chart?
István Soós
@isoos
What happened with the optimizer page?
https://www.cockroachlabs.com/docs/v2.1/sql-optimizer.html is the first (and only relevant) link in google, and it is a 404.
Justin Jaffray
@justinj
@isoos it’s now located at https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer.html - I’ll mention that to the docs team, thanks for the heads up
István Soós
@isoos
thanks!
Daniel
@dansouza
@bladefist no problem!
irfan sharif
@irfansharif
Jesse Seldess
@jseldess
Thanks, Irfan.