kena
@knz
@erikmiller: ask on forum
@goodforever: you should disable the krb build as hinted in the above-linked issue. The proposed diff is in the top-level crdb Makefile
goodforever
@goodforever
@knz Thanks!
kena
@knz
yes looks like it
russ
@russmack
Playing around on my laptop: I imported 6 million TSV rows, but only 4.5 million records are in the table, with no errors that I can see. An obvious mistake on my part?
Niels Hofmans
@hazcod
If anyone can help out on creating a netdata plugin, please do! netdata/netdata#4696
heily-desres
@heily-desres
I'm a CRDB newbie just kicking the tires, but found a problem that crashes the entire cluster. I created a table called "bucket" with 165 million rows, and when I run "DELETE FROM bucket;" the memory usage of the cockroach process increases rapidly until the machine runs out of memory and the process crashes with a std::bad_alloc error. Is this normal, or a bug worth investigating?
heily-desres
@heily-desres
The cluster is 3 virtual machines (4 vCPUs, 8GB RAM) and the size of the "bucket" table is 8GB with 462 ranges. Overcommit is disabled (vm.overcommit_memory = 2).
kena
@knz
@russmack: hard to say without more details of what you did. please ask on forum
@heily-desres: it's surprising that it crashes the entire cluster; however, it's not surprising that the DELETE itself fails. To delete all rows, TRUNCATE is better. CockroachDB has a limit on transaction size, so you can't simply affect 165M rows in a single txn like that.
maybe you can report more details on the symptoms - it would be interesting to better understand what you mean by "crashes the entire cluster"
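(A sketch of the two approaches knz describes, for later readers. The table name comes from the chat above; the batch size is illustrative, not a recommendation.)

```sql
-- Preferred: TRUNCATE replaces the table data wholesale instead of
-- deleting row by row, so it sidesteps the transaction-size limit.
TRUNCATE TABLE bucket;

-- If TRUNCATE isn't an option, delete in small batches so each
-- statement stays well under the transaction-size limit.
-- Repeat until 0 rows are affected.
DELETE FROM bucket WHERE true LIMIT 10000;
```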
russ
@russmack
@knz thank you.
heily-desres
@heily-desres
@knz thanks, I created a forum topic with more details. https://forum.cockroachlabs.com/t/deleting-a-large-table-crashes-the-cluster/3018
kena
@knz
@heily-desres: you're encountering things we haven't seen before, but which we want to investigate. matt will guide you further but the gist is that we'll want to collect evidence to understand your environment and the precise steps you took to get there.
gigatexal
@gigatexal
@heily-desres a debug zip would be helpful. To create one, run ./cockroach debug zip debug.zip --insecure
@heily-desres also, while TRUNCATE works, it will fail if the table has foreign keys referencing it. You can temporarily shrink the GC TTL for the table, which will speed up deletes, and then do batched deletes. (https://www.cockroachlabs.com/docs/stable/sql-faqs.html#why-are-my-deletes-getting-slower-over-time)
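(A sketch of the GC TTL tweak gigatexal mentions; the 600-second value is illustrative, and 90000 restores the default of 25 hours.)

```sql
-- Lower the GC TTL so old row versions left behind by deletes are
-- garbage collected sooner, keeping subsequent batched deletes fast.
ALTER TABLE bucket CONFIGURE ZONE USING gc.ttlseconds = 600;

-- ...run the batched deletes here...

-- Restore the default TTL (25h) afterwards.
ALTER TABLE bucket CONFIGURE ZONE USING gc.ttlseconds = 90000;
```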
gigatexal
@gigatexal
crickets today
Brian Hechinger
@bhechinger
So if I deploy cockroachdb to kubernetes using the helm chart, I end up with a service account that I need to use for the secure client (specifically the init-certs container), which is all well and good. Until I want to run my services in a different namespace from the data layer, which is where I've run into trouble. Are there best practices for running cockroachdb in kube and dealing with these sorts of issues?
kena
@knz
@bhechinger: good question. it would be best to ask this on the forum
Brian Hechinger
@bhechinger
@knz I will do so then, thanks!
because I can't quite wrap my head around this
I've gotten as far as making a role and rolebinding in the new namespace but don't know how to use that in place of the serviceaccount.
Brian Hechinger
@bhechinger
So I created a database. I created a user. I granted that user all to that database: GRANT ALL ON DATABASE chremoas TO chremoas;
looking at the output of show grants I get this:
database_name | schema_name | table_name                          | grantee  | privilege_type
chremoas      | public      | NULL                                | admin    | ALL
chremoas      | public      | NULL                                | chremoas | ALL
chremoas      | public      | NULL                                | root     | ALL
chremoas      | public      | alliances                           | admin    | ALL
chremoas      | public      | alliances                           | root     | ALL
chremoas      | public      | authentication_codes                | admin    | ALL
chremoas      | public      | authentication_codes                | root     | ALL
chremoas      | public      | authentication_scope_character_map  | admin    | ALL
chremoas      | public      | authentication_scope_character_map  | root     | ALL
chremoas      | public      | authentication_scopes               | admin    | ALL
chremoas      | public      | authentication_scopes               | root     | ALL
chremoas      | public      | characters                          | admin    | ALL
chremoas      | public      | characters                          | root     | ALL
chremoas      | public      | corporations                        | admin    | ALL
chremoas      | public      | corporations                        | root     | ALL
chremoas      | public      | user_character_map                  | admin    | ALL
chremoas      | public      | user_character_map                  | root     | ALL
chremoas      | public      | users                               | admin    | ALL
chremoas      | public      | users                               | root     | ALL
that formatted poorly
basically, table NULL has ALL perms for chremoas
none of the tables have anything though
and trying to do an UPDATE fails with a permission error
pq: user chremoas does not have UPDATE privilege on relation alliances
What's the trick here?
root@roach-dev-cockroachdb-public:26257/chremoas> SHOW GRANTS ON DATABASE chremoas;
  database_name |    schema_name     | grantee  | privilege_type
+---------------+--------------------+----------+----------------+
  chremoas      | crdb_internal      | admin    | ALL
  chremoas      | crdb_internal      | chremoas | ALL
  chremoas      | crdb_internal      | root     | ALL
  chremoas      | information_schema | admin    | ALL
  chremoas      | information_schema | chremoas | ALL
  chremoas      | information_schema | root     | ALL
  chremoas      | pg_catalog         | admin    | ALL
  chremoas      | pg_catalog         | chremoas | ALL
  chremoas      | pg_catalog         | root     | ALL
  chremoas      | public             | admin    | ALL
  chremoas      | public             | chremoas | ALL
  chremoas      | public             | root     | ALL
(12 rows)
why does it do that?
Brian Hechinger
@bhechinger
Eh, I just granted to all tables. I guess that's the way to do it then?
kena
@knz
@bhechinger: cockroachdb does not currently propagate database-level permissions to existing tables automatically
so yes what you did is the recommended course of action
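(For later readers: granting on every existing table can be done in one statement with the table wildcard. Database and user names are taken from the chat above; if in doubt, check the GRANT docs for your version.)

```sql
-- Grant on all existing tables in the database in one shot;
-- tables created afterwards pick up the database-level grant.
GRANT ALL ON TABLE chremoas.* TO chremoas;
```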
Brian Hechinger
@bhechinger
Yay I did it right! 😁
So new tables get those perms though?
kena
@knz
@bhechinger: yes
Brian Hechinger
@bhechinger
Ok, perfect
Dolf Schimmel
@Freeaqingme
Where can one find the distributed tests?
kena
@knz
@Freeaqingme: which distributed tests?
there are many
Dolf Schimmel
@Freeaqingme
@knz good question. I realize there's the Jepsen repo, but I was just trying to get an idea of how crdb does its distributed tests. Do you have any pointers to where I can find those? I'm hoping to find a spot where a couple of nodes are spawned up, then queries are performed on N nodes, after which the consistency is checked
kena
@knz
nearly all the correctness tests in cockroachdb use at least 3 nodes
in fact, it's hard to find tests that only use 1 or 2
and there's no such thing as "checking the consistency after the test". the consistency is checked all the time, for every operation
maybe you can use git grep TestClusterArgs to find some tests already
also look at the roachtest directory
Dolf Schimmel
@Freeaqingme
Awesome. That's helpful. Thanks!