    Patryk Kuźmicz
    @jamzed
    ERROR: apport (pid 27623) Sat Apr 18 17:10:24 2020: host pid 16859 crashed in a separate mount namespace, ignoring
    This is KeyDB 5.3.1 with some patches from 5.3.2.
    Patryk Kuźmicz
    @jamzed
    systemd only reported this:
    Apr 18 17:10:25 xxxxx systemd[1]: keydb-server.service: Main process exited, code=killed, status=6/ABRT
    Apr 18 17:10:25 xxxxx systemd[1]: keydb-server.service: Failed with result 'signal'.
    Apr 18 17:10:45 xxxxx systemd[1]: keydb-server.service: Service hold-off time over, scheduling restart.
    Apr 18 17:10:45 xxxxx systemd[1]: keydb-server.service: Scheduled restart job, restart counter is at 1.
    smartattack
    @smartattack
    I ran it from a tmux session and found this:
    -bash-4.2$ /usr/bin/keydb-server /etc/keydb/keydb-tty.conf
    keydb-server: fastlock.cpp:436: void fastlock_free(fastlock*): Assertion `(lock->m_ticket.m_active == lock->m_ticket.m_avail) || (lock->m_pidOwner == gettid() && (lock->m_ticket.m_active == lock->m_ticket.m_avail-1))' failed.
    smartattack
    @smartattack
    Please let me know if that helps or if I can provide more information
    smartattack
    @smartattack
    Added as issue #170 on GitHub
    Enea
    @eni9889
    I'm having the same issue
    Any updates on this?
    @smartattack
    @jamzed
    smartattack
    @smartattack
    not yet, no
    Guillaume Lakano
    @lakano
    Hello there! I'm interested in KeyDB, it seems amazing! :)
    I would like to know whether the "multiple master" feature works with servers on different continents. Imagine 3 servers, one each in America, Europe, and Asia, all masters and all synced with each other. Is that possible? Will the network latency needed to sync between them slow down KeyDB's performance for local usage (e.g. compared to a single isolated, unsynced KeyDB server on each continent)?
    Thanks for your help! :-)
    smartattack
    @smartattack
    @eni9889 @jamzed Looks like there was an update on the case this weekend and the issue is fixed. I'm building the unstable branch and will cut over today.
    varneyo
    @varneyo
    Morning, apologies in advance for the basic question as I'm new to KeyDB/Redis. I am looking into the new FLASH feature on the Pro server. I have followed the video on YouTube on how to set up the server config. I have run the same code with FLASH enabled and without it, and I am seeing very slow writes. I'm writing to a stream. Without FLASH enabled I can write to the stream in 0.1-0.2 milliseconds consistently; with it enabled, it starts out at 0.3 milliseconds for the first few messages but then very quickly climbs up to 50 milliseconds after a few thousand messages have been pushed into the stream. I have an NVMe SSD device. What are the expectations in terms of writes when using FLASH?
    varneyo
    @varneyo
    Any advice on where to get questions answered? Apologies if my question has been posted in the wrong place/forum.
    Guillaume Lakano
    @lakano
    @varneyo You can try the forum, but my company is also waiting for answers to questions we posted here and on the forum 10 days ago. We think the KeyDB team is really small and doesn't have a lot of time to answer quickly, so we need to be patient... (BTW, that's the reason my company ultimately declined to pay for the Pro version of KeyDB: we need to be sure KeyDB support is responsive. I'm pretty sure this will get better in the future, I see a lot of potential in KeyDB :-))
    ustcr7
    @ustcr7
    char *clientip = new char[128];
    strcpy(clientip, "SERVER START");
    printf("%s", clientip);

    The compile fails when I add these three lines of code to server.cpp.

    The error is: "multiple definition of `operator new(unsigned long)'". Thanks! @JohnSully

    Guillaume Lakano
    @lakano
    Hello!
    Can anyone advise us on the correct way to start KeyDB on reboot, please?
    Sayan
    @ohsayan
    What platform are you on? @lakano
    Guillaume Lakano
    @lakano
    Hi @sntdevco! We are on Ubuntu 20.04
    Sayan
    @ohsayan
    You can create a systemd service
    That starts on boot
    Try this,
    Create a file called keydb.service and then add the following contents:
    [Unit]
    Description=KeyDB Server
    After=network.target
    StartLimitIntervalSec=0
    
    [Service]
    Type=simple
    Restart=always
    RestartSec=1
    User=root
    ExecStart=/usr/local/bin/keydb-server
    
    [Install]
    WantedBy=multi-user.target
    Be sure to replace the ExecStart with the location of your KeyDB binary
    And then save it to /etc/systemd/system
    Then run systemctl enable keydb and it will be automatically started on boot
    @lakano Tell me if it works out
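    (For completeness, the usual sequence after saving the unit file is roughly the following, assuming the unit is named keydb.service as above.)
    # reload systemd so it picks up the new unit, then enable it at boot and start it now
    sudo systemctl daemon-reload
    sudo systemctl enable --now keydb
    # check that the server came up
    sudo systemctl status keydb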
    apavlis
    @apavlis

    Good evening. I was recently attempting to migrate our KeyDB swarm instances from old prod servers to newly repurposed staging servers (running in Docker Swarm using the 'latest' KeyDB container) and ran into something odd.

    We have a multi-master active-replica setup that runs across two sites via WAN. The same set of configuration files (Ansible) was used to deploy our staging env to both the old servers and the new servers we migrated to.

    When I bring up one site's multi-master active-replica, I receive the following in the logs:

    redis.1@HostA    | 1:20:S 11 Jun 2020 03:24:53.438 * MASTER <-> REPLICA sync started
    redis.1@HostA    | 1:20:S 11 Jun 2020 03:24:53.477 # Error condition on socket for SYNC: Operation now in progress

    When I bring up the other site's multi-master active-replica service, Host A's logs change to:

    redis.1@HostA    | 1:20:S 11 Jun 2020 03:25:07.506 * MASTER <-> REPLICA sync started
    redis.1@HostA    | 1:20:S 11 Jun 2020 03:25:07.546 * Non blocking connect for SYNC fired the event.
    redis.1@HostA    | 1:20:S 11 Jun 2020 03:25:07.585 * Master replied to PING, replication can continue...
    redis.1@HostA    | 1:20:S 11 Jun 2020 03:25:07.625 # Unable to AUTH to MASTER: -WRONGPASS invalid username-password pair
    redis.1@HostA    | 1:20:S 11 Jun 2020 03:25:08.508 * Connecting to MASTER HostB:6379
    redis.1@HostA    | 1:20:S 11 Jun 2020 03:25:08.510 * MASTER <-> REPLICA sync started
    redis.1@HostA    | 1:20:S 11 Jun 2020 03:25:08.550 * Non blocking connect for SYNC fired the event.
    redis.1@HostA    | 1:20:S 11 Jun 2020 03:25:08.590 * Master replied to PING, replication can continue...
    redis.1@HostA    | 1:20:S 11 Jun 2020 03:25:08.630 # Unable to AUTH to MASTER: -WRONGPASS invalid username-password pair

    And HostB's logs show:

    redis.1@HostB    | 1:1:C 11 Jun 2020 03:04:20.177 * Notice: "active-replica yes" implies "replica-read-only no"
    ... (startup info too large for gitter) ...
    redis.1@HostB    | 1:1:C 11 Jun 2020 03:04:20.177 # oO0OoO0OoO0Oo KeyDB is starting oO0OoO0OoO0Oo
    redis.1@HostB    | 1:1:C 11 Jun 2020 03:04:20.177 # KeyDB version=6.0.9, bits=64, commit=4a3e1f3b, modified=0, pid=1, just started
    redis.1@HostB    | 1:1:C 11 Jun 2020 03:04:20.177 # Configuration loaded
    redis.1@HostB    | 1:1:S 11 Jun 2020 03:04:20.181 * Running mode=standalone, port=6379.
    redis.1@HostB    | 1:1:S 11 Jun 2020 03:04:20.181 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
    redis.1@HostB    | 1:1:S 11 Jun 2020 03:04:20.181 # Server initialized
    redis.1@HostB    | 1:1:S 11 Jun 2020 03:04:20.181 # WARNING overcommit_memory is set to 0! ...
    redis.1@HostB    | 1:19:S 11 Jun 2020 03:04:20.182   Thread 0 alive.
    redis.1@HostB    | 1:19:S 11 Jun 2020 03:04:20.183 * Connecting to MASTER HostA:6379
    redis.1@HostB    | 1:19:S 11 Jun 2020 03:04:20.237 * MASTER <-> REPLICA sync started
    redis.1@HostB    | 1:19:S 11 Jun 2020 03:04:20.278 * Non blocking connect for SYNC fired the event.
    redis.1@HostB    | 1:19:S 11 Jun 2020 03:04:20.318 * Master replied to PING, replication can continue...
    redis.1@HostB    | 1:19:S 11 Jun 2020 03:04:20.358 # Unable to AUTH to MASTER: -WRONGPASS invalid username-password pair

    Does anyone know why this condition would exist, or if this is a version update bug/issue? Would pushing out Docker v5.3.3 resolve this? We're deploying via Ansible with:
    eqalpha/keydb keydb-server --multi-master yes --active-replica yes --replicaof "{{ keydb1 }}" 6379 --replicaof "{{ keydb2 }}" 6379 --replicaof "{{ keydb3 }}" 6379 --protected-mode no --requirepass "{{ redispass }}" --masterauth "{{ redismasterpass }}" --masteruser "{{ redismasteruser }}"

    This was running w/o issues when the staging env was on these servers, which were migrated to new servers 2 weeks ago w/o issues:
    KeyDB version=5.3.3, bits=64, commit=07cb1f45, modified=0, pid=1, just started
    Likewise, the prod env is still running w/o issues @ KeyDB version=5.3.0, RDB v5.3.3.
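    (A minimal sketch of how those auth settings have to line up between the two nodes; the user/password values here are hypothetical, not the poster's actual config. The -WRONGPASS reply means the masteruser/masterauth pair one node presents is not accepted as a valid user on the node it is replicating from.)
    # On the node being replicated FROM (e.g. HostB) — hypothetical values
    requirepass clientpass                  # password ordinary clients AUTH with
    user repluser on >replpass ~* +@all     # ACL user the other node logs in as

    # On the node replicating (e.g. HostA) — must match the user defined above
    masteruser  repluser
    masterauth  replpass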

    Sphere-tech
    @Sphere-tech

    Hello... Guys, I'm testing KeyDB at the moment...

    I have 3 nodes c5.2xlarge (4 cores/8 threads).
    I cannot see any performance improvement with server-threads > 4

    I've tried CPU binding, common Redis tunings, and so on. Here's what I have:

    6 KeyDB servers with 3 threads each are 2x faster than 3 KeyDB servers with 7 threads each on the same nodes. Is that to be expected? It's launched in Docker via EKS. Clients do not use pipelining at the moment. The size of each key's value is about 800 bytes.
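    (For reference, the keydb.conf knobs under discussion; the values below are only an illustration, not a recommendation.)
    # worker thread settings being compared above
    server-threads 3              # number of worker threads per keydb-server process
    server-thread-affinity true   # pin each worker thread to a dedicated CPU core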

    Guillaume Lakano
    @lakano
    @sntdevco Thanks for your help! How do you specify the keydb.conf file, please? And are we forced to run it as root?
    Sayan
    @ohsayan

    @sntdevco Thanks for your help! How do you specify the keydb.conf file, please? And are we forced to run it as root?

    That's up to you.
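    (A sketch of the common pattern, assuming a config at /etc/keydb/keydb.conf and a dedicated keydb system user exists: like redis-server, keydb-server takes the config file path as its first argument, so only two lines of the unit above need changing.)
    # in the [Service] section of keydb.service — hypothetical path and user
    User=keydb
    ExecStart=/usr/local/bin/keydb-server /etc/keydb/keydb.conf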

    nawfalhasan
    @nawfalhasan

    Is there a fix for this: antirez/redis#6048 in KeyDB?

    Short summary: Does KeyDB differentiate between an empty set stored vs no key at all?

    I was wondering whether KeyDB also mimics design decisions like this from Redis, which IMO might drag KeyDB down (à la TypeScript being a superset of JS).
    Jesse Haka
    @zetaab
    Is there a way to use DNS names in the --replicaof parameter?
    The problem that exists in Redis is that people cannot really build HA Redis inside Kubernetes, because the pod IP addresses change all the time. The same applies to KeyDB, as I see it.
    So for instance, if I initialize a two-member cluster where pod1 has replicaof 10.0.0.2 and pod2 has replicaof 10.0.0.1, what will happen to the cluster if pod1 is restarted and its IP address changes?
    Is the cluster broken after that?
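    (One hedged sketch: replicaof does accept a hostname rather than a raw IP, so in Kubernetes the usual workaround is to point it at a stable DNS name, e.g. the per-pod names a StatefulSet gets through a headless Service. All names below are hypothetical.)
    # keydb.conf on pod keydb-0 — assumes a StatefulSet "keydb" with a headless Service "keydb-headless"
    replicaof keydb-1.keydb-headless.default.svc.cluster.local 6379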
    Ayyappa
    @ayyappa99_twitter
    Hi
    Is KeyDB active? I need some help regarding KeyDB's persistence mode.
    Greg TAPPERO
    @coulix

    Hello There,

    With make MALLOC=memkind on an old Ubuntu kernel (3.13.0.24) I get this FICLONE error.

        CC t_stream.o
        CC listpack.o
        CC localtime.o
        CC acl.o
        CC storage.o
    storage.cpp: In function ‘int forkFile()’:
    storage.cpp:125:20: error: ‘FICLONE’ was not declared in this scope
         if (ioctl(fdT, FICLONE, memkind_fd(mkdisk)) == -1)
                        ^~~~~~~
    make[1]: *** [storage.o] Error 1
    make[1]: Leaving directory `/home/greg/keydb/KeyDB/src'
    make: *** [all] Error 2
    wentgung
    @wentgung
    Hi, I'm having an issue where there are sudden spikes of client connections at irregular times; connections spike to up to 5 times the normal number of connected clients. During that period, the total number of keys drops and my CPU utilisation spikes. This causes my app servers to get a "could not get a resource from the pool, connection timed out" error. I understand that there's a limit of 10k concurrent connected clients, but even the spike did not reach that. Any insights on this?
    ray wu
    @yxwu
    With Node.js, I am using node-redis with KeyDB. However, how can I use a command such as EXPIREMEMBER, which is not available in Redis (so node-redis may not support it)?
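    (A minimal sketch, assuming the node_redis v3 API and made-up key/member names: node-redis has no dedicated method for KeyDB-only commands, but its generic send_command call can issue them as raw commands, provided the syntax is right — roughly EXPIREMEMBER key subkey seconds, per the KeyDB docs.)
    const redis = require("redis");
    const client = redis.createClient({ host: "127.0.0.1", port: 6379 });

    // EXPIREMEMBER is KeyDB-specific, so we send it as a raw command.
    // Sets a 60-second TTL on a single member of the set "myset".
    client.send_command("EXPIREMEMBER", ["myset", "member1", "60"], (err, reply) => {
      if (err) console.error(err);
      else console.log(reply); // expected to mirror EXPIRE: 1 if the timeout was set
    });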
    Kavin Chauhan
    @kavinchauhan
    Can we have LPOS command support in KeyDB?
    Greatsamps
    @Greatsamps
    Hey gang, I am having some issues with active-active replication (2 nodes). I have set it up as per the docs here https://docs.keydb.dev/docs/active-rep/, but when I point my client to either of the instances, I get a "no master found" exception.
    Am I missing something?
    cinnay
    @cinnay
    Hi everyone :) Are there any known issues regarding endless propagation when using XREADGROUP in an active replication setup? I am using the enapter/keydb Helm chart in its default configuration. As soon as my code tries to XREADGROUP from a non-empty stream, the whole KeyDB cluster goes to 100% CPU load, processing "XCLAIM", "RREPLAY", "XCLAIM", "RREPLAY", "XCLAIM", "RREPLAY", ... commands over and over. Any ideas?
    Nguyen Tan Vy
    @v0112358
    @Greatsamps What Redis client do you use? I guess you are using Redisson?