    Negash
    @Negashev
    John Sully
    @JohnSully
    I saw that, looks like it's another example of this bug: JohnSully/KeyDB#25
    I thought I'd got them all but I guess this one slipped through
    hmm or maybe not, interesting
    John Sully
    @JohnSully
    Yeah, it's related. I just committed a fix. Thanks for pointing that one out.
    Is this in any way related to the GitLab issues you pointed out earlier?
    Negash
    @Negashev
    @JohnSully No, I tested with the other application :) When will the fix be on Docker Hub?
    John Sully
    @JohnSully
    I want to fix issue #36 and your gitlab bug before publishing a new image
    Actually it would be cool to make an "unstable" docker image to reduce the lag time
    Negash
    @Negashev
    The latest tag? On Docker Hub?
    John Sully
    @JohnSully
    You can publish under different labels, I might set up something automated to generate nightly builds
    Negash
    @Negashev
    @JohnSully unstable is ready! :)
    Negash
    @Negashev
    @JohnSully Unstable is not stable :)
    127.0.0.1:6379> ZRANGEBYSCORE myzset (1 (2
    (nil)
    127.0.0.1:6379>
    John Sully
    @JohnSully
    @Negashev are you using Docker or did you build locally? If Docker, are you specifying the unstable tag when launching?
    Negash
    @Negashev
    I pulled eqalpha/keydb:unstable
    John Sully
    @JohnSully
    run with: docker run eqalpha/keydb:unstable
    otherwise it will run the stable release.
    Negash
    @Negashev
    Yes, I pulled it just now and the error still exists,
    on my work machine:
    docker run -it --rm eqalpha/keydb:unstable
    Unable to find image 'eqalpha/keydb:unstable' locally
    unstable: Pulling from eqalpha/keydb
    6abc03819f3e: Already exists
    05731e63f211: Already exists
    0bd67c50d6be: Already exists
    37cae7174b55: Pull complete
    64f52c62ac6e: Pull complete
    32d9c96a200e: Pull complete
    Digest: sha256:81f032d5ba78ae5daef93b6ee3fb02f4a480556ce265d3e8671e7282cbd98f69
    Status: Downloaded newer image for eqalpha/keydb:unstable
    1:C 20 May 2019 20:00:54.226 # oO0OoO0OoO0Oo KeyDB is starting oO0OoO0OoO0Oo
    1:C 20 May 2019 20:00:54.226 # KeyDB version=0.9.5, bits=64, commit=4333615b, modified=0, pid=1, just started
    1:C 20 May 2019 20:00:54.226 # Configuration loaded
    KeyDB 0.9.5 (4333615b/0) 64 bit
    Is this the correct version?
    4333615b on unstable
    John Sully
    @JohnSully
    @Negashev the docker image was built off an incorrect branch. It will be fixed soon.
    John Sully
    @JohnSully
    @Negashev should be ready now
    unstable is version 0.0.0
    Negash
    @Negashev

    Is this normal?
    redis

    127.0.0.1:6379> ZRANGEBYSCORE myzset (1 (2
    (empty list or set)

    keydb

    127.0.0.1:6379>  ZRANGEBYSCORE myzset (1 (2
    (empty array)
    John Sully
    @JohnSully
    I think just the text changed in the CLI, let me double-check.
    Yeah, confirmed: the newer CLI talking to Redis 5.0.3 says "empty array" now,
    but the actual data sent over the wire is the same as it was in Redis 5.
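    The exclusive-bound semantics being tested above can be sketched in Python. This is a toy model of how ZRANGEBYSCORE interprets the `(` prefix on its min/max arguments, not KeyDB's actual C implementation:

    ```python
    # Minimal model of ZRANGEBYSCORE's score-interval matching.
    # "(1" is an exclusive bound; a bare "1" is inclusive.
    def zrangebyscore(zset, min_spec, max_spec):
        def parse(spec):
            if spec.startswith("("):
                return float(spec[1:]), True   # exclusive bound
            return float(spec), False          # inclusive bound

        lo, lo_excl = parse(min_spec)
        hi, hi_excl = parse(max_spec)
        return [m for m, s in sorted(zset.items(), key=lambda kv: kv[1])
                if (s > lo if lo_excl else s >= lo)
                and (s < hi if hi_excl else s <= hi)]

    myzset = {"a": 1, "b": 2, "c": 3}
    print(zrangebyscore(myzset, "(1", "(2"))  # [] -- open interval (1, 2) matches nothing here
    print(zrangebyscore(myzset, "1", "2"))    # ['a', 'b']
    ```

    With the fix in place, the empty result above is what both Redis and KeyDB report; only the CLI's rendering of it ("empty list or set" vs. "empty array") differs.
    
    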
    Negash
    @Negashev
    I'm testing
    Negash
    @Negashev
    @JohnSully Great, it looks good. The GitLab problem isn't resolved yet, but my application now works like clockwork.
    John Sully
    @JohnSully
    I'm going to start looking at the gitlab issue now. I did the ZRANGE one first because it was a lot easier :)
    John Sully
    @JohnSully
    @Negashev good news! Got the gitlab issue fixed. the unstable docker image is being updated
    It's the same sort of issue as with ZRANGEBYSCORE: a regression that actually predates KeyDB, from this change in Redis: antirez/redis@317f8b9
    John Sully
    @JohnSully
    hmm docker doesn't seem to be updating
    John Sully
    @JohnSully
    unstable is updated now
    Negash
    @Negashev
    @JohnSully Amazing, Great job!
    PLANTROON
    @plantroon
    Is it expected that replication breaks after running keydb-benchmark on an active-active setup?
    John Sully
    @JohnSully
    This was fixed with this change: JohnSully/KeyDB@3d0de94
    if you pull the docker image again it should pick up the fix.
    @plantroon what was happening is that under high load things get queued, but the queued replication commands weren't being actively flushed out.
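    The failure mode described here can be sketched as a toy model in Python (this is illustrative only, not KeyDB's actual code): each replica link buffers outgoing commands, and unless the buffer is flushed after a burst of load, the commands sit queued and replication appears broken.

    ```python
    # Toy model of a per-replica outgoing command buffer.
    # Bug: commands are queued but never flushed under sustained load.
    # Fix: flush the queued commands once the burst is processed.
    class ReplicaLink:
        def __init__(self):
            self.buffer = []   # replication commands queued for the peer
            self.sent = []     # commands that actually reached the peer

        def queue(self, cmd):
            self.buffer.append(cmd)

        def flush(self):
            self.sent.extend(self.buffer)
            self.buffer.clear()

    link = ReplicaLink()
    for i in range(1000):          # simulate a high-load burst (e.g. a benchmark run)
        link.queue(f"SET key{i} {i}")
    link.flush()                   # without this step the peer never sees the burst
    print(len(link.buffer), len(link.sent))  # 0 1000
    ```
    
    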
    PLANTROON
    @plantroon
    I will try updating then, thanks.
    John Sully
    @JohnSully
    @plantroon also, I really recommend memtier for benchmarking. keydb-benchmark is really just there because it's needed in the tests; the numbers aren't as accurate.
    PLANTROON
    @plantroon
    I was trying to test if the replication survives the benchmark, I wasn't running it for the results :D
    John Sully
    @JohnSully
    Fair enough :) It's a good test; I wish I had done it a lot earlier.
    Negash
    @Negashev
    @JohnSully Is there any progress on saving master-master node information?
    John Sully
    @JohnSully
    Try out "config rewrite" (and related commands) and see if it behaves how you want. I just tested, and it does write out the multiple "replicaof" lines.
    I think I did work on this a while back, but my memory is a bit fuzzy; it's been a while.
    But I think, @Negashev, the config rewrite stuff is what we talked about, right?
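    For reference, a rewritten config on a multi-master node might contain something along these lines; the hostnames below are illustrative, and `active-replica` is assumed here as the KeyDB directive enabling active replication:

    ```conf
    # keydb.conf fragment after CONFIG REWRITE on a multi-master node
    # (hostnames are hypothetical examples)
    active-replica yes
    replicaof node-a.example.com 6379
    replicaof node-b.example.com 6379
    ```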
    Negash
    @Negashev
    Maybe)
    John Sully
    @JohnSully
    If it's not working how you want, please open an issue with the feature request. Sometimes I forget things we chat about here if it takes too long to get to them; the issue helps track it.