Pedro Teixeira
@pgte
as I understood the paper, compacting is just removing unnecessary log entries
when the follower requires entries that I don't have, I stream the entire state machine to it
and I may also be confused, I've read the paper some 10 times :P
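The compaction scheme Pedro describes can be sketched in a few lines. This is an illustrative model only, with hypothetical names, not skiff's actual code: entries already folded into the state machine are dropped from the log, and a follower that needs entries older than the compacted log receives the whole state machine instead (InstallSnapshot in the Raft paper).

```javascript
// Hypothetical sketch: in-memory state machine, plain array log.
class Node {
  constructor() {
    this.log = [];              // entries: { index, term, command }
    this.state = {};            // the applied state machine
    this.lastIncludedIndex = 0; // highest index folded into the snapshot
  }

  // Compacting: drop entries already applied to the state machine.
  compact(upToIndex) {
    this.log = this.log.filter((e) => e.index > upToIndex);
    this.lastIncludedIndex = upToIndex;
  }

  // When a follower needs entries we no longer have, ship the
  // entire state machine instead of log entries.
  snapshot() {
    return {
      lastIncludedIndex: this.lastIncludedIndex,
      state: JSON.parse(JSON.stringify(this.state)), // deep copy
    };
  }

  installSnapshot(snap) {
    this.state = snap.state;
    this.log = [];
    this.lastIncludedIndex = snap.lastIncludedIndex;
  }
}
```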
Peter Johnson
@missinglink
who what when? did someone say my name?
Matteo Collina
@mcollina
I need to check on the paper, I was going for a completely different impl
Pedro Teixeira
@pgte
yeah, we still have time to change direction if you think this is wrong
Matteo Collina
@mcollina
I'll go through the paper and check if this makes sense
Pedro Teixeira
@pgte
:+1:
Peter Johnson
@missinglink
right, just read above: you CAN do streaming protocol buffer extraction, and it works well and is fast.
I'm not familiar with your use case, but msgpack will probably be simpler to implement and better if you're not using a fixed schema.
either way, it's simply a layer on top of your transport, so you should be able to switch it out seamlessly
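Peter's point about the codec being a swappable layer could look like this (names are hypothetical; only a JSON codec is shown, but a msgpack codec would expose the same encode/decode pair, so the transport never needs to change):

```javascript
// A codec is just an encode/decode pair that the transport treats
// as opaque. Swapping JSON for msgpack means swapping this object.
const jsonCodec = {
  encode: (msg) => Buffer.from(JSON.stringify(msg)),
  decode: (buf) => JSON.parse(buf.toString()),
};

// The transport only ever sees Buffers, so codecs swap seamlessly.
function makeTransport(codec, send) {
  return {
    write: (msg) => send(codec.encode(msg)),
    read: (buf) => codec.decode(buf),
  };
}
```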
Matteo Collina
@mcollina
@pgte how big can the state machine representation become?
I was implementing it by leaving the state machine completely up to the app
Pedro Teixeira
@pgte
yeah, completely depends on the app
I'm leaving the application of the log entry up to the app (the persistence layer implementor)
Pedro Teixeira
@pgte
given that the state machine can be implemented in memory (as it appears to be in the raft paper) or completely persisted
that way we can use raft to do master-slave replication and high-availability over a database
@mcollina ^^
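A minimal sketch of what "leaving the application of the log entry up to the app" could mean, with a hypothetical `applyEntry` callback and an in-memory Map as the state machine; a persisted implementation would apply the same entries to a database instead, which is what makes the master-slave replication idea possible:

```javascript
// Hypothetical persistence-layer interface: committed entries are
// handed to the app, which owns the state machine entirely.
const inMemoryStateMachine = {
  state: new Map(),
  // Called once per committed log entry; the shape of `entry`
  // is entirely app-defined.
  applyEntry(entry, done) {
    if (entry.type === 'put') this.state.set(entry.key, entry.value);
    if (entry.type === 'del') this.state.delete(entry.key);
    done();
  },
};
```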
Matteo Collina
@mcollina
@pgte I had a look at your skiff-level. It has a major weakness: it cannot handle buffers.
also the transport thing
plus, these are going to be slow
because they contain the full log.
(even compacted)
Pedro Teixeira
@pgte
@mcollina you think the logs should always be persisted, one by one?
@mcollina what do you mean by "cannot handle buffers"? what part of the api you think should support buffers?
(sorry about the late response, gitter didn't email me as usual)
Pedro Teixeira
@pgte
@mcollina about the logs, do you have an idea of what would be a good api for the persistence layer? handling the logs and compacting may get tricky...
@mcollina I'll stop development while we discuss this, want to get this tuned
Pedro Teixeira
@pgte
@mcollina just a little more insight:
a node needs to save all the state atomically right before replying to a request
this includes a lot of data, including the log
I think we can make the log read access asynchronous, but saving it needs to be atomic with all the rest of the state
just saying that: maybe we don't need to change the interface, but the skiff-level implementation
Pedro Teixeira
@pgte
... by treating the log as a special case
Matteo Collina
@mcollina
regarding buffers: you are using JSON both as a transport format and to persist data. If you want a Level instance that stores buffers as values, you cannot use JSON for both the transport and persistence.
I think that handling it inside skiff-level can do the trick
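The buffer problem is easy to demonstrate in Node: a Buffer does not survive a JSON round trip, so a JSON-based transport or persistence layer silently turns binary values into plain objects:

```javascript
// A Buffer serializes via its toJSON() and comes back as a plain
// object, not a Buffer.
const buf = Buffer.from([0xde, 0xad]);
const roundTripped = JSON.parse(JSON.stringify(buf));

console.log(Buffer.isBuffer(buf));          // true
console.log(Buffer.isBuffer(roundTripped)); // false
console.log(roundTripped);                  // { type: 'Buffer', data: [ 222, 173 ] }
```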

You need:

  1. a generic place to store all the other meta, which you already have
  2. a sublevel just for the log

Then, when you need to save, you build a batch with:

  1. a del for all the log entries prior to the first one in the log
  2. a put for all the log entries after the last one inserted in the log

This means skiff-level needs to cache the latest meta stored.

What do you think?

Should I open a bug?
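The batch Matteo outlines might look like the following against a levelup-style `db.batch` API (the function name, key scheme, and cached-log argument are all assumptions for illustration, not skiff-level's actual code):

```javascript
// Build one atomic batch: dels for compacted-away entries, puts for
// newly appended entries, and the meta, all written together.
function persistState(db, meta, cachedLog, newLog, cb) {
  const ops = [];
  // 1. a del for every persisted entry prior to the first one
  //    still in the log (i.e. entries removed by compaction)
  const firstKept = newLog.length ? newLog[0].index : Infinity;
  for (const e of cachedLog) {
    if (e.index < firstKept) {
      ops.push({ type: 'del', key: 'log:' + e.index });
    }
  }
  // 2. a put for every entry after the last one previously inserted
  const lastCached = cachedLog.length
    ? cachedLog[cachedLog.length - 1].index
    : 0;
  for (const e of newLog) {
    if (e.index > lastCached) {
      ops.push({ type: 'put', key: 'log:' + e.index, value: e });
    }
  }
  // the meta rides in the same atomic batch
  ops.push({ type: 'put', key: 'meta', value: meta });
  db.batch(ops, cb);
}
```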
Pedro Teixeira
@pgte
@mcollina sounds good, yes please! :)
@mcollina re buffers: ah yeah, binary data in the protocol, didn't think about it. if you can, also open a bug, I'll fix it :)
Matteo Collina
@mcollina
done :)
Pedro Teixeira
@pgte
@mcollina cool, thanks!
Matteo Collina
@mcollina
@pgte any bugs on msgpack5? :)
Pedro Teixeira
@pgte
@mcollina I'm experiencing a problem with the streaming decoder:
can you take a look? mcollina/msgpack5#2
Pedro Teixeira
@pgte
@mcollina plz check the reply :)
Matteo Collina
@mcollina
@pgte bug noticed :), I hope I explained everything
(how to fix it)
Pedro Teixeira
@pgte
ha, I see it, I'll give it a try, thanks @mcollina !
Matteo Collina
@mcollina
the fun thing is that I did not notice it :)
Oleksandr Nikitin
@wizzard0
um, this chat looks a bit abandoned :) does anybody know if the issues in skiff-algorithm are still relevant?