
lni
@lni
@kangzhanlei lastIndex is the index of the last entry in the MsgReplicate from the leader. If you read tryAppend's implementation and the raft paper, an entry with index=lastIndex will always exist in the follower's log when tryAppend returns; this has nothing to do with shrinking. Shrinking happens in the above tryAppend() and inmem.merge() just to make sure that we don't overwrite an existing entry with a new one that has a matched index/term.
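The rule lni describes can be shown with a small self-contained sketch. The Entry type and merge function below are hypothetical illustrations of the raft append rule, not dragonboat's actual inmem.merge code:

```go
package main

import "fmt"

// Entry is a simplified raft log entry (hypothetical, for illustration only).
type Entry struct {
	Index uint64
	Term  uint64
}

// merge appends incoming entries to log, truncating any conflicting suffix.
// An existing entry with matching index AND term is kept as-is: never
// overwrite an entry with an identical index/term pair.
func merge(log, incoming []Entry) []Entry {
	for i, e := range incoming {
		pos := int(e.Index) - 1 // assume 1-based indices, no compaction
		if pos < len(log) {
			if log[pos].Term == e.Term {
				continue // matched index/term: keep the existing entry
			}
			log = log[:pos] // term conflict: drop the stale suffix
		}
		log = append(log, incoming[i:]...)
		break
	}
	return log
}

func main() {
	log := []Entry{{1, 1}, {2, 1}, {3, 2}}
	// entries 2..4 arrive; entry 3 now carries a newer term
	out := merge(log, []Entry{{2, 1}, {3, 3}, {4, 3}})
	fmt.Println(out) // [{1 1} {2 1} {3 3} {4 3}]
}
```

A matched index/term pair proves the two entries are identical, so keeping the existing one is safe; a term mismatch means the existing suffix is stale and must be truncated.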
kzl
@kangzhanlei
@lni, I see, thanks
xkeyideal
@xkeyideal
@lni Please see issue #98: how can I handle appliedIndex > ents[last].Index without a panic? I compact the entries, but the on-disk state machine still needs to return entries, and the compaction causes some errors.
lni
@lni
@xkeyideal I don't understand what you actually want to achieve here. Could you please open a new issue and provide details on what the goal is and the procedure to reproduce the panic you mentioned?
wclssdn
@wclssdn
On Windows two files are missing: github.com\lni\dragonboat\v3@v3.1.0\internal\utils\fileutil\flock_winapi.go and flock_windows.go. I copied these files from github.com\lni\goutils@v1.0.3\fileutil\flock_winapi.go and flock_windows.go, and it works.
@UAvinash7
Hi @lni
I was exploring the dragonboat multi-group library and came across an interesting question. I have a leader node and 4 follower nodes. The leader proposes a transaction as valid (i.e. true) and follower F1 also proposes it as valid, but followers F3 and F4 propose it as invalid (i.e. false). During the same time, follower F2 goes offline (for any reason: malicious behavior, or a hardware/power issue). In this case, what is the fate of the proposed transaction? Will it be accepted and appended to the ledger, or rejected since the voting ratio is 50:50? And how can this situation be overcome?
Anish Bhusal
@anisbhsl
Hi, new user of dragonboat here. I have been experimenting with dragonboat for a while now. It's an amazing library. But I wanted to use a custom LogDB other than the current implementations in LevelDB/RocksDB. NodeHostConfig seems to provide a LogDBFactory option to supply our custom factory method for the LogDB, but I haven't been successful so far. I am trying to use BadgerDB v2.x as the LogDB. Has anyone tried this before? I would be thankful if someone could provide an example of how this can be done. Thank you!
jkassismz
@jkassismz
happy holidays!!! i'm curious why multiple active rafts degrade performance. the docs say this reduces batching, but i would think that multiple rafts would reduce blocking as there are now multiple queues?
lni
@lni
@anisbhsl you need to implement your own raftio.ILogDB and provide a factory method that returns an instance of your raftio.ILogDB. internal/logdb/sharded_rdb.go contains the default ILogDB implementation used by dragonboat. note that the default one uses multiple shards and supports rocksdb/leveldb/pebble; you don't have to do any of that in your custom LogDB.
@jkassismz a large number of active raft nodes degrades performance. if you have, say, 32-128 active nodes, you shouldn't see much degradation.
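The factory wiring lni mentions follows a plain Go pattern: a constructor returns a value satisfying the log-storage interface, and that constructor is handed to the NodeHost configuration. The LogDB interface below is a hypothetical, heavily cut-down stand-in for raftio.ILogDB (the real interface has many more methods):

```go
package main

import "fmt"

// LogDB is a hypothetical stand-in for an interface like raftio.ILogDB.
type LogDB interface {
	Name() string
	SaveEntry(index uint64, data []byte) error
}

// memLogDB is a toy in-memory implementation.
type memLogDB struct{ entries map[uint64][]byte }

func (m *memLogDB) Name() string { return "mem-logdb" }
func (m *memLogDB) SaveEntry(index uint64, data []byte) error {
	m.entries[index] = data
	return nil
}

// NewLogDB is the factory method: conceptually, this is the kind of function
// you would hand to the NodeHost configuration so the library constructs your
// LogDB from the provided directories.
func NewLogDB(dirs []string) (LogDB, error) {
	return &memLogDB{entries: make(map[uint64][]byte)}, nil
}

func main() {
	db, _ := NewLogDB([]string{"/tmp/logdb"})
	db.SaveEntry(1, []byte("hello"))
	fmt.Println(db.Name()) // mem-logdb
}
```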
Mandeep Khatry
@mandeepkhatry
@lni What I understood after going through the dragonboat library is that one needs to implement a custom db through IKVStore. Functions such as IterateValue, GetValue etc. are specific to a particular db. So one needs to implement a custom factory method in plugins and a custom IKVStore in logdb/kv. The ILogDB implementation seems to be general and not db-specific. Am I right?
Mandeep Khatry
@mandeepkhatry
@lni Currently I have implemented my own badgerdb factory method inside the plugins folder, and I have also implemented IKVStore for badger inside internal/logdb/kv, just like the ones implemented for leveldb, pebble and rocksdb in the same folder. Isn't that sufficient to make sure the badgerdb implementation for the logdb is correct, given that every other Go file, including sharded_rdb.go, uses the IKVStore functions such as GetWriteBatch, CommitWriteBatch etc.? Isn't the IKVStore interface the only interface required for implementing another db as the logdb, since it's the only one that is specific to each db?
lni
@lni
@mandeepkhatry as mentioned above, users need to implement raftio.ILogDB to build a custom LogDB implementation. please have a look at issue #61 for reasons why IKVStore is not the answer.
jkassismz
@jkassismz
@lni i'm still a little confused... i would expect more nodes to increase network traffic, but what about multiple rafts on the same nodes? this gives multiple parallel channels, yes?
lni
@lni
more raft instances on the same node (machine) lead to more traffic; it is also more difficult to batch such traffic/operations, so the total throughput will decrease.
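The batching point can be illustrated with a toy sketch (hypothetical types, not dragonboat's transport code): outgoing messages are coalesced per destination machine so one network write carries many raft messages.

```go
package main

import "fmt"

// Msg is a hypothetical raft message destined for a remote node.
type Msg struct {
	To      string // target machine address
	GroupID uint64 // raft group the message belongs to
}

// batch coalesces outgoing messages per destination so one network write
// can carry many raft messages at once.
func batch(msgs []Msg) map[string][]Msg {
	out := make(map[string][]Msg)
	for _, m := range msgs {
		out[m.To] = append(out[m.To], m)
	}
	return out
}

func main() {
	msgs := []Msg{{"n2", 1}, {"n2", 1}, {"n2", 2}, {"n3", 2}}
	b := batch(msgs)
	fmt.Println(len(b["n2"]), len(b["n3"])) // 3 1
}
```

With the same total load spread across many more active groups, each flush window holds fewer messages per group, so batches become smaller and less efficient, which is the throughput cost described above.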
Nob
@nobsu
How many raft groups are supported? 10000?

@lni
mmh
@myrfy001
@lni hi, I'd like to know whether it is safe to allow an observer to be removed from the cluster and then join again. I want to use dragonboat as data-sync middleware in a dynamic cluster; that is, I will set up a 3-node cluster as a stable datacenter, and other nodes will run as observers in k8s pods which only read the newest data from the datacenter. In this case, the addresses of the pods may change without control, and a new observer node in a pod may get the address of a previously dead node.
lni
@lni
@nobsu there is no hard limit on the number of supported raft groups. what kind of performance you can get when having 100k groups is a different story.
@myrfy001 once a node (observer or not) is removed, it is not allowed to re-join the raft group using the same nodeID. that being said, it is ok to add another observer with a different nodeID.
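The no-rejoin rule can be modelled as a tombstone set (a hypothetical sketch of the membership bookkeeping, not dragonboat's implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// membership tracks live and removed node IDs for one raft group. A removed
// ID is never reused, so rejoining requires a fresh ID, mirroring the rule
// described above.
type membership struct {
	live    map[uint64]bool
	removed map[uint64]bool
}

func newMembership() *membership {
	return &membership{live: map[uint64]bool{}, removed: map[uint64]bool{}}
}

func (m *membership) add(id uint64) error {
	if m.removed[id] {
		return errors.New("node ID was removed and cannot rejoin")
	}
	m.live[id] = true
	return nil
}

func (m *membership) remove(id uint64) {
	delete(m.live, id)
	m.removed[id] = true // tombstone: the ID is burned forever
}

func main() {
	m := newMembership()
	m.add(100)
	m.remove(100)
	fmt.Println(m.add(100)) // error: the ID cannot be reused
	fmt.Println(m.add(101)) // nil: a new observer with a new ID is fine
}
```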
Nob
@nobsu
@lni I want to use dragonboat to implement an application that processes commands in a single thread. It needs to send the processing results to downstream kafka in order, so how do I ensure that only the leader sends the results?
Nob
@nobsu
I got it
esonic
@esonic
@lni May I ask how to make a propose return success as soon as the raft log has been persisted, without waiting for the state machine's Update to finish? We'd like the apply process to run asynchronously in the background, so that writes can return success to the frontend faster.
Zhou Yicong
@Jackmrzhou
@lni hi, I'm new to dragonboat. I was confused when reading the "ondisk" example, where the state machine has to persist the latest applied index. Why is this necessary? Is there a scenario where stale entries go into the Update function? I mean entries that have already been processed by the framework.
lni
@lni
@Jackmrzhou the index is kept by the state machine so the Open() method can return it when the state machine is restored (e.g. after a crash). the returned index value is used to prevent already applied entries from being supplied to the Update method again.
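A toy sketch of the behaviour lni describes (hypothetical API, not dragonboat's actual IOnDiskStateMachine interface): the applied index is persisted together with the state, and Open() reports it so replay resumes after it.

```go
package main

import "fmt"

// diskSM is a toy on-disk state machine: it records the last applied index
// together with its data, and Open() reports that index so the framework
// never re-delivers already-applied entries.
type diskSM struct {
	appliedIndex uint64
	sum          int
}

// Open restores persisted state and returns the last applied index.
func (s *diskSM) Open() uint64 {
	return s.appliedIndex
}

// Update applies one entry and records its index; in a real on-disk state
// machine this index write must be atomic with the data write.
func (s *diskSM) Update(index uint64, value int) {
	s.sum += value
	s.appliedIndex = index
}

func main() {
	sm := &diskSM{}
	sm.Update(1, 10)
	sm.Update(2, 5)
	// after a restart, Open() tells the framework to resume from index 3
	fmt.Println(sm.Open(), sm.sum) // 2 15
}
```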
lni
@lni
@esonic in my own systems, linearizable reads are issued all the time. for those, the return-after-commit approach you described won't help much, as committed entries have to be applied before such linearizable reads can be served. that being said, I agree that what you described can be a useful extra feature for certain applications. any plan to contribute it as a PR? let me know if you are interested; I have some ideas to share.
esonic
@esonic
@lni We are planning to build a consistent message queue system on top of multi-raft. High throughput and low latency are priorities, whereas linearizable reads are not. For a message queue, it's OK to respond as soon as the WAL fsync finishes (like Kafka's log append does); apply can be done asynchronously, and the raft log is a good choice to use as the WAL. Maybe we can talk about how to contribute support for such a feature to dragonboat.
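The return-after-fsync design esonic describes can be sketched with a background applier (entirely hypothetical, not a dragonboat API): proposals are acknowledged once durably committed, and a goroutine applies them to the state machine afterwards.

```go
package main

import (
	"fmt"
	"sync"
)

// asyncApplier acknowledges proposals at commit time and applies committed
// entries to the state machine in a background goroutine.
type asyncApplier struct {
	mu      sync.Mutex
	applied []string
	done    chan struct{}
	ch      chan string
}

func newAsyncApplier() *asyncApplier {
	a := &asyncApplier{ch: make(chan string, 128), done: make(chan struct{})}
	go func() {
		for e := range a.ch {
			a.mu.Lock()
			a.applied = append(a.applied, e) // state machine apply
			a.mu.Unlock()
		}
		close(a.done)
	}()
	return a
}

// propose returns once the entry is durably queued; in a real system this is
// the point right after the WAL fsync, before apply has run.
func (a *asyncApplier) propose(entry string) {
	a.ch <- entry
}

func (a *asyncApplier) closeAndWait() {
	close(a.ch)
	<-a.done
}

func main() {
	a := newAsyncApplier()
	a.propose("m1")
	a.propose("m2")
	a.closeAndWait()
	fmt.Println(a.applied) // [m1 m2]
}
```

Note that any read served before the applier catches up may observe stale state, which is exactly why this trade-off suits a message queue but not linearizable reads.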
lni
@lni
@esonic I've sent you a private message to chat about implementing this new feature.
Nob
@nobsu
I found that the space occupied by this directory keeps increasing. For example, it shows 287M, but the actual files do not add up to that much, so where did the space go?
lni
@lni
@nobsu it is caused by space preallocated by some rocksdb files
Nob
@nobsu
How to release the preallocated space? Is there any parameter recommendation for the production environment?
lni
@lni
@nobsu I've sent you a private message to get more details
Jeremy Hahn
@jeremyhahn
does dragonboat currently perform any optimizations regarding network traffic (similar to multiraft in cockroachdb)?
excellent library, btw!
lni
@lni
@jeremyhahn could you be more specific?
Jeremy Hahn
@jeremyhahn
sure, i'm referring to network-level optimizations to avoid an explosion in network traffic with each new raft group added to the server. https://www.cockroachlabs.com/blog/scaling-raft/ https://tikv.org/deep-dive/scalability/multi-raft/ cockroachdb/cockroach#20
lni
@lni
@jeremyhahn there are some similar optimizations. e.g. heartbeat messages are batched to make them more efficient to transmit and process. when raft groups are idle, they can also be put into the so-called quiesce mode to avoid sending heartbeats.
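The quiesce behaviour can be sketched as an idle-tick counter (hypothetical numbers and types; dragonboat's real quiesce logic differs in detail):

```go
package main

import "fmt"

// group counts idle ticks for one raft group; once a threshold is reached it
// enters quiesce mode and stops producing heartbeat messages.
type group struct {
	idleTicks uint64
	quiesced  bool
}

const quiesceThreshold = 10

// tick advances time; it returns true if a heartbeat should be sent.
// Any observed activity resets the counter and wakes the group.
func (g *group) tick(sawActivity bool) bool {
	if sawActivity {
		g.idleTicks = 0
		g.quiesced = false
	} else {
		g.idleTicks++
		if g.idleTicks >= quiesceThreshold {
			g.quiesced = true
		}
	}
	return !g.quiesced
}

func main() {
	g := &group{}
	beats := 0
	for i := 0; i < 20; i++ {
		if g.tick(false) {
			beats++
		}
	}
	fmt.Println(beats, g.quiesced) // 9 true
}
```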

@blackfox1983
hi, when will v3.3 be released? I see that the CGO option can be disabled from version 3.3. thanks. @lni
lni
@lni
hi @blackfox1983, I will wait at least a couple more months to allow pebble to be better tested. please note that there are currently no known issues relating to pebble; it has been extensively tested for many months. you can definitely start playing with the master HEAD and expect v3.3 in Oct. or Nov.

@blackfox1983
Got it, thanks @lni. Looking forward to the release of version 3.3.
This is the best raft library I've ever seen
Иван Сердюк
Hi there
Иван Сердюк
Why am I getting this
$ go test ./...
go: finding module for package github.com/petermattis/pebble
go: found github.com/petermattis/pebble in github.com/petermattis/pebble v0.0.0-20200710160639-c9a380a7f499
go: github.com/lni/dragonboat/v3/internal/logdb/kv/pebble imports
github.com/petermattis/pebble: github.com/petermattis/pebble@v0.0.0-20200710160639-c9a380a7f499: parsing go.mod:
module declares its path as: github.com/cockroachdb/pebble
but was required as: github.com/petermattis/pebble
?
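The error above means the pebble module was renamed: its go.mod now declares its path as github.com/cockroachdb/pebble, while the dragonboat revision being built still imports it as github.com/petermattis/pebble. One possible workaround, assuming you cannot simply move to a dragonboat revision that already uses the new import path, is a replace directive in your own go.mod (the version shown is the one from the error message and is purely illustrative):

```
// go.mod (fragment, workaround sketch)
replace github.com/petermattis/pebble => github.com/cockroachdb/pebble v0.0.0-20200710160639-c9a380a7f499
```

With the replace in place, go resolves the old module path to the renamed module's source tree.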
Seth Yates
@sethyates
Hi. Is there any way to detect and remove a failed node? It appears the leader just keeps trying to contact the failed node and doesn't actually remove it from the cluster, and I can't find an event, channel or anything through which I could do this myself.
@lni
Seth Yates
@sethyates
Will ISystemEventListener do this for me?