These are chat archives for anacrolix/torrent

27th
Jan 2018
Matt Joiner
@anacrolix
Jan 27 2018 00:25
I don't think you should use Torrent.BytesCompleted for download rate.
Matt Joiner
@anacrolix
Jan 27 2018 00:31
Denis, try looking at your messageTypesReceived expvar. You want to see if there are any Interested and Request messages sent by peers; that will let us know if peers are actually asking for data
for example here's one of my servers: "messageTypesReceived": {"0": 1039, "1": 4002, "2": 393, "20": 15692, "3": 81, "4": 16496, "5": 4576, "6": 6921, "7": 15288990, "8": 540, "9": 4305},
2s are Interested, and 6s are Requests
deranjer
@deranjer
Jan 27 2018 00:33
Hmm, I also use BytesCompleted for my download rate... Also, it looks like you support download and upload rate limiting, correct?
Matt Joiner
@anacrolix
Jan 27 2018 00:38
yes
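(For context, a minimal sketch of wiring those limits using golang.org/x/time/rate. The field names UploadRateLimiter and DownloadRateLimiter and the NewDefaultClientConfig constructor are assumptions about this era of the library's API, not verified against the 2018 release:)

```go
package main

import (
	"golang.org/x/time/rate"

	"github.com/anacrolix/torrent"
)

func main() {
	// Assumed config shape: *rate.Limiter fields on the client config.
	cfg := torrent.NewDefaultClientConfig()
	// ~1 MiB/s down, ~256 KiB/s up; burst sized to a few chunks.
	cfg.DownloadRateLimiter = rate.NewLimiter(rate.Limit(1<<20), 1<<17)
	cfg.UploadRateLimiter = rate.NewLimiter(rate.Limit(256<<10), 1<<17)
	cl, err := torrent.NewClient(cfg)
	if err != nil {
		panic(err)
	}
	defer cl.Close()
}
```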
Denis
@elgatito
Jan 27 2018 07:42
@anacrolix, the thing is that bandwidth is fully used, 80 peers, but BytesCompleted is growing slowly and pieces complete more slowly compared to the previous version
To me it looks like the library is redownloading the same data again and again
Matt Joiner
@anacrolix
Jan 27 2018 12:07
hm were you previously using a reader?
Denis
@elgatito
Jan 27 2018 13:26
Was working fine before priorities/upnp were added
Matt Joiner
@anacrolix
Jan 27 2018 13:26
i'm doing some testing, and not seeing any problems
are you using file-based priorities or something else?
Denis
@elgatito
Jan 27 2018 13:28
It's fine that some chunks are truncated, but when we are using 6000 KB of the channel and only 200 KB are added to total bytes, then something is not good
No, not using priorities
Matt Joiner
@anacrolix
Jan 27 2018 13:28
just a normal reader?
Denis
@elgatito
Jan 27 2018 13:28
Only readers for memory storage and File.Download for default storage
Yes, nothing changed on my side, usual readers
I've personally seen this behavior a few times; after I restarted the torrent (re-added it), it was fine
Some people say they see it every time, with no pattern to it
Those 3 pastes on pastebin are from one of those users
I don't see Debug messages saying a chunk was not validated, so it looks like we are writing the same chunks
Denis
@elgatito
Jan 27 2018 13:34
Can it be we are requesting same chunks from many peers?
Matt Joiner
@anacrolix
Jan 27 2018 13:35
yes, but that's always been the case. there are strategies to minimize their overlap
all the piece inclination stuff tries to minimize any overlap between connections. it works like that because it's too expensive to track every single chunk, and you can't afford to wait for bad peers
generally the overlap increases if you are always filling the readahead buffer before you use it (very fast connection), but it shouldn't exceed about 30% with my tests
do the users with this problem have very fast connections?
Denis
@elgatito
Jan 27 2018 13:48
Don't know. Mostly about 100 Mbps
The same users had everything working fine before the last changes
There were a few users who said they have the same problem, with download speed around 200 KB/s even for memory storage, but I'm not sure that is the same issue; they reported it from early versions and say it stays the same over new releases
Matt Joiner
@anacrolix
Jan 27 2018 13:55
are the users on the memory or file storage backend?
Denis
@elgatito
Jan 27 2018 14:01
Likely both; I asked them to change the storage and everybody reported there was no change
Matt Joiner
@anacrolix
Jan 27 2018 14:03
so you think this commit? 52524925d2b81d07e51c54c54f4c3660edf6ce83
Denis
@elgatito
Jan 27 2018 14:07
Hard to say; my previous release was after 0b553b29, and that seems to have been good
Matt Joiner
@anacrolix
Jan 27 2018 14:09
so that release included 0b553b29?
because there were bugs in that which were fixed in 21108bf6ec0547a9640f19f4ce5b64a3c391a361
i just checked my prod server. it has the problem too
about 90% wastage
shit
Denis
@elgatito
Jan 27 2018 14:16
Yes, included
Matt Joiner
@anacrolix
Jan 27 2018 14:17
ok thx
i think i found the problem
Matt Joiner
@anacrolix
Jan 27 2018 14:55
check out these /debug/vars to verify: "chunksReceived": 23330945,
"chunksReceivedUnexpected": 170922,
"chunksReceivedUnwanted": 22668178,
really need a test for this
i won't do it tonight, i'm stuffed. try removing the line t.pendingPieces.Remove(piece) from Torrent.onPieceCompleted
i'm running it in production now to verify
Denis
@elgatito
Jan 27 2018 16:32
@anacrolix trying without t.pendingPieces.Remove(piece)
looks much better
is that because of the if !t.pendingPieces.Remove(piece) { return } in updatePiecePriority?
    if newPrio == PiecePriorityNone {
        if !t.pendingPieces.Remove(piece) {
            return
        }
    } else {
        if !t.pendingPieces.Set(piece, newPrio.BitmapPriority()) {
            return
        }
    }
Matt Joiner
@anacrolix
Jan 27 2018 16:34
yes, exactly
it's not triggering other connections to update their requests
which means they don't cancel or update their priorities
i missed the line somehow in the commit, and it passed all my tests
Denis
@elgatito
Jan 27 2018 16:36
btw, using DataBytesRead to calculate the real transfer rate makes sense; strange I have not seen that before
Matt Joiner
@anacrolix
Jan 27 2018 16:36
the Torrent.Stats call?
Denis
@elgatito
Jan 27 2018 16:36
yes
Matt Joiner
@anacrolix
Jan 27 2018 16:36
i added it for someone and never used it myself either
Denis
@elgatito
Jan 27 2018 16:36
didn't see that before or probably just copy-pasted someone's code
is it updated on every chunk read? even if it's not valid, or a duplicate, or whatever?