These are chat archives for anacrolix/torrent

28th Nov 2017
Denis
@elgatito
Nov 28 2017 06:42
@deranjer you just add an info hash and expect it to get the info with GotInfo?
Denis
@elgatito
Nov 28 2017 09:02
@anacrolix , trying to resolve a situation where the torrent tries to read/write and the piece is already nil - http://paste.ubuntu.com/26060352/
I do reader.Close(), then torrent.Drop(), then delete the downloaded files, then storage.Stop, which truncates all the pieces; somehow the library still wants to write/read after all that
so I'm wondering which is better: add a check to wrapper.go that the piece is not nil, or not delete pieces in the storage? Or is there another way?
Denis
@elgatito
Nov 28 2017 09:14
@anacrolix , and one more question from a Windows user, default storage - https://paste.ubuntu.com/26058870/ , see line 283 for an interesting error
19:25:06 T:140  NOTICE: [plugin.video.elementum] War.for.the.Planet.of.the.Apes.2017.BDRip.1.46Gb.Dub.MegaPeer (62386264636666303462643138663065326130376161316231653432633139376230633831636637): error writing chunk {219 {1064960 16384}}: open D:\backup\War.for.the.Planet.of.the.Apes.2017.BDRip.1.46Gb.Dub.MegaPeer\War.for.the.Planet.of.the.Apes.2017.BDRip.1.46Gb.Dub.MegaPeer.avi: The process cannot access the file because it is being used by another process.
Denis
@elgatito
Nov 28 2017 09:19
as I understand it, the storage opens one file handle for each read/write, and the filesystem can fail there? could bufio help us buffer read/write operations, or would that break the library's logic?
Matt Joiner
@anacrolix
Nov 28 2017 11:11
I'm not sure why Windows would do that. It's a PITA
Denis
@elgatito
Nov 28 2017 11:43
probably a limit on open files or something. I'm thinking of a buffered writer that collects writes and then writes them out at once; that should work for Android as well. I showed you messages where it could not open a piece because of the open-handle limit
Matt Joiner
@anacrolix
Nov 28 2017 11:52
yeah i imagine android is rate limiting or slow to write for some reason
i run my servers with 5000 file descriptors
ulimit -n
i did have an issue about the cost of opening and closing file descriptors, but i think it was incomplete
caching the descriptors is the first optimization that could be applied here
but i thought you were running with your own storage implementation?
Denis
@elgatito
Nov 28 2017 11:59
that is for memory storage, for watch-and-forget use. for normal use people download in the background the default way, and there I use the default storage
Matt Joiner
@anacrolix
Nov 28 2017 11:59
Ah okay. Try switching that to mmap backed
As a work around for now
Denis
@elgatito
Nov 28 2017 12:01
btw, the latest implementation uses memory + LRU; not ideal, but it looks good :)
Matt Joiner
@anacrolix
Nov 28 2017 12:08
hm cool
Denis
@elgatito
Nov 28 2017 13:58
@anacrolix , I'm not familiar with mmap; will it work on Android and Windows?
Matt Joiner
@anacrolix
Nov 28 2017 23:18
Yes, the library I use supports mmap on those platforms