Erlend Langseth
@Ploppz
the docs build fails: https://docs.rs/crate/rdedup-lib/3.1.0/builds/139972 ("libsodium-sys v0.1.0: process didn't exit successfully")
the library aspect of rdedup is interesting; I'm tempted to create a more high-level backup application based on rdedup
Erlend Langseth
@Ploppz
is compression on by default? I stored some images once in an rdedup store, but rdedup du reports that they take as much space in the rdedup store as the originals
the pictures are 3.5 GB. Then I stored the exact same folder again (using tar), and this time it takes 2.2 GB. That's rather a lot considering it's the exact same directory? Is tar bad then? I did it with tar -cf - /path/to/dir
matrixbot
@matrixbot
dpc Dump everything in one tar.
dpc rdedup du reports the original size of the data you stored
dpc Deduplication will happen between multiple backups of the same stuff (with minor changes).
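For illustration, a rough sketch of the workflow this implies: stream each backup through tar into the same store, and deduplication kicks in on the second and later snapshots of mostly unchanged data. The repo path, snapshot names, and the --dir flag spelling are assumptions here and may differ between rdedup versions (check rdedup --help):

    # create a repository once (prompts for a passphrase)
    rdedup --dir ~/backup-repo init

    # first snapshot: stream the directory through tar into the store
    tar -C / -cf - home/erlend/photos | rdedup --dir ~/backup-repo store photos-01

    # second snapshot of the same, mostly unchanged directory;
    # chunks already present in the store are not written again
    tar -C / -cf - home/erlend/photos | rdedup --dir ~/backup-repo store photos-02

    # du shows the original (logical) size of each snapshot,
    # not the physical space the deduplicated chunks use on disk
    rdedup --dir ~/backup-repo du photos-01
    rdedup --dir ~/backup-repo du photos-02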
Erlend Langseth
@Ploppz
Ah, I was wrong, yes: it doesn't actually take that much space on disk. Besides, I wonder whether I misread 0.22 GB as 2.2 GB
Erlend Langseth
@Ploppz
I thought "disk usage" of du referred to, well, literal disk usage
Erlend Langseth
@Ploppz

Dump everything in one tar.

@dpc You mean I should put all the folders I want to back up into one tar? Hm... It's just that I have like 300 GB of stuff I want to back up already - basically my whole life. I keep updating it with e.g. pictures from my phone. And then I was thinking about having some "recent and relevant" store that is a bit smaller and keeps getting updated. This is turning into general backup advice :P

I was reading your "original use case" text about how you sync it across several devices for redundancy. I wanted to do that, but idk if they all need everything shrug
Erlend Langseth
@Ploppz
--nesting <N> Set level of folder nesting [default: 2]
what does this mean?
matrixbot
@matrixbot
dpc Internally chunks are stored under ./firstbyte/secondbyte/restbytes path format.
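In other words, a chunk's content digest decides which nested subdirectories it lands in, and --nesting controls how many levels there are. A hypothetical bash sketch (the digest and the exact file naming are made up for illustration):

    # made-up hex digest of one chunk
    digest=a1b2c3d4e5f60718293a4b5c6d7e8f90a1b2c3d4e5f60718293a4b5c6d7e8f90

    # --nesting 2 (the default): the first two bytes become directory levels
    echo "${digest:0:2}/${digest:2:2}/${digest:4}"
    # -> a1/b2/c3d4e5f607...

    # --nesting 3 would add one more level
    echo "${digest:0:2}/${digest:2:2}/${digest:4:2}/${digest:6}"
    # -> a1/b2/c3/d4e5f607...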
Erlend Langseth
@Ploppz
oh :o ok thanks I will just assume that the default is fine then
also wondering what the URI is used for. Just an identifier?
or an actual URI to some remote rdedup repository?
matrixbot
@matrixbot
dpc Unless you're going to be doing terabytes of data, 2 levels should be OK.
dpc There is WIP support for remote stores, yes.
dpc I never completed it though.
dpc The infra is there, just needs a bit of integration code for each backend.
dpc I've been using rclone instead, and I have no time left to dedicate.
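For reference, the rclone approach dpc mentions can be as simple as mirroring the local repository directory to a configured remote; the remote name and paths below are placeholders:

    # one-way mirror of the local rdedup repo to a remote configured in rclone
    rclone sync ~/backup-repo myremote:backups/rdedup-repo

    # and back again, e.g. when restoring onto another machine
    rclone sync myremote:backups/rdedup-repo ~/backup-repo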
Erlend Langseth
@Ploppz
I see. Thanks.
Erlend Langseth
@Ploppz
damn, it takes quite a while to tar like 300 GB
should I try to split it into "old things that will never be updated / archive" and the rest? Several minutes have passed and not even 1 GB has been stored in the rdedup store yet.
Erlend Langseth
@Ploppz
hm, I think I will. Is it unexpected or not that it takes such a long time? (not saying it's rdedup's fault, just asking about the use case in general)
Ethan Smith
@ethanhs
Won't tar-ing things throw off deduplication?
matrixbot
@matrixbot
dpc tarring should be streaming ...
dpc And yeah, 300GB is quite a bit of data.
dpc Check your cpu usage and io usage.
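Two standard tools for that check, assuming sysstat is installed for iostat:

    # per-device throughput and utilization, refreshed every second
    iostat -xm 1

    # overall load and the busiest processes
    top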
Erlend Langseth
@Ploppz
cpu usage is not high, load is quite high (7-8), idk about disk usage - about 25-30 R/s and W/s
looking at ytop
@ethanhs you think so? I don't apply compression. I just do tar -C / -cf - <files>
Ethan Smith
@ethanhs
ahh then maybe not, I'm not familiar with the data layout of a tar, I think it would depend on that
matrixbot
@matrixbot
dpc 30MB/s is about right for spinning disk
Erlend Langseth
@Ploppz
well, maybe it's because my disk is about 8-9 years old... it's been running all night and it's only at 90 GB, which makes it less than 3 MB/s
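For the record, the arithmetic behind that estimate, assuming "all night" means roughly ten hours:

    # 90 GB in ~10 hours, expressed in MB/s
    echo "scale=2; 90 * 1024 / (10 * 3600)" | bc
    # -> 2.56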
matrixbot
@matrixbot
dpc That's very slow.
Erlend Langseth
@Ploppz
hm... maybe I should try writing it directly to another disk. I only have an external hard disk with that capacity though... not sure if that would be any faster
matrixbot
@matrixbot
dpc Reading and writing at the same time to the same spinning disk is usually super slow.
Erlend Langseth
@Ploppz
I see. But it's at least as slow from the HDD to an external hard disk :o oh well, just leaving my computer on
Stefan Junker
@steveeJ
hey, any idea how to run gc when the device is filled up 100% with the backups?
matrixbot
@matrixbot
dpc rdedup gc shouldn't need much extra space. Just a little bit.
dpc It only creates a handful of directories and moves data files.
dpc But it will not start deleting stuff until the whole operation is complete.
dpc But that's an interesting problem, I admit.
dpc What I would do ... is I would move out some chunks to another device, and put a symlink in their place.
dpc Or one whole dir
dpc That should create enough space without needing to move everything.
matrixbot
@matrixbot
dpc I'm quite sure rdedup will just follow symlinks.
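A sketch of that workaround; the chunk directory name, the spill path, and the --dir flag are assumptions, and it is worth trying on a copy of the repo first since it touches the internal layout:

    # move one nested chunk directory to a device with free space
    mv ~/backup-repo/chunk/a1 /mnt/external/rdedup-spill-a1

    # leave a symlink in its place so rdedup still finds those chunks
    ln -s /mnt/external/rdedup-spill-a1 ~/backup-repo/chunk/a1

    # with some space freed, run garbage collection
    rdedup --dir ~/backup-repo gc

    # then put the directory back
    rm ~/backup-repo/chunk/a1
    mv /mnt/external/rdedup-spill-a1 ~/backup-repo/chunk/a1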
Jenda Kolena
@jendakol
Hi @dpc, I wanted to try to implement a PoC of a remote backend storage with rdedup, but as I've found, this is currently not possible, as one can only pass a Url into Repo::init, and that will fail for any scheme other than file:// or b2://. In other words, it's not possible to give it your own Backend implementation.
Is there some way to work around it other than compiling my own version of rdedup? :-D
Thx.
matrixbot
@matrixbot
dpc I'm not sure what you are asking about ...