high-latency dumb protocols like sftp and s3 are probably worse than bupstash over ssh?
bupstash can pipeline a lot of file access
it's not too bad, though I need to add some tuning options for the user
w.r.t. s3 - the way it will work is it will be like a storage plugin for the repository
so if your ssh server is located close to your s3 server it will be fast
but yeah, normal bupstash doesn't do many round trips
it basically pushes data to the server in a stream and gets the 'ack' once every 20 GB
and when it downloads data the server pushes it, without any extra round trips
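As a concrete illustration of the streaming model described above, pointing bupstash at a repository over ssh looks roughly like this (the hostname and paths here are made up for the example):

```shell
# Hypothetical host and paths. bupstash streams chunks to the serve
# process on the remote side over a single ssh connection, so upload
# throughput is not bound by per-file round trips the way sftp is.
export BUPSTASH_REPOSITORY="ssh://backups@backup-host/home/backups/repo"
bupstash put ./some-directory
```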
also - regarding bupstash post 1.0
I mainly want to fine tune things and make it perform as fast as possible
and be very stable
So we will see what sort of extra features become necessary or not
I pushed the next release back a few times more than I should have - had a few things to deal with
so the project looks more dormant than it is
one difficulty with posix locks is that, to be effective, all tools need to use them too. Otherwise you end up with one tool reading/writing a file (without taking the lock) while the file is (properly) locked by another tool
yep - though bupstash provides exec-with-locks to help you
generally programs don't mess around in the internal file trees of other tools
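For example, a sketch of using exec-with-locks to run an external command while bupstash's own repository locks are held (the repository path is made up, and the exact invocation may differ slightly between versions):

```shell
# Run du against the repository while holding bupstash's locks, so no
# concurrent bupstash operation mutates the file tree mid-scan.
export BUPSTASH_REPOSITORY=/var/backups/bupstash-repo
bupstash exec-with-locks du -sh /var/backups/bupstash-repo
```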
with the query language, is it possible to get the "latest" entry (from a timestamp-tag point of view)? I am looking to extract (bupstash restore) the latest snapshot (name=backup.tar and ...)
doing external processing on "bupstash list" output isn't a very good option for me, as I am using BUPSTASH_KEY_COMMAND with an explicit password prompt, and it means asking for the password twice (once for "list", again for "restore"), or storing the key locally
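For reference, the external-processing approach being discussed looks something like the following sketch. The list output format shown here is an assumption for illustration; the pipeline just sorts lines by their quoted timestamp field and keeps the id of the newest one:

```shell
# Hypothetical sample of `bupstash list` output (the real format may differ):
list_output='id="aaa111" name="backup.tar" timestamp="2023/01/01 00:00:00"
id="bbb222" name="backup.tar" timestamp="2023/03/01 00:00:00"'

# Sort by the 6th quote-delimited field (the timestamp), take the last
# (newest) line, and strip everything but the id.
latest=$(printf '%s\n' "$list_output" | sort -t'"' -k6 | tail -n 1 |
         sed 's/^id="\([^"]*\)".*/\1/')
echo "$latest"   # prints bbb222
```

The resulting id could then be fed to a restore query, at the cost of the double key prompt mentioned above.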
@semarie sorry for the late reply, was away. I don't think there is another way at the moment.
I considered adding some post processing to the queries
like limit 1
It's definitely something I will consider adding
@semarie another thing is to use something like gpg agent to retain keys in memory for some duration
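A sketch of that gpg-agent approach, assuming the bupstash key has been encrypted with gpg beforehand (file paths here are hypothetical):

```shell
# gpg-agent caches the passphrase after the first decryption, so
# repeated bupstash invocations only prompt once per cache lifetime.
export BUPSTASH_KEY_COMMAND="gpg --quiet --decrypt /path/to/backup.key.gpg"
bupstash list name=backup.tar          # first call prompts for the passphrase
bupstash restore --into ./restored name=backup.tar   # agent answers silently
```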