Jim Klimov
@jimklimov
UPDATE: Hopefully fixed as part of my PR https://github.com/oetiker/znapzend/pull/492/ :)
Jim Klimov
@jimklimov
Wasn't in this channel for a while... I saw discussion above about holds or bookmarks on datasets; my take on this was that we could also portably use dataset properties (like marking certain dataset as being the last known common point of this source and that destination) and so avoid autoremoval on either side
Jim Klimov
@jimklimov

Docker image tests fail in another PR:

ERROR: unsatisfiable constraints:
perl-5.30.3-r0:
breaks: world[perl=5.30.1-r0]
satisfies: autoconf-2.69-r2[perl]
automake-1.16.1-r0[perl]

maybe a new Alpine release is being rolled out and this fetch happened in between?..
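If the failure really is a version-pinned package meeting a mid-rollout repository, the usual workaround is to install unpinned packages. A minimal sketch, assuming an Alpine-based image (package names taken from the error above, base image tag is an assumption):

```dockerfile
FROM alpine:3.12
# Unpinned install tolerates Alpine bumping perl 5.30.1-r0 -> 5.30.3-r0
# between image build and package fetch:
RUN apk add --no-cache perl autoconf automake
```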
Jim Klimov
@jimklimov

@Rivqua > I have a question I can't figure out the answer to: I've set up znapzend, and it works. I am wondering though, how do I configure the features, like --features=skipIntermediates,compressed ?

This is usually done on the command line, either for the standalone CLI tool, like znapzend --runonce=pool/export --features=..., or similarly in the service definition. In the multi-distro world, the specifics of passing custom CLI arguments to the service (systemd? SMF? init script?..) differ, but the idea remains the same.
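A hedged sketch of both routes for systemd-based distros: the unit name, install prefix, and dataset name below are assumptions, adjust for your packaging. The drop-in is written to a temp directory for illustration; on a real host it would live under /etc/systemd/system/znapzend.service.d/ (followed by systemctl daemon-reload).

```shell
# One-off run with features enabled (dataset name is an example):
#   znapzend --runonce=pool/export --features=skipIntermediates,compressed

# For the daemon, a systemd drop-in override is one common route:
DROPIN_DIR=$(mktemp -d)
cat > "$DROPIN_DIR/features.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/opt/znapzend/bin/znapzend --daemonize --features=skipIntermediates,compressed
EOF
```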

Jim Klimov
@jimklimov
@oetiker : I'm getting lost in trying to cheat around Test::More and family (for where daemonized mode is tested)
whatever I try, there are some modes of invocation that succeed and some that fail, no silver bullet yet :(
Tobias Oetiker
@oetiker
keep fighting :)
David Česal
@Dacesilian
Hello, when I type "znapzendzetup list", it says only "zLog must be specified at creation time!". How can I fix this, please?
Tobias Oetiker
@oetiker
this is fixed in master
David Česal
@Dacesilian

@oetiker Can you please write a more detailed tutorial on how to build it?

git clone https://github.com/oetiker/znapzend.git znapzend
cd znapzend/
autoconf
aclocal
./configure --prefix=/opt/znapzend

Can't open perl script "/root/znapzend/thirdparty/carton/bin/carton": No such file or directory

David Česal
@Dacesilian
In Debian
apt install carton
make -j4
Tobias Oetiker
@oetiker
hmmm, run ./bootstrap
then configure and make
David Česal
@Dacesilian

Okay, this is working fine, thanks. The important part is to use single-threaded make (not make -j4) to let carton download all dependencies.

git clone https://github.com/oetiker/znapzend.git znapzend
cd znapzend/
./bootstrap.sh
./configure --prefix=/opt/znapzend-master
make
make install
rm /usr/local/bin/znapzend*
for x in /opt/znapzend-master/bin/*; do ln -s $x /usr/local/bin; done

@oetiker But the error is the same, what can I do?
/opt/znapzend-master/bin# ./znapzendzetup list
zLog must be specified at creation time!
Tobias Oetiker
@oetiker
are you calling list without having anything set up? Then this is just a bad error message ...
David Česal
@Dacesilian
@oetiker No, I have the setup done and backup is working. But from some point in time, znapzendzetup list stopped working. On my other server, it's working fine. :\
@oetiker Maybe it's connected with the fact that I've excluded some child dataset. Hmm, znapzend should be able to handle this situation, I guess.
David Česal
@Dacesilian
I'm not sure if the excluded dataset is on this server. Anyway, it works when I specify the pool name: znapzendzetup list --recursive nvme
Tobias Oetiker
@oetiker
@Dacesilian can you try this patch:
diff --git a/lib/ZnapZend/Config.pm b/lib/ZnapZend/Config.pm
index 6bc4249..c616729 100644
--- a/lib/ZnapZend/Config.pm
+++ b/lib/ZnapZend/Config.pm
@@ -33,7 +33,8 @@ has zfs  => sub {
         rootExec => $self->rootExec,
         debug => $self->debug,
         lowmemRecurse => $self->lowmemRecurse,
-        zfsGetType => $self->zfsGetType
+        zfsGetType => $self->zfsGetType,
+        zLog => $self->zLog
     );
 };
 has time => sub { ZnapZend::Time->new(timeWarp=>shift->timeWarp); };
Jim Klimov
@jimklimov
I think the fix is in recent commits on one of those two PRs in review, as well. A side effect of moving away from raw warn(), noticed late :(
Needed to pass a trivial new Mojo::Log object so these tools spam formatted logs to stderr
David Česal
@Dacesilian
Thank you, I've compiled current master branch and it is working fine!
rm -r znapzend/
git clone https://github.com/oetiker/znapzend.git znapzend && cd znapzend/

apt-get install perl unzip autoconf carton
./bootstrap.sh
./configure --prefix=/opt/znapzend-master
make
make install
rm /usr/local/bin/znapzend*
for x in /opt/znapzend-master/bin/*; do ln -s $x /usr/local/bin; done
Jim Klimov
@jimklimov
Note: I'm just updating the README in https://github.com/oetiker/znapzend/pull/512/ so thanks for the updated apt-get bit ;)
As for the symlinks, ln -f takes care of removing older ones if present, and GNU ln -r can make relative symlinks, which are more meaningful when you juggle many alternate roots and do not want to reference the currently running OS (most Linux systems/scripts do not know the difference; Solaris had that for decades, e.g. a file server hosting roots for diskless NFS workstations, so it is sort of a built-in habit).
Unfortunately a non-GNU ln, such as on Solaris, does not have -r, so explicit ../ prefixes have to be written out by hand.
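A small runnable sketch of those two points, using throwaway directories in place of /opt and /usr/local/bin (all paths are illustrative):

```shell
# Stand-in tree for /opt/znapzend-master/bin and /usr/local/bin:
ROOT=$(mktemp -d)
mkdir -p "$ROOT/opt/znapzend-master/bin" "$ROOT/usr/local/bin"
touch "$ROOT/opt/znapzend-master/bin/znapzend" \
      "$ROOT/opt/znapzend-master/bin/znapzendzetup"

# -f replaces stale links from a previous install instead of failing,
# so re-running the loop is safe and no prior rm is needed:
for x in "$ROOT/opt/znapzend-master/bin/"*; do
    ln -sf "$x" "$ROOT/usr/local/bin/"
done

# GNU ln can additionally compute a relative target (-r); non-GNU ln
# (e.g. Solaris) lacks -r, so there the ../ prefix is written by hand:
#   ln -srf "$ROOT/opt/znapzend-master/bin/znapzend" "$ROOT/usr/local/bin/"
```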
Jim Klimov
@jimklimov
Thanks for confirming the rest works well in the isolated system :)
Jim Klimov
@jimklimov
@oetiker : is there something remaining to be resolved urgently in https://github.com/oetiker/znapzend/pull/497/ or can you merge it? :)
Jim Klimov
@jimklimov
and I guess you can close oetiker/znapzend#384 as solved by 506 and later
Tobias Oetiker
@oetiker
Jim Klimov
@jimklimov
Did not before ;)
Sounds good :)
Jim Klimov
@jimklimov
Updated the two remaining hot PRs
Jim Klimov
@jimklimov
Got a strange ZFS lockup on the laptop: my OI VM holds the pool being backed up, and the USB disk for backups is attached to another, a Linux VM with an iSCSI target. That VM eventually claimed some device connection errors and froze; the pool over iSCSI timed out, but curiously the pool and its one vdev are both ONLINE while "all operations to the pool are suspended". zpool clear does not help (nothing to clear, all online); replugging the disk, rebooting the target and restarting the initiator also did not; attempts to reimport the pool and a zpool export -f froze last time I watched... will see it again in the evening. Any ideas so far? :)
Jim Klimov
@jimklimov
In the end, over the day the zpool export -f did not succeed, despite low-level I/O to rdsk (partition printout etc.) being quite snappy. Even a soft reboot did not go well; I had to ungracefully power off the OI VM after a considerable wait.
Jim Klimov
@jimklimov
after the reboot, syncs continued well, no new errors reported...
trijenhout
@trijenhout
is there a way to make znapzend do one dataset after the other instead of (for me, at this moment) 3 at a time? (3x ssh processes + 3 zfs processes, and a load above 7 on a Raspberry Pi 4)
trijenhout
@trijenhout
a Raspberry Pi 4 as the client/receiver....
Jim Klimov
@jimklimov
Do you have these 3 as separately configured backupSet schedules?
You can probably loop calling it in --runonce mode, assuming you don't want to change the Perl code. I don't think there's much otherwise out of the box: all schedule handlers strive to snapshot their datasets ASAP at the timed trigger, then proceed to replicate and clean up independently of each other.
Jim Klimov
@jimklimov
Often much time is spent waiting (state calculations, kernel locks, ...) instead of transferring data, so on bigger computers it is much faster in overall wallclock time to parallelize the sends.
trijenhout
@trijenhout
@jimklimov I guess I do, with 3 different datasets set up through the znapzendzetup tool. No, I don't like diving into Perl ;). I also make a backup to an i7 machine, no troubles over there.
Jim Klimov
@jimklimov
I meant that it is possible to set up one dataset and then recursively apply its retention schedule to its children (aka inherited config); such datasets under one root are currently copied sequentially, and I wondered about adding optional parallelism there ;).
Three independent setups should be processed by daemonized mode in parallel and I don't think there are now any toggles about that.
They may be processed sequentially or not in --runonce mode, however.
Certainly they would be for znapzend --runonce=pool/data/set, which requests only one dataset with a schedule.
So in the worst case you can certainly shell-script a loop over backup schedules (found by znapzend list -r pool | grep 'key chars from heading' | awk '{print $4}', iirc), running them once, one by one.
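A hedged sketch of that per-schedule loop. A stub function stands in for the real znapzend binary so the loop shape is self-contained here; on a real host you would drop the function and let the loop hit the actual tool. The dataset names are hypothetical; in practice they would come from parsing the schedule listing as described above.

```shell
# Record each invocation so the sequential order is visible:
LOG=$(mktemp)
# Stub only -- remove this function to call the real binary:
znapzend() { echo "znapzend $*" >> "$LOG"; }

# Hypothetical dataset names; replace with your configured schedules:
for ds in pool/data/set1 pool/data/set2 pool/data/set3; do
    znapzend --runonce="$ds"    # strictly one schedule at a time
done
```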