David Česal
@Dacesilian
@oetiker But the error is the same, what can I do?
/opt/znapzend-master/bin# ./znapzendzetup list
zLog must be specified at creation time!
Tobias Oetiker
@oetiker
are you calling list without having anything set up? then this is just a bad error message ...
David Česal
@Dacesilian
@oetiker No, I have the setup done and backups are working. But at some point in time, znapzendzetup list stopped working. On my other server it's working fine. :\
@oetiker Maybe it's connected with the fact that I've excluded some child datasets. Hmm, znapzend should be able to handle this situation, I guess.
David Česal
@Dacesilian
I'm not sure if the excluded dataset is on this server. Anyway, it works when I specify the pool name: znapzendzetup list --recursive nvme
Tobias Oetiker
@oetiker
@Dacesilian can you try this patch:
diff --git a/lib/ZnapZend/Config.pm b/lib/ZnapZend/Config.pm
index 6bc4249..c616729 100644
--- a/lib/ZnapZend/Config.pm
+++ b/lib/ZnapZend/Config.pm
@@ -33,7 +33,8 @@ has zfs  => sub {
         rootExec => $self->rootExec,
         debug => $self->debug,
         lowmemRecurse => $self->lowmemRecurse,
-        zfsGetType => $self->zfsGetType
+        zfsGetType => $self->zfsGetType,
+        zLog => $self->zLog
     );
 };
 has time => sub { ZnapZend::Time->new(timeWarp=>shift->timeWarp); };
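(A minimal way to try the patch, assuming a local git checkout of znapzend; save the diff above to a file such as zlog.patch first:)
cd znapzend/
# apply the patch to the working tree
git apply zlog.patch
# or, without git:
# patch -p1 < zlog.patch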
Jim Klimov
@jimklimov
I think the fix is in recent commits for one of those two PRs in review, as well. It's a side effect of moving away from raw warn() that was noticed late :(
We needed to pass a trivial new Mojo::Log object so these tools spam formatted logs to stderr.
David Česal
@Dacesilian
Thank you, I've compiled the current master branch and it is working fine!
rm -r znapzend/
git clone https://github.com/oetiker/znapzend.git znapzend && cd znapzend/

apt-get install perl unzip autoconf carton
./bootstrap.sh
./configure --prefix=/opt/znapzend-master
make
make install
rm /usr/local/bin/znapzend*
for x in /opt/znapzend-master/bin/*; do ln -s $x /usr/local/bin; done
Jim Klimov
@jimklimov
Note: I'm just updating the README in https://github.com/oetiker/znapzend/pull/512/ so thanks for the updated apt-get bit ;)
As for the symlinks, ln -f takes care of removing older ones if present, and GNU ln -r can make relative symlinks, which are more meaningful when you juggle many alternate roots and do not want to reference the currently running OS (most Linux systems/scripts do not know the difference... Solaris had that for decades, e.g. a file server hosting roots for diskless NFS workstations, so it is sort of a built-in habit).
Unfortunately a non-GNU ln, such as the one in Solaris, does not have -r, so explicit ../ prefixes have to be prepended.
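(A minimal sketch of that symlink loop with those flags, assuming GNU coreutils ln and the /opt/znapzend-master prefix used above:)
# overwrite stale links (-f) and make them relative (-r, GNU ln only)
for x in /opt/znapzend-master/bin/*; do
    ln -sf -r "$x" /usr/local/bin/
done
# without GNU ln -r, prepend the explicit relative path yourself, e.g.
# ln -sf ../../../opt/znapzend-master/bin/znapzend /usr/local/bin/znapzend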
Jim Klimov
@jimklimov
Thanks for confirming the rest works well in the isolated system :)
Jim Klimov
@jimklimov
@oetiker : is there something remaining to be resolved urgently in https://github.com/oetiker/znapzend/pull/497/ or can you merge it? :)
Jim Klimov
@jimklimov
and I guess you can close oetiker/znapzend#384 as solved by 506 and later
Tobias Oetiker
@oetiker
Jim Klimov
@jimklimov
Did not before ;)
Sounds good :)
Jim Klimov
@jimklimov
Updated the two remaining hot PRs
Jim Klimov
@jimklimov
Got a strange ZFS lockup on the laptop: my OI with the pool being backed up is one VM, and the USB disk for backups is attached to another, a Linux VM with an iSCSI target. That VM eventually claimed some device connection errors and froze; the pool over iSCSI timed out, but curiously the pool and its one vdev are both ONLINE while "all operations to the pool are suspended". zpool clear does not help (nothing to clear, all online); replugging the disk, rebooting the target, and restarting the initiator also did not; attempts to reimport the pool and a zpool export -f froze last time I saw... will see it again in the evening. Any ideas so far? :)
Jim Klimov
@jimklimov
In the end, over the day the zpool export -f did not succeed, despite low-level I/O to rdsk (partition printout etc.) being quite snappy. Even a soft reboot did not go well; I had to ungracefully power off the OI VM after a considerable wait.
Jim Klimov
@jimklimov
after the reboot syncs continued well, no new errors reported...
trijenhout
@trijenhout
is there a way to make znapzend do 1 dataset after the other instead of (for me, at this moment) 3 at a time (3x ssh processes + 3 zfs processes and a load above 7 on a Raspberry Pi 4)?
trijenhout
@trijenhout
Raspberry Pi 4 as a client/receiver....
Jim Klimov
@jimklimov
Do you have these 3 as separately configured backupSet schedules?
You can probably loop calling it with the --runonce mode, assuming you don't want to change the perl code. I don't think there's much otherwise out of the box - all schedule handlers strive to snapshot their datasets ASAP at the timed trigger, and proceed to replicate and clean up independently of each other.
Jim Klimov
@jimklimov
Often much time is spent waiting (state calculations, kernel locks, ...) instead of transferring data, so on bigger computers it is much faster in overall wall-clock time to parallelize the sends.
trijenhout
@trijenhout
@jimklimov I guess I do, with 3 different datasets set up through the znapzendzetup tool. No, I don't want to dive into Perl ;). I also make a backup to an i7 machine, no troubles over there.
Jim Klimov
@jimklimov
I meant, it is possible to set up one dataset and then recursively apply this retention schedule to its children (aka inherited config) - such datasets under one root are currently copied sequentially; I wondered about adding optional parallelism there ;).
Three independent setups should be processed by the daemonized mode in parallel, and I don't think there are currently any toggles about that.
They may or may not be processed sequentially in --runonce mode, however.
Certainly it would be sequential for znapzend --runonce=pool/data/set, which only requests one dataset with a schedule.
So in the worst case you can certainly shell-script a loop over the backup schedules (found by something like znapzendzetup list --recursive pool | grep 'key chars from heading' | awk '{print $4}', iirc) and run them once, one by one.
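(A rough sketch of such a loop; the grep pattern and awk column are guesses that depend on your version's znapzendzetup output format:)
# run each configured backup set once, one at a time, instead of in parallel
for ds in $(znapzendzetup list --recursive pool | grep 'backup plan' | awk '{print $4}'); do
    znapzend --runonce="$ds"
done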
Jim Klimov
@jimklimov
But first check whether something like --runonce -r pool exists in your version and whether it works the way you like ;)
wildy
@wildy
Hi there. I have an encrypted dataset on my laptop and an encrypted dataset on my NAS. Can znapzend zend znapzhotz between laptop and the NAS? I didn't succeed in configuring it this way.
(also, znapzendzetup confuses my brain a lot XD)
wildy
@wildy
oops, sorry
wildy
@wildy
well, it seems like the place is mostly dead?
Jim Klimov
@jimklimov
I'd say, like on IRC - nobody online has a good idea to respond with...
Tobias Oetiker
@oetiker
yep :)
wildy
@wildy
@oetiker is there a way to send an encrypted snapshot cleanly to another (encrypted) device?
I just want to back up my laptop to my NAS but I couldn't get it to work if the source is encrypted
Tobias Oetiker
@oetiker
there is a new option in master, yes
Linus Heckemann
@lheckemann
Hi folks. I'm trying to set up backups from "sosiego" to "thirtythree" using znapzend, where sosiego has its own SSH user on thirtythree. I'm using --autoCreation, but znapzend doesn't seem to cleanly handle the case where the filesystem can't be mounted after creation: I get
[…]
cannot open 'thirtythree/backups/sosiego/mail': dataset does not exist
filesystem successfully created, but it may only be mounted by root
cannot open 'thirtythree/backups/sosiego/mail': dataset does not exist
ERROR: cannot create dataSet thirtythree/backups/sosiego/mail
ah, it seems the filesystems are created as expected, and by just trying over and over again, gradually all of them get created and everything starts working. Still not optimal ^^
Linus Heckemann
@lheckemann
Is this something znapzend isn't supposed to be able to handle?
Tobias Oetiker
@oetiker
are you using the version from the master branch? there have been some changes of late to make this better I think
Linus Heckemann
@lheckemann
no, an older version. I'll give it a try soon, thanks :)