Jim Klimov
@jimklimov
and I guess you can close oetiker/znapzend#384 as solved by 506 and later
Tobias Oetiker
@oetiker
Jim Klimov
@jimklimov
Did not before ;)
Sounds good :)
Jim Klimov
@jimklimov
Updated the two remaining hot PRs
Jim Klimov
@jimklimov
Got a strange zfs lockup on the laptop: my OI with the pool being backed up is one VM, the USB disk for backups is attached to another, a Linux VM with an iSCSI target. That VM eventually claimed some device connection errors and froze, the pool over iSCSI timed out, but curiously: the pool and its one vdev are both ONLINE but "all operations to the pool are suspended". zpool clear does not help (nothing to clear, all online); replug of the disk, reboot of the target and restart of the initiator also did not; attempts to reimport the pool and a zpool export -f froze last time I saw... will see it again in the evening. Any ideas so far? :)
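A few checks that can be worth running against a suspended pool, sketched below. The pool name "tank" is a placeholder; note that the pool's failmode property (default "wait") is what makes ZFS suspend all I/O until the device returns or the admin intervenes, rather than faulting the vdev.

```shell
#!/bin/sh
# Sketch: inspecting a pool whose I/O is suspended ("tank" is a placeholder).
POOL=${POOL:-tank}
if command -v zpool >/dev/null 2>&1; then
    zpool status -x "$POOL"     # a suspended pool reports that all I/O is blocked
    zpool get failmode "$POOL"  # default failmode=wait suspends I/O on device loss
    zpool clear "$POOL"         # retries the I/O once the backing device is back
fi
```

With failmode=wait, zpool clear after the device path recovers is usually what resumes I/O; if the device never comes back cleanly, a reboot may be the only way out, as seen here.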
Jim Klimov
@jimklimov
In the end, over the day the zpool export -f did not succeed, despite low-level I/O to rdsk (partition printout etc.) being quite snappy. Even a soft reboot did not go well; I had to ungracefully power off the OI VM after a considerable wait.
Jim Klimov
@jimklimov
after the reboot syncs continued well, no new errors reported...
trijenhout
@trijenhout
is there a way to make znapzend do 1 dataset after the other instead of (for me at this moment) 3 at a time (3x ssh processes + 3 zfs processes, and a load above 7 on a Raspberry Pi 4)?
trijenhout
@trijenhout
raspberrypi4 as a client/receiver....
Jim Klimov
@jimklimov
Do you have these 3 as separately configured backupSet schedules?
You can probably loop calling it with the --runonce mode, assuming you don't want to change the perl code. I don't think there's much otherwise out of the box - all schedule handlers strive to snapshot their datasets ASAP at the timed trigger, and proceed to replicate and clean up independently of each other.
Jim Klimov
@jimklimov
Often much time is spent waiting (state calculations, kernel locks, ...) instead of transferring data, so on bigger computers it is much faster in overall wallclock time to parallelize the sends.
trijenhout
@trijenhout
@jimklimov I guess I do, by 3 different datasets, set up through the znapzendzetup tool. No, I don't like to dive into Perl ;). I also make a backup to an i7 machine, no troubles over there.
Jim Klimov
@jimklimov
I meant, it is possible to set up one dataset and then recursively apply its retention schedule to its children (aka inherited config) - such datasets under one root are currently copied sequentially; I wondered about adding optional parallelism there ;).
Three independent setups should be processed by daemonized mode in parallel and I don't think there are now any toggles about that.
They may be processed sequentially or not in --runonce mode however
Certainly would be for znapzend --runonce=pool/data/set only requesting one data set with a schedule
So in the worst case you can certainly shellscript a loop over backup schedules (found by znapzend list -r pool | grep 'key chars from heading' | awk '{print $4}' iirc), running them once, one by one.
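The loop suggested above might look roughly like this. It is a sketch, not a tested recipe: the exact listing command and awk column vary by znapzend version (verify against your own znapzendzetup list output), and the dataset names here are samples. The DRY_RUN guard only prints the commands it would run.

```shell
#!/bin/sh
# Sketch: run configured backup sets one at a time instead of in parallel.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0 on a
# real host. The awk column below is an assumption -- check your version.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

if [ "$DRY_RUN" = "1" ]; then
    sets="tank/data tank/home"    # sample dataset names for the dry run
else
    sets=$(znapzendzetup list | awk '/backup plan/ {print $4}')
fi

for ds in $sets; do
    run znapzend --runonce="$ds"   # one scheduled dataset per pass
done
```

Running the sets sequentially this way trades wallclock time for a flatter load curve, which is likely the right trade-off on a Raspberry Pi.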
Jim Klimov
@jimklimov
But first check if something like --runonce -r pool exists in your version and if it works like you like ;)
wildy
@wildy
Hi there. I have an encrypted dataset on my laptop and an encrypted dataset on my NAS. Can znapzend zend znapzhotz between laptop and the NAS? I didn't succeed in configuring it this way.
(also, znapzendzetup confuses my brain a lot XD)
wildy
@wildy
oops, sorry
wildy
@wildy
well, it seems like the place is mostly dead?
Jim Klimov
@jimklimov
I'd say, like on IRC - nobody online has a good idea to respond with...
Tobias Oetiker
@oetiker
yep :)
wildy
@wildy
@oetiker is there a way to send an encrypted snapshot cleanly to another (encrypted) device?
I just want to backup my laptop to my NAS but I couldn't get it to work if the source is encrypted
Tobias Oetiker
@oetiker
there is a new option in master yes
Linus Heckemann
@lheckemann
Hi folks. I'm trying to set up backups from "sosiego" to "thirtythree" using znapzend, where sosiego has its own SSH user on thirtythree. I'm using --autoCreation, but znapzend doesn't seem to cleanly handle the case where the filesystem can't be mounted after creation: I get
[…]
cannot open 'thirtythree/backups/sosiego/mail': dataset does not exist
filesystem successfully created, but it may only be mounted by root
cannot open 'thirtythree/backups/sosiego/mail': dataset does not exist
ERROR: cannot create dataSet thirtythree/backups/sosiego/mail
ah, it seems the filesystems are created as expected, and by just retrying over and over, gradually all of them get created and everything starts working. Still not optimal ^^
Linus Heckemann
@lheckemann
Is this something znapzend isn't supposed to be able to handle?
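The "may only be mounted by root" message suggests the non-root SSH user lacks mount rights on the destination. One possible workaround, sketched under assumptions (the user name "backup" is invented; the dataset path is taken from the log above): pre-create the target tree unmounted as root and delegate receive rights, so autoCreation never has to mount anything.

```shell
#!/bin/sh
# Workaround sketch, run as root on the receiving host.
# "backup" (the SSH user) is an assumption; adjust to your setup.
USER_NAME=backup
TARGET=thirtythree/backups/sosiego
if command -v zfs >/dev/null 2>&1; then
    # Create the whole path unmounted so no mount permission is ever needed:
    zfs create -p -o canmount=off "$TARGET/mail"
    # Delegate the permissions zfs recv needs to the non-root user:
    zfs allow -u "$USER_NAME" create,mount,receive,destroy,hold "$TARGET"
fi
```

Whether this is still necessary depends on the version; as noted below, master has changes in this area.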
Tobias Oetiker
@oetiker
are you using the version from the master branch? there have been some changes of late to make this better I think
Linus Heckemann
@lheckemann
no, an older version. I'll give it a try soon, thanks :)
wforumw
@wforumw
Hi, We are using znapzend 0.20.0. Is it possible to exclude a dataset from znapzend if we enabled recursive configuration? Txs
Jim Klimov
@jimklimov
@wforumw : sorry for the lag, but znapzend (at least master, though several past releases likely too) should support a "disabled" option on child datasets under a schedule. It should avoid certain recursive operations when this situation is detected, too.
Sorry, zfs set org.znapzend:enabled=off pool/backeduptree/childnotbackedup should be it
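Spelled out as commands (the dataset names are the placeholders from the message above), setting and then verifying the exclusion might look like:

```shell
#!/bin/sh
# Sketch: exclude one child dataset from a recursive znapzend plan.
# "pool/backeduptree/childnotbackedup" is a placeholder path.
PROP=org.znapzend:enabled
if command -v zfs >/dev/null 2>&1; then
    zfs set "$PROP"=off pool/backeduptree/childnotbackedup
    # Verify: the excluded child should show "off", siblings inherit "on":
    zfs get -r "$PROP" pool/backeduptree
fi
```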
Zandr Milewski
@zandr
On macOS 11 (Big Sur) running OpenZFS 2.0.0-1, znapzend (installed from Homebrew) segfaults immediately. How would I go about troubleshooting this?
Homebrew appears to install 0.20.0
Zandr Milewski
@zandr
Building from master seems to have done the trick. :)
Tobias Oetiker
@oetiker
uff ... glad to hear
jdrch
@jdrch
Does anyone have an example of a service manifest XML file for OpenIndiana/Illumos?
Paolo Marcheschi
@marcheschi

> Does anyone have an example of a service manifest XML file for OpenIndiana/Illumos?

You can use Manifold to quickly create an SMF manifest: https://code.google.com/archive/p/manifold/
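For reference, a hand-written SMF manifest for znapzend might look roughly like the sketch below. The FMRI name, install path, and timeouts are all assumptions, not taken from any shipped manifest; validate with svccfg validate and adjust before importing.

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!-- Minimal sketch; FMRI name and znapzend path are assumptions. -->
<service_bundle type="manifest" name="znapzend">
  <service name="system/filesystem/znapzend" type="service" version="1">
    <create_default_instance enabled="false"/>
    <single_instance/>
    <!-- Wait for local filesystems before starting. -->
    <dependency name="filesystem" grouping="require_all" restart_on="error" type="service">
      <service_fmri value="svc:/system/filesystem/local"/>
    </dependency>
    <exec_method type="method" name="start"
        exec="/opt/znapzend/bin/znapzend --daemonize"
        timeout_seconds="60"/>
    <exec_method type="method" name="stop" exec=":kill" timeout_seconds="60"/>
    <stability value="Unstable"/>
  </service>
</service_bundle>
```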

Chris Barnes
@clbarnes
having some trouble getting systemd to start the znapzend daemon - on my primary server it's fine; on the backup it finds no backup config, kills itself, restarts, and repeats until systemd's restart counter maxes out
i can't find any informative logs for znapzend - where might they be?
Chris Barnes
@clbarnes
having no config on the backup is intentional (I think), as that should be controlled by the strategy set out by the primary?
or does it only need to run on the primary?
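On the logging question: znapzend normally only needs to run on the sending side, and the daemon's output typically lands in syslog or the journal. A troubleshooting sketch (the unit name "znapzend.service" is an assumption; check with systemctl list-units):

```shell
#!/bin/sh
# Sketch: finding znapzend's logs and reproducing a failure in the foreground.
UNIT=${UNIT:-znapzend.service}
# 1) Daemon output usually goes to the journal / syslog:
if command -v journalctl >/dev/null 2>&1; then
    journalctl -u "$UNIT" -b --no-pager | tail -n 50
fi
# 2) One verbose foreground pass with no side effects often shows the reason
#    (e.g. "no backup config found") directly:
if command -v znapzend >/dev/null 2>&1; then
    znapzend --debug --noaction --runonce
fi
```

znapzend also accepts a --logto option to direct output to a file or syslog facility, which can make the restart-loop case easier to diagnose.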
Adrian Gschwend
@ktk

question: I ran into a situation where I can't create new snapshots - not enough space on the volume. I can fix that, obviously, but what surprises me is that znapzend logs it and then still seems to continue. I see, among others:

# zfs snapshot -r zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226
cannot create snapshot 'zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226': out of space
no snapshots were created
# zfs list -H -o name -t snapshot zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226
cannot open 'zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226': dataset does not exist
[2021-05-18 10:22:27.66667] [41655] [warn] taking snapshot on zones/74e519c8-0010-41fb-846b-9301f5587797-disk0 failed: ERROR: cannot create snapshot zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226 at /opt/tools/lib/ZnapZend/ZFS.pm line 272.

so right now it still does a zfs recv on the remote system, but I guess that will fail at some point

I noticed because it stopped cleaning up the old snapshots, so now I'm running out of space on the destination
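Once some space is freed, znapzend's own cleanup should catch up on the next run; in the meantime, inspecting and pruning by hand might look like the sketch below (standard zfs tooling assumed; "pool/dataset" and the snapshot names are placeholders, not the real zones/... path above).

```shell
#!/bin/sh
# Sketch: find what is holding space and preview a manual prune.
DS=${DS:-pool/dataset}
if command -v zfs >/dev/null 2>&1; then
    # Oldest first, with the space each snapshot holds exclusively:
    zfs list -t snapshot -o name,used,creation -s creation "$DS"
    # Dry-run (-n) destroy of a snapshot range to preview reclaimed space:
    zfs destroy -nv "$DS@2021-04-01-000000%2021-05-01-000000"
fi
```

The % range syntax destroys every snapshot between the two named ones; keeping -n until the output looks right avoids deleting a snapshot the replication chain still needs.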