Jim Klimov
@jimklimov
But first check if something like --runonce -r pool exists in your version and if it works like you like ;)
wildy
@wildy
Hi there. I have an encrypted dataset on my laptop and an encrypted dataset on my NAS. Can znapzend zend znapzhotz between laptop and the NAS? I didn't succeed in configuring it this way.
(also, znapzendzetup confuses my brain a lot XD)
wildy
@wildy
oops, sorry
wildy
@wildy
well, it seems like the place is mostly dead?
Jim Klimov
@jimklimov
I'd say, like on IRC - nobody online has a good idea to respond with...
Tobias Oetiker
@oetiker
yep :)
wildy
@wildy
@oetiker is there a way to send an encrypted snapshot cleanly to another (encrypted) device?
I just want to backup my laptop to my NAS but I couldn't get it to work if the source is encrypted
Tobias Oetiker
@oetiker
there is a new option in master yes
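For later readers: the option referred to is likely the `sendRaw` feature flag (an assumption — check the README of the build you install), which makes znapzend pass the raw flag to `zfs send`, so the stream stays encrypted in transit and on the destination without the keys ever being loaded there:

```shell
# Assumption: your znapzend build (master at the time of this chat)
# supports the 'sendRaw' feature; with it, znapzend sends encrypted
# datasets raw instead of decrypting them on the source first.
znapzend --daemonize --features=sendRaw
```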
Linus Heckemann
@lheckemann
Hi folks. I'm trying to set up backups from "sosiego" to "thirtythree" using znapzend, where sosiego has its own SSH user on thirtythree. I'm using --autoCreation, but znapzend doesn't seem to cleanly handle the case where the filesystem can't be mounted after creation: I get
[…]
cannot open 'thirtythree/backups/sosiego/mail': dataset does not exist
filesystem successfully created, but it may only be mounted by root
cannot open 'thirtythree/backups/sosiego/mail': dataset does not exist
ERROR: cannot create dataSet thirtythree/backups/sosiego/mail
ah, it seems the filesystems are created as expected, and by just trying over and over again, gradually all of them are created and everything starts working. Still not optimal ^^
Linus Heckemann
@lheckemann
Is this something znapzend isn't supposed to be able to handle?
Tobias Oetiker
@oetiker
are you using the version from the master branch? there have been some changes of late to make this better I think
Linus Heckemann
@lheckemann
no, an older version. I'll give it a try soon, thanks :)
wforumw
@wforumw
Hi, we are using znapzend 0.20.0. Is it possible to exclude a dataset from znapzend if we have enabled recursive configuration? Thanks
Jim Klimov
@jimklimov
@wforumw : sorry for the lag, but znapzend (at least master, though several past releases likely too) should support a "disabled" option on child datasets under a schedule. It should also avoid certain recursive operations when this situation is detected.
Sorry, zfs set org.znapzend:enabled=off pool/backeduptree/childnotbackedup should be it
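To make that concrete, a minimal sketch (the dataset names are made up; the property name is the one given above):

```shell
# Suppose pool/backedup has a recursive znapzend plan and you want to
# exclude one child. Setting the org.znapzend:enabled property to 'off'
# on the child makes znapzend skip it:
zfs set org.znapzend:enabled=off pool/backedup/scratch

# Verify which datasets under the tree have the property set locally:
zfs get -r -s local org.znapzend:enabled pool/backedup
```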
Zandr Milewski
@zandr
On macOS 11 (Big Sur) running OpenZFS 2.0.0-1, znapzend (installed from Homebrew) segfaults immediately. How would I go about troubleshooting this?
Homebrew appears to install 0.20.0
Zandr Milewski
@zandr
Building from master seems to have done the trick. :)
Tobias Oetiker
@oetiker
uff ... glad to hear
jdrch
@jdrch
Does anyone have an example of a service manifest XML file for OpenIndiana/Illumos?
Paolo Marcheschi
@marcheschi

Does anyone have an example of a service manifest XML file for OpenIndiana/Illumos?

You can use Manifold to quickly create smf manifest https://code.google.com/archive/p/manifold/
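In case a concrete starting point helps: below is a minimal, untested sketch of an SMF manifest for znapzend (the install path, FMRI name, and dependency are assumptions to adapt; Manifold generates something along these lines):

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!-- Hypothetical manifest: adjust the znapzend path and FMRI to your setup -->
<service_bundle type="manifest" name="znapzend">
  <service name="system/filesystem/znapzend" type="service" version="1">
    <create_default_instance enabled="false"/>
    <single_instance/>
    <!-- start only after local filesystems are available -->
    <dependency name="fs-local" grouping="require_all" restart_on="none" type="service">
      <service_fmri value="svc:/system/filesystem/local:default"/>
    </dependency>
    <exec_method type="method" name="start"
                 exec="/opt/znapzend/bin/znapzend --daemonize"
                 timeout_seconds="60"/>
    <exec_method type="method" name="stop" exec=":kill" timeout_seconds="60"/>
  </service>
</service_bundle>
```

Import with `svccfg import znapzend.xml`, then enable with `svcadm enable znapzend`.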

Chris Barnes
@clbarnes
having some trouble getting systemctl to start the znapzend daemon - on my primary server it's fine; on the backup it finds no backup config, kills itself, restarts, and repeats until systemd's restart counter maxes out
i can't find any informative logs for znapzend - where might they be?
Chris Barnes
@clbarnes
having no config on the backup is intentional (i think) as that should be controlled by the strategy set out by the primary?
or does it only need to run on the primary?
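For later readers (hedged, based on how znapzend stores its configuration): backup plans live as ZFS properties on the source datasets, so the daemon only needs to run on the host that holds the sources; a pure destination host has no plans and therefore nothing to run, which is why the service exits there. You can check what a given host sees with:

```shell
# List the backup plans znapzend can find on this host
# (an empty list on a pure destination host is expected):
znapzendzetup list
```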
Adrian Gschwend
@ktk

question: I ran into the situation that I can't create new snapshots, not enough space on the volume. I can fix that obviously but what I'm surprised about is that znapzend logs that but then still seems to continue. I see among others:

# zfs snapshot -r zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226
cannot create snapshot 'zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226': out of space
no snapshots were created
# zfs list -H -o name -t snapshot zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226
cannot open 'zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226': dataset does not exist
[2021-05-18 10:22:27.66667] [41655] [warn] taking snapshot on zones/74e519c8-0010-41fb-846b-9301f5587797-disk0 failed: ERROR: cannot create snapshot zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226 at /opt/tools/lib/ZnapZend/ZFS.pm line 272.

so right now it still does a zfs recv on the remote system but I guess that will fail at one point

I noticed because it stopped cleaning up the old snapshots so now I run out of space on the destination
so my question is should it not fail in that situation and stop?
Tobias Oetiker
@oetiker
I am pretty sure we have not tested behavior with out of space situations
Adrian Gschwend
@ktk
ok that explains thanks
it's a smartos zone and there is a quota there so I ran into that
Hemanta Kumar G
@HemantaKG
How do I stop the backup of a single one of the datasets without disturbing the other datasets' backup plans? (I set up multiple backup plans for ZFS datasets of the same zpool)
Carsten John
@cjohn_system_admin_gitlab
I'm currently backing up systems via znapzend and I'm wondering how to secure the backup server against lateral movement by an attacker (nowadays ransomware attackers try to get rid of the backups first). If the primary fileserver is compromised, it's an easy job for an able attacker to make sure the backups are deleted, as the source needs SSH access to the critical zfs commands on the destination server. Initiating the whole thing the other way round (running the daemon on the target server) would circumvent this issue. Theoretically this should be possible, but it would perhaps need a complete rewrite.
gnasys
@gnasys
Maybe by doing a pull from the backup server: running znapzend on the backup server, defining the primary remote as the source and setting the local dataset as the destination. I didn't try that configuration, but I see no reason why it should not work
I made the jump to the 0.21 version; since then, every time I launch a znapzend command (zetup, ztatz etc.) I get the message "Subroutine delay redefined at /opt/znapzend-0.21.0/lib/Mojo/IOLoop.pm line 68". Is that something I should worry about, and what does it mean?
Tobias Oetiker
@oetiker
@cjohn_system_admin_gitlab you could add a wrapper command in the remote server's authorized_keys file, only allowing the use of zfs receive with appropriate options
when your main backup server gets compromised, the remote server is still safe
although I am not aware of any ransomware attacks that subverted ZFS servers
I think this mostly happens in Windows land
Carsten John
@cjohn_system_admin_gitlab
@oetiker, yes, I also guess the usual ransomware will not address this. My concern is that this is more or less a security-by-obscurity approach. If I limit the authorized key to "zfs receive" only, how is retention done? Snapshots need to be destroyed on the target system at some point in time, right?
Tobias Oetiker
@oetiker
this is true ... and not solved ... for something like this to work, the wrapper would have to be pretty smart, only allowing "legal" cleanup
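Picking up the wrapper idea, here is a rough sketch of what such a "smart" forced command could look like (the paths, prefix, and policy are assumptions, not part of znapzend; real-world use needs more careful validation):

```shell
# Hypothetical forced-command policy for the backup target's
# ~/.ssh/authorized_keys, used like:
#   command="/usr/local/bin/znapzend-guard" ssh-ed25519 AAAA... znapzend@source
# The guard allows 'zfs receive' under a fixed prefix and 'zfs destroy'
# of snapshot names only (they contain '@'), so retention cleanup still
# works but whole datasets can never be destroyed over this key.

PREFIX="tank/backups"

guard() {
    case "$1" in
        "zfs recv $PREFIX"* | "zfs receive $PREFIX"*)
            echo allow ;;
        "zfs destroy $PREFIX"*@*)
            echo allow ;;  # snapshot-only destroy: pattern requires an '@'
        *)
            echo deny ;;
    esac
}

# A real wrapper would then run something like:
#   [ "$(guard "$SSH_ORIGINAL_COMMAND")" = allow ] \
#     && exec $SSH_ORIGINAL_COMMAND || exit 1
```

One caveat: prefix matching alone is loose (it doesn't validate flags, for example), so treat this as a starting point rather than a hardened policy.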
eyJhb
@eyjhb:eyjhb.dk
Is there any plan to tag a new version of znapzend ? Having oetiker/znapzend#496 in a release would be nice
David Česal
@Dacesilian
Hello, I have errors in log: taking snapshot on tank/container failed: ERROR: cannot create snapshot tank/container@2021-09-19-153000 at /opt/znapzend-0.21.0/lib/ZnapZend/ZFS.pm line 339
When I try to create this snapshot, it normally works, but znapzend is failing.
Could the problem be that I have one settings entry for the whole tank/container (recursive) and then different settings for one specific dataset?
[2021-09-19 15:23:46.56695] [17078] [info] found a valid backup plan for tank/container...
[2021-09-19 15:23:46.56715] [17078] [info] found a valid backup plan for tank/container/subvol-165-disk-0...
In tank/container/subvol-165-disk-0, there are snapshots as there should be. No snapshots are created in the other datasets (tank/container and children).
David Česal
@Dacesilian

When I run znapzend with noaction:

WOULD # zfs snapshot tank/container/subvol-165-disk-0@2021-09-19-161500

WOULD # zfs snapshot -r tank/container@2021-09-19-161500

zfs list -H -o name -t filesystem,volume -r tank/container

Could the problem be that the subvol-165-disk-0 snapshot is created first, and then the recursive snapshot fails because a snapshot with that name already exists?
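For what it's worth, that theory matches how `zfs snapshot -r` behaves: recursive snapshots are atomic, so if any dataset in the tree already has a snapshot with that name, the whole operation fails. An illustrative (untested, hypothetical) session:

```shell
# The child already got its own snapshot for this timestamp:
zfs snapshot tank/container/subvol-165-disk-0@2021-09-19-161500

# The recursive snapshot then fails for the whole tree, because the
# snapshot name already exists on the child:
zfs snapshot -r tank/container@2021-09-19-161500
# cannot create snapshot ... dataset already exists
```

Avoiding overlapping plans on the same tree (for example via the org.znapzend:enabled=off property mentioned earlier in this channel) would sidestep the conflict.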