Tobias Oetiker
@oetiker
great work !
thank you
Jim Klimov
@jimklimov
I feel itchy... and scratchy.... way too often when I use stuff :)
Tobias Oetiker
@oetiker
glad you scratched it
Jim Klimov
@jimklimov
for a couple of decades now, and probably still, my "Why FOSS?" motto is "Forging the tools of my trade" which is not easy or possible in other ecosystems ;)
and since the late 90's, never had a chance to say directly: Thank you for the goodnesses of RRD(tool) and MRTG! (by the way)
Would you visit FOSDEM sometime so I can repeat that in person (assuming I'm approved this year too)?
Tobias Oetiker
@oetiker
it tends to clash with my winter holidays unfortunately ... for the last few years ... otherwise I would, yes
Jim Klimov
@jimklimov
Oh, yes... Andy said that the last time...
then if not visiting, be sure to have a good family time :)
Jim Klimov
@jimklimov
@oetiker : I posted some ideas about recursive send into oetiker/znapzend#438 - does that seem reasonable or did I miss something glaring? :-)
also seems oetiker/znapzend#437 can be closed
kevdogg
@kevdogg
Hi - just installed znapzend on Arch Linux (installed with ZFS on root), backing up to FreeNAS. Just a few questions since I'm a little confused about things. I believe I made my first backup correctly, however is there a way to verify? Where is the configuration stored? The web page states it's in the ZFS file structure itself -- so how do I read it? Although I have the systemd znapzend service started and enabled -- what program is involved with the scheduling? I ran the first task with a runonce command - how do I make this run automatically?
Tobias Oetiker
@oetiker
znapzend schedules itself
settings are stored in zfs properties
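For example - a minimal sketch of checking both points from the shell, with placeholder dataset names (znapzendzetup ships alongside znapzend):

znapzendzetup list                              # print the backup plan(s) read from the ZFS properties
zfs get all rpool/data | grep org.znapzend      # raw view of the same plan, stored as user properties on the source dataset
zfs list -t snapshot -r rpool/data              # the snapshots znapzend has created so far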
tuxwielder
@tuxwielder
Hi, I could use some help with the new feature that should allow excluding part of a recursive dataset.
tuxwielder
@tuxwielder
Looks like excluded snapshots are being deleted (after https://github.com/oetiker/znapzend/pull/379/files/8e65186d9b8a98502f192c615055d8c35aebf460), but we still send them. This causes ERRORs on later runs, since common snapshots are missing. Are we missing some logic?
--- ZnapZend.pm.dist    2019-11-28 17:07:17.927267363 +0100
+++ ZnapZend.pm    2019-11-28 17:44:33.203399136 +0100
@@ -336,6 +336,22 @@
         #from being snapshot/sent by setting property "org.znapzend:enabled"
         #to "off" on them
         for my $srcDataSet (@$srcSubDataSets){
+
+            # get the value for org.znapzend property
+            my @cmd = (@{$self->zZfs->priv}, qw(zfs get -H -o value org.znapzend:enabled), $srcDataSet);
+            print STDERR '# ' . join(' ', @cmd) . "\n" if $self->debug;
+            open my $prop, '-|', @cmd;
+
+            # if the property does not exist, the command will just return. In this case,
+            # the value is implicit "on"
+            $prop = <$prop> || "on";
+            chomp($prop);
+            if ( $prop eq 'off' ) {
+                $self->zLog->debug('Skipping ' . $srcDataSet . ' due to it being explicitly disabled.');
+                next;
+            }
+
+
             my $dstDataSet = $srcDataSet;
             $dstDataSet =~ s/^\Q$backupSet->{src}\E/$backupSet->{$dst}/;
Please note the missing "-s local", so it also works on inherited properties in the recursive dataset.
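For reference, the behaviour the patch keys on can be exercised by hand like this (dataset names are placeholders):

zfs set org.znapzend:enabled=off tank/data/scratch          # mark a child dataset so znapzend skips it
zfs get -H -o value org.znapzend:enabled tank/data/scratch  # same lookup as in the patch; no -s local, so inherited values are visible too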
tuxwielder
@tuxwielder
Also it looks like we are not properly destroying disabled snapshots when we have multiple disabled datasets:
--- lib/ZnapZend.pm.dist    2019-11-29 00:29:36.980904411 +0100
+++ lib/ZnapZend.pm    2019-11-29 00:21:08.546796322 +0100
@@ -615,7 +615,13 @@
             # removal here is non-recursive to allow for fine-grained control
             if ( @dataSetsExplicitelyDisabled ){
                $self->zLog->info("Requesting removal of marked datasets: ". join( ", ", @dataSetsExplicitelyDisabled));
-               $self->zZfs->destroySnapshots(@dataSetsExplicitelyDisabled, 0);
+
+               # We need to explicitly call destroySnapshots for each dataSet we found
+               # because it is only designed to destroy sets of snapshots _for the same filesystem_.
+               # Since we are/have a top-level recursive snapshot, recurse here too.
+               for my $dataSetToDestroy (@dataSetsExplicitelyDisabled){
+                   $self->zZfs->destroySnapshots($dataSetToDestroy, 1);
+               }
            }
         }
     }

As a side note, I think

$self->zZfs->destroySnapshots(@dataSetsExplicitelyDisabled, 0);

should have been:

$self->zZfs->destroySnapshots(\@dataSetsExplicitelyDisabled, 0);

but my "Perl"-ish is rusty :)

Tobias Oetiker
@oetiker
Hi @tuxwielder, maybe it's better to log these on GitHub and tag @jimklimov
tuxwielder
@tuxwielder
Hi Tobias, thought to discuss first, but sure will do :)
Tobias Oetiker
@oetiker
yea :) it seems that the whole recursive and include/exclude/override topic proves to be quite involved
minorsatellite
@minorsatellite
I have Linux hosts that are constantly falling out of sync. I am not sure if the issue is with the send side or the remote side, but this is the third time this has happened on a dataset larger than 30TB. It seems like this product is not really production-ready, at least for Linux. Is anyone else having this issue? I feel like my hands are tied and I am going to have to move over to SANOID.
minorsatellite
@minorsatellite
[Wed Dec 18 14:02:27 2019] [warn] ERROR: snapshot(s) exist on destination, but no common found on source and destination clean up destination root@192.xx.xx.xx:pool1/dr/co/share (i.e. destroy existing snapshots)
minorsatellite
@minorsatellite
Is there any provision in znapzend that would allow for 'holding' snapshots, on the sender and receiver, to prevent a total resync of the two systems?
zfs hold [-r] tag snapshot...
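A minimal sketch of that idea, assuming a common baseline snapshot created with runonce (tag, dataset and snapshot names are placeholders):

zfs hold -r keep-baseline tank/data@baseline          # on the source
zfs hold -r keep-baseline backuppool/data@baseline    # and on the destination
zfs holds -r tank/data@baseline                       # list active holds
zfs release -r keep-baseline tank/data@baseline       # drop the hold once it is no longer needed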
minorsatellite
@minorsatellite
Or I suppose that is something that can be done administratively by the sysadmin prior to going live with the automated snapshot and replication schedule.
minorsatellite
@minorsatellite
And more importantly, would there be any downside to tagging snapshots with a hold? Any negative implications for znapzend plans?
fbnielsen
@fbnielsen
Hi, this is not directly related to znapzend, but maybe someone has some experience. When I create and clone a snapshot on a Windows server, not all files are there. I also tried to snapshot and clone while the Windows server was turned off, so data to the "local" disk should have been flushed.
fbnielsen
@fbnielsen
Hi, I get this error in the log:
[Thu Jan 2 00:01:49 2020] [warn] ERROR: suspending cleanup source dataset because 1 send task(s) failed:
[Thu Jan 2 00:01:49 2020] [warn] +--> ERROR: cannot send snapshots to backup/mother/saspool/vms/vm-100-disk-1 on root@pdzfs
[Thu Jan 2 00:01:49 2020] [info] done with backupset saspool/vms in 107 seconds
but the snapshot seems to have been sent:
minorsatellite
@minorsatellite
I am looking for a method to prevent local and remote datasets from falling out of sync due to accidental snapshot deletions by znapzend. I have not had a chance to test this in a lab environment but I intend to. My thinking is that if I place a hold on one or more common starter (full) snapshots created using the "runonce" parameter, that should allow me to roll back to the starter baseline snapshot should one or more of the subsequent, and more recent, snapshots get mistakenly deleted. While the remote copy would technically fall out of sync, the ability to resync would not be lost. Is this an accurate and true assumption? If not, it would be nice to add this logic to znapzend.
Tobias Oetiker
@oetiker
using bookmarks might be the better option
is hold not a Solaris-only extension?
minorsatellite
@minorsatellite
No, hold is available in all ZFS code bases (Illumos, FreeBSD, Linux). I will take a look at bookmarks, thanks.
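For comparison, a bookmark-based sketch (all names are placeholders): a bookmark survives even after its snapshot is destroyed on the source, and can still seed an incremental send as long as the destination keeps the corresponding snapshot.

zfs bookmark tank/data@2020-01-01-000000 tank/data#baseline
zfs send -i tank/data#baseline tank/data@2020-01-08-000000 | ssh root@backuphost zfs recv backuppool/data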
pspikings
@pspikings
Hi..... I want to replicate my encrypted snapshots to a remote pool and have them stored encrypted without the key. I think that just adding -w to the zfs send options is all that's required but znapzend can't set that. Would you be interested in a PR to do that and is there anything to watch out for? Would you think it best just to support -w or have a config setting to set arbitrary extra flags? :)
Tobias Oetiker
@oetiker
sure
to keep things consistent, an explicit option might be best ... wondering though if the option could be set automatically
pspikings
@pspikings
good point, it could be set if the source dataset is encrypted but then you might get people wanting the zfs default behavior of decrypting before sending so the backup doesn't require the key to read :)
Tobias Oetiker
@oetiker
I would rather make this behavior configurable :)
but that is probably counter intuitive
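To illustrate the two behaviours being weighed here in plain zfs terms (host and dataset names are placeholders):

zfs send tank/enc@snap | ssh backuphost zfs recv backup/enc      # default: the sender decrypts, the destination stores plaintext and needs no key
zfs send -w tank/enc@snap | ssh backuphost zfs recv backup/enc   # raw (-w): blocks travel as stored on disk, the destination stays encrypted and keyless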
pspikings
@pspikings
PR opened.... it's working as expected here :) See PR comment about future improvement
Also.... belated thanks for RRDtool, I used that lots in a previous job, very useful! :)
Tobias Oetiker
@oetiker
glad you like it
James Crocker
@james-crocker
Hello - am enjoying znapzend! I've encrypted datasets and was messing with the code base to support sending raw streams, only to do a fresh pull and see the sendRaw feature - yeah! So, while I don't have to continue pursuing that feature myself, I'm still looking at how best to manage the ZFS pools for send/recv. Specifically, I really like that the 'config' is stored in the ZFS properties - keeping everything nicely self-contained. However, 'features' are not stored along with the other org.znapzend ZFS properties. Currently then, to enable the sendRaw feature it must be passed via $ZNAPZENDOPTIONS to the znapzend.service unit's ExecStart (and/or configured in an EnvironmentFile). Consequently, if I want one particular pool to sendRaw while another doesn't, it would require creating multiple service unit files with or without the --features option passed to znapzend. I'd like to take on the task of embedding the features as an org.znapzend property and referencing it when znapzend is called - perhaps allowing the CLI --features to override any ZFS org.znapzend:features property. But before I invest any goodly time and effort to do so: were there any reasons the CLI --features option is not already stored as a ZFS org.znapzend:features property for the datasets? Thanks!
Tobias Oetiker
@oetiker
yes, having features per destination would be great, we just have not found a good sustainable way to do that yet. but maybe adding a per-destination feature property with key/value pairs might be a nice way of doing it ...
another, much more radical approach we thought about was to have the config as JSON, base64 encode it and store it in a bunch of properties (not sure how much data a single property can take)
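A rough sketch of that second idea, purely as an illustration (the property name, the JSON layout and the base64 plumbing are assumptions, not anything znapzend does today):

zfs set org.znapzend:config="$(printf '{"dst_a":{"features":"sendRaw"}}' | base64 | tr -d '\n')" tank/data
zfs get -H -o value org.znapzend:config tank/data | base64 -d    # decode to recover the JSON; ZFS caps a single property value at a few kilobytes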