--autoCreation, but znapzend doesn't seem to handle cleanly the case where the filesystem can't be mounted after creation: I get […]
cannot open 'thirtythree/backups/sosiego/mail': dataset does not exist
filesystem successfully created, but it may only be mounted by root
cannot open 'thirtythree/backups/sosiego/mail': dataset does not exist
ERROR: cannot create dataSet thirtythree/backups/sosiego/mail
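One possible workaround (a sketch, not a confirmed fix; dataset names are taken from the log above): pre-create the destination dataset as root, unmounted, so znapzend's auto-creation never has to perform the mount step that non-root users are blocked from:

```shell
# Pre-create the destination as root with no mountpoint, so the
# backup user never triggers the "may only be mounted by root" path.
# -p creates any missing parent datasets.
zfs create -p -o mountpoint=none thirtythree/backups/sosiego/mail

# Confirm the dataset now exists before re-running znapzend.
zfs list thirtythree/backups/sosiego/mail
```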
zfs set org.znapzend:enabled=off pool/backeduptree/childnotbackedup
should do it
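To double-check that the override took effect on the child (same dataset name as in the command above), you can read the property back and look at its source:

```shell
# A "local" source on the child confirms the per-dataset override
# is in place rather than an inherited value.
zfs get -o name,value,source org.znapzend:enabled pool/backeduptree/childnotbackedup
```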
Does anyone have an example of a service manifest XML file for OpenIndiana/Illumos?
You can use Manifold to quickly create an SMF manifest: https://code.google.com/archive/p/manifold/
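Once you have a manifest (the file path below is a placeholder), importing and enabling the service uses the standard SMF tools on OpenIndiana/Illumos:

```shell
# Run as root. The manifest path is hypothetical.
svccfg validate /path/to/znapzend.xml   # sanity-check the XML first
svccfg import /path/to/znapzend.xml     # register the service
svcadm enable znapzend                  # start it
svcs -l znapzend                        # inspect the service state
```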
question: I ran into a situation where I can't create new snapshots because there isn't enough space on the volume. I can fix that, obviously, but what surprises me is that znapzend logs the failure and then still seems to continue. Among other things, I see:
# zfs snapshot -r zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226
cannot create snapshot 'zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226': out of space
no snapshots were created
# zfs list -H -o name -t snapshot zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226
cannot open 'zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226': dataset does not exist
[2021-05-18 10:22:27.66667] [41655] [warn] taking snapshot on zones/74e519c8-0010-41fb-846b-9301f5587797-disk0 failed: ERROR: cannot create snapshot zones/74e519c8-0010-41fb-846b-9301f5587797-disk0@2021-05-18-102226 at /opt/tools/lib/ZnapZend/ZFS.pm line 272.
so right now it still does a zfs recv on the remote system, but I guess that will fail at some point
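Before retrying, it can help to see where the space actually went (pool and dataset names below are taken from the log; this is a generic triage sketch, not a znapzend feature):

```shell
# Pool-level free space.
zpool list zones

# Per-dataset breakdown, including space pinned by snapshots
# (the USEDSNAP column).
zfs list -o space zones/74e519c8-0010-41fb-846b-9301f5587797-disk0

# Snapshots sorted by space used, largest last, to pick
# candidates for removal.
zfs list -t snapshot -o name,used -s used -r zones | tail
```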
When I run znapzend with --noaction:
Could the problem be that the subvol-165-disk-0 snapshot is created first, and the recursive snapshot then fails because it already exists?
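One way to check that theory (dataset and snapshot names below are hypothetical, chosen to match the Proxmox-style name in the question) is to list the child's snapshots and remove any leftover that collides with the name the recursive snapshot would use:

```shell
# List snapshots on the child to spot a leftover with the same
# name the recursive snapshot is about to create.
zfs list -t snapshot -o name,creation rpool/data/subvol-165-disk-0

# If a stale snapshot is blocking the recursive one, remove just
# that snapshot (name is hypothetical).
zfs destroy rpool/data/subvol-165-disk-0@conflicting-snapshot
```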
Hi, I'm having a problem with my ZFS backup sends. My scheduled plan didn't run for several days, so I have a bunch of snapshots (about 4 days' worth) that aren't backed up. I've run the plan manually and here are the logs:
cannot receive incremental stream: dataset is busy
mbuffer: error: outputThread: error writing to <stdout> at offset 0x55120000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe
warning: cannot send 'zroot/data/timemachine@10-07-2021-00:00:00': signal received
warning: cannot send 'zroot/data/timemachine@10-07-2021-06:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-07-2021-12:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-07-2021-18:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-08-2021-00:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-08-2021-06:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-08-2021-12:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-08-2021-18:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-09-2021-00:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-09-2021-06:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-09-2021-12:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-09-2021-18:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-10-2021-00:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-10-2021-06:00:00': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-10-2021-10:28:46': Broken pipe
warning: cannot send 'zroot/data/timemachine@10-10-2021-10:30:23': Broken pipe
cannot send 'zroot/data/timemachine': I/O error
[2021-10-10 10:31:06.50500] [80725] [warn] ERROR: cannot send snapshots to tank/backups/zfs_backup/arch-TM on root@10.0.1.197
[2021-10-10 10:31:06.50525] [80725] [warn] ERROR: suspending cleanup source dataset zroot/data/timemachine because 1 send task(s) failed:
[2021-10-10 10:31:06.50638] [80725] [warn] +--> ERROR: cannot send snapshots to tank/backups/zfs_backup/arch-TM on root@10.0.1.197
[2021-10-10 10:31:06.50655] [80725] [info] done with backupset zroot/data/timemachine in 43 seconds
[2021-10-10 10:31:06.50837] [80689] [debug] send/receive worker for zroot/data/timemachine done (80725)
znapzend (PID=80689) is done.
It seems to break at this step:
'/usr/local/bin/mbuffer -q -s 256k -W 600 -m 200M|zfs recv -F tank/backups/zfs_backup/arch-TM'
cannot receive incremental stream: dataset is busy
Is this a problem with the mbuffer command or something else?
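"dataset is busy" usually comes from the receive side rather than from mbuffer; the broken-pipe warnings are just the sending zfs and mbuffer reacting to the downstream zfs recv dying. A few things worth checking on the destination (host and dataset names are taken from the log above; this is a triage sketch of common causes, not a confirmed diagnosis):

```shell
# On the receiving host (10.0.1.197), run as root:

# Is the destination mounted and possibly in use by another process?
zfs get mounted tank/backups/zfs_backup/arch-TM

# Unmount it so the incremental receive (with -F) can roll it back.
zfs umount tank/backups/zfs_backup/arch-TM

# Holds on snapshots also block rollback/destroy; check each one.
for s in $(zfs list -H -t snapshot -o name -r tank/backups/zfs_backup/arch-TM); do
  zfs holds "$s"
done
```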