TheSmoker
@TheSmoker
How to specify feature=compress in znapzendzetup?
oetiker/znapzend#334
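As far as I can tell, compressed is a daemon-level feature flag rather than a per-dataset znapzendzetup setting, so a minimal sketch (assuming the stock znapzend CLI) is to pass it when starting the daemon:

# enable the compressed feature for all sends handled by this daemon
znapzend --daemonize --features=compressed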
chaerle
@chaerle
@TheSmoker The option does not work in znapzendzetup, so the issue should not be closed. Is it correct that I have to change the source code "has compressed ...."?
minorsatellite
@minorsatellite
My replication stopped recently. Using the --runonce argument, the test job fails with "ERROR: snapshot(s) exist on destination, but no common found on source and destination clean up destination". Any suggestions on how to recover from this?
minorsatellite
@minorsatellite
The recommendation is to "destroy existing snapshots" on the receive side. Does that imply destroying "all" snapshots sent by znapzend since Day One? I am counting approx 2500 snapshots in total. How will this impact the state of my remote dataset, and is it recommended to take a manual snapshot first, or must the dataset be entirely snapshot-free for znapzend to continue sending its own snapshots?
Tobias Oetiker
@oetiker
if there is no common snapshot between local and remote zfs, then it is not possible to continue syncing ...
you have to drop the entire remote dataset and start from scratch
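A minimal sketch for checking whether any common snapshot survives (dataset and host names below are placeholders):

# snapshots on the source
zfs list -H -t snapshot -o name -s creation tank/data
# snapshots on the destination
ssh backup@remote zfs list -H -t snapshot -o name -s creation pool1/backup
# if no snapshot name appears in both lists, the destination has to be recreated from a full send
ssh backup@remote zfs destroy -r pool1/backup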
minorsatellite
@minorsatellite
24TB of data needs to be re-synced? Ouch! Why did this happen, and how can it be prevented?
Tobias Oetiker
@oetiker
it 'should' not be possible
since znapzend does not remove local snapshots when it fails to sync them
it might be worth investigating how the state was reached
minorsatellite
@minorsatellite
Are you suggesting that some other process might have deleted the source snapshot?
Tobias Oetiker
@oetiker
that, or manual intervention
or a bug in znapzend
minorsatellite
@minorsatellite
It must be a bug since no snapshots were manually deleted, and the API that manages the snapshots for the commercial zfs layer isn't aware of the other snapshots being created by znapzend.
Any suggestions on how best to perform an RCA?
Tobias Oetiker
@oetiker
well, your environment may be different from ours in some unexpected way ... how much snapshot overlap do you have according to your configuration?
minorsatellite
@minorsatellite
By overlap you mean what exactly, from those created by the third-party APIs?
minorsatellite
@minorsatellite

I am attempting to create a new backup set using the "--rootExec=sudo" argument, as I do not want to SSH to the remote system as root to invoke the "zfs receive" command. However, when I create the backup set, the following errors occur:

sudo: no tty present and no askpass program specified
WARNING: executable '/usr/bin/mbuffer' does not exist on ... (though it does)
*** WARNING: destination 'user@10.100.10.21:pool1/fs1/fs2/fs3' does not exist, will be ignored! *** (though it does)

Lastly, when I run znapzend --noaction --debug, it just hangs.

Thoughts?

minorsatellite
@minorsatellite
Temporarily adding NOPASSWD: ALL to the sudo user resolved the tty error, but znapzend --noaction --debug still hangs
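A narrower alternative to NOPASSWD: ALL, assuming the remote user is called backupuser and stock Ubuntu paths, would be a sudoers rule limited to the commands znapzend actually runs:

# /etc/sudoers.d/znapzend (hypothetical; adjust user name and binary paths to your system)
backupuser ALL=(root) NOPASSWD: /sbin/zfs, /usr/bin/mbuffer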
minorsatellite
@minorsatellite

I have some strace output that is probably too verbose to post here, but here are the last few lines (it goes into a sleep cycle):

nanosleep({tv_sec=1, tv_nsec=0}, 0x7fff277d7a40) = 0
nanosleep({tv_sec=0, tv_nsec=0}, NULL) = 0
nanosleep({tv_sec=1, tv_nsec=0}, 0x7fff277d7a40) = 0
nanosleep({tv_sec=0, tv_nsec=0}, NULL) = 0

minorsatellite
@minorsatellite

I can't be 100% sure, but I believe the issue lies with ZoL bug #8478. With ZFS version 0.7.5 shipping with Ubuntu 18.04, I think it's only possible to run zfs send/receive streams as the root user.

zfsonlinux/zfs#8478
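If the ZoL version in use does support delegation, one possible workaround is to grant the receiving user the needed permissions on the destination (user and dataset names are placeholders; on 0.7.x the mount step may still require root, which is what the linked bug is about):

# run on the destination host as root
zfs allow -u backupuser create,mount,receive,destroy pool1/fs1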

Tobias Oetiker
@oetiker
it seems your ssh user is not allowed password-free login to the destination host
minorsatellite
@minorsatellite
Yes, I later realized that, thanks

What are the recommended steps to synchronize two datasets if I want to pre-seed the remote site with a backup of the local site, sent/received first to a pool on an external disk array, then sent/received a second time to the remote site while the array is attached locally over USB? The pool dataset size is around 30TB.

Note: I cannot do an initial sync across the wire due to ISP related issues.

minorsatellite
@minorsatellite
I kicked off a run-once job a few hours ago, with no snapshots in common between the two locations other than those created by znapzend. It seems to be sending quite a bit of data, but I cannot be sure whether the sent data solely represents the delta between the pre-seed copy and the latest version of the dataset, or something beyond that.
Tobias Oetiker
@oetiker
the important bit is that the data at the remote end has been created by receive of data sent by the local site ... you can do this over an intermediary disk which you transport to the remote site if you wish ... but there is no way (afaik) that you could 'resync' data between two zfs filesystems that you KNOW are the same, but have not been transferred via send/receive
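A rough sketch of the pre-seed-via-disk approach described above (pool, dataset, and snapshot names are placeholders, and the -R/-F usage should be adapted to your layout):

# on the source host, with the transport pool imported locally
zfs snapshot -r tank/data@seed
zfs send -R tank/data@seed | zfs receive -F transport/data
# at the remote site, with the transport pool imported there
zfs send -R transport/data@seed | zfs receive -F pool1/data
# later znapzend runs should then find @seed (or a newer common snapshot) as the incremental base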
Hemanta Kumar G
@HemantaKG
Hi, I am getting the following error(s) in syslog, and znapzend backup snapshots are not being created on the destination backup server (this issue started recently; it was working fine earlier). The errors are as follows:
znapzend[5569]: ERROR: snapshot(s) exist on destination, but no common found on source and destination clean up destination root@10.0.0.8:zasp1/backup (i.e. destroy existing snapshots)
znapzend[5569]: ERROR: suspending cleanup source dataset because at least one send task failed
it's creating snapshots on the source server, but failing to create them on the remote
Tobias Oetiker
@oetiker
in that case the only thing you can do is to remove all the snapshots at the destination and re-sync
once there is no common snapshot between source and destination, recovery is not possible
Manuel Oetiker
@moetiker
you can rename the destination; it needs more space, but you do not lose the data
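For example (placeholder names), assuming there is enough space in the destination pool:

# move the existing destination aside instead of destroying it
zfs rename pool1/backup pool1/backup.old
# the next znapzend run then recreates pool1/backup with a fresh full send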
Adrian Gschwend
@ktk
Question regarding recursion: the dry-run correctly states that on the remote destination I do not have the datasets down the tree. Will it create them on the first run, or do I have to do that manually?
could not find that in docs
Tobias Oetiker
@oetiker
data sets get created as needed
Adrian Gschwend
@ktk
excellent, thanks
minorsatellite
@minorsatellite

I am having issues installing znapzend on a variant of Ubuntu Xenial. Here is the output:

root@D52T-1ULH-1:/tmp/znapzend-0.19.0# /usr/bin/make install
Making install in thirdparty
make[1]: Entering directory '/tmp/znapzend-0.19.0/thirdparty'
GEN touch
Successfully installed Mojolicious-6.46
Successfully installed Module-Build-0.4224
Successfully installed IO-Pipely-0.005
Successfully installed Mojo-IOLoop-ForkCall-0.17
! Installing Scalar::Util failed. See /tmp/znapzend-0.19.0/thirdparty/work/1564546113.26736/build.log for details. Retry with --force to force install it.
Successfully installed Test-Harness-3.42 (upgraded from 3.35)
5 distributions installed
Makefile:408: recipe for target 'touch' failed
make[1]: *** [touch] Error 123
make[1]: Leaving directory '/tmp/znapzend-0.19.0/thirdparty'
Makefile:495: recipe for target 'install-recursive' failed
make: *** [install-recursive] Error 1

minorsatellite
@minorsatellite
Any suggestions? Build.log wasn't very helpful.
Tobias Oetiker
@oetiker
maybe the compile environment is missing
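On Ubuntu/Debian the missing pieces are usually the compiler toolchain and Perl; a hedged sketch of the fix (package names may vary by release):

# install build prerequisites, then rebuild from the unpacked source
apt-get install build-essential perl
cd /tmp/znapzend-0.19.0 && ./configure && make && make install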
minorsatellite
@minorsatellite

Yes, that was the issue, thank you. The default mirrors on the box would not allow the build tools to be installed, so I compiled on another system.

My next issue is as follows. I did a distribution upgrade on a different system where znapzend has been running for over a year. I deleted and then re-added nearly the same plan (with some slight changes; I cannot use the edit command because 'vi' doesn't work properly in my Mac's Bash shell for some reason). I attempted to invoke the runonce command and got the following error back:

cannot receive new filesystem stream: destination has snapshots (eg. pool1/fs1/fs2/fs3) must destroy them to overwrite it
mbuffer: error: outputThread: error writing to <stdout> at offset 0x80000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe

My question is, how is it possible for the snapshots to get out of sync simply by adding/deleting a plan? Does this mean I have to resync everything once again? If that is the case, it would be the second time in less than 6 months. With over 20TB of data, that is not a sustainable solution, and I might then have to look at alternatives.

There must certainly be snapshots in common between the two systems, but how do I get znapzend to recognize that?

Any suggestions would be helpful.

Tobias Oetiker
@oetiker
changing the retention times does not have this effect. the error sounds like you changed the destination
minorsatellite
@minorsatellite
No the destination is exactly the same. The only thing I changed was the mbuffer port number.
minorsatellite
@minorsatellite
Problem solved. I realized that the notation used for the timestamp had also changed. I reverted to the previous timestamp format and znapzendztatz looks good again.
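For reference, the timestamp format is part of the plan, so a recreated plan has to use the same --tsformat as the existing snapshots. A sketch with the default format (plans, datasets, and host are placeholders):

znapzendzetup create --tsformat='%Y-%m-%d-%H%M%S' \
  SRC '7d=>1d' tank/data \
  DST '30d=>1d' backup@remote:pool1/backup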
Anders Lagerqvist
@kroem_gitlab

So I've upgraded my Proxmox box to 6 (Debian 10) and now I'm getting a problem when trying to execute znapzend. I really know nothing about Perl, so I would appreciate any assistance...

root@cat:~# znapzend list
ListUtil.c: loadable library and perl binaries are mismatched (got handshake key 0xdb80080, needed 0xce00080)
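This handshake error usually means the bundled Perl modules were built against the pre-upgrade perl. A hedged sketch of one fix, assuming znapzend was installed from source (a packaged install would instead be reinstalled or upgraded via the package manager):

cd /usr/local/src/znapzend-0.19.0   # source directory path is an assumption
make clean
./configure && make && make install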

trijenhout
@trijenhout
Is it true that znapzend doesn't send the "mountpoint, nfs-share, etc." properties? Not even to an empty pool?
Tobias Oetiker
@oetiker
it does not actively do anything against it
trijenhout
@trijenhout
Hmm, interesting. I did a manual send+recv, and after that znapzend runonce worked OK; now on the daemon it's still OK. However, filling an empty pool through the daemon or runonce did not send the zfs dataset properties. I also found oetiker/znapzend#324, which looks like the same issue.
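If znapzend's plain sends indeed do not carry dataset properties, one workaround is to apply them on the destination by hand (dataset names and property values below are placeholders):

# list locally-set properties on the source
zfs get -H -s local -o property,value all tank/data
# re-apply the ones you care about on the destination
zfs set mountpoint=/export/data pool1/backup/data
zfs set sharenfs=on pool1/backup/data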
Eugene
@chuguniy
I have this src_plan: 30min=>5min. It creates a snapshot every 5 minutes as intended, but has been keeping them for almost a day now. Did I misunderstand the configuration?
Tobias Oetiker
@oetiker
if there is an error in the backup process, no cleanup happens ... make sure to check the logs
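A quick way to surface such errors (dataset name is a placeholder) is a single verbose pass plus a look at syslog:

znapzend --debug --noaction --runonce=tank/data
grep znapzend /var/log/syslog | tail -n 50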