Daniel Salzman
@salzmdan
:-D
I will try to explain after some testing...
klaus-nicat
@klaus-nicat
I see in the knotc man page that zone-retransfer has a '#', which means "indicates an optionally blocking operation". So, if a retransfer is triggered with or without the blocking flag, does that also change how knotd processes the retransfer (i.e. directly vs. putting the retransfer on some worker queue), or will it in non-blocking mode just respond with "OK" and then do a direct retransfer?
Daniel Salzman
@salzmdan
It changes how the server responds to the command. If non-blocking, it responds OK immediately upon successful zone event planning. If blocking, it responds once the event is finished.
So resp = ctl.receive_block() is needed for the synchronization.
You just have to increase the client control timeout
E.g.
import libknot.control

ctl = libknot.control.KnotCtl()
ctl.connect(conf.knot_socket[system])
# Client-side timeout in seconds; a blocking retransfer can take a while.
ctl.set_timeout(60)
try:
    # Flag "B" makes the command blocking: the reply arrives once the event finishes.
    ctl.send_block(cmd="zone-retransfer", zone=domain_name, flags="B")
    resp = ctl.receive_block()
except libknot.control.KnotCtlError as e:
    print(e)
finally:
    ctl.send(libknot.control.KnotCtlType.END)
    ctl.close()
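For comparison, a minimal sketch of the non-blocking variant under the same assumptions (conf.knot_socket[system] and domain_name come from the example above): without the "B" flag, knotd answers OK as soon as the retransfer event is planned, so a short client timeout is enough.

import libknot.control

ctl = libknot.control.KnotCtl()
ctl.connect(conf.knot_socket[system])
ctl.set_timeout(5)  # short timeout; the reply comes right after event planning
try:
    # No "B" flag: the server does not wait for the retransfer to finish.
    ctl.send_block(cmd="zone-retransfer", zone=domain_name)
    resp = ctl.receive_block()
except libknot.control.KnotCtlError as e:
    print(e)
finally:
    ctl.send(libknot.control.KnotCtlType.END)
    ctl.close()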
klaus-nicat
@klaus-nicat
Where is the timeout value used? On the client side or is the timeout signaled to knotd and used internally by knotd?
Daniel Salzman
@salzmdan
This is a client timeout only.
klaus-nicat
@klaus-nicat
What happens if, in blocking mode, the client timeout triggers? Will the event still be processed by knotd? And what will happen if, due to the timeout, my client sends the next retransfer command although the previous retransfer is not yet finished?
Daniel Salzman
@salzmdan
Yes, the command will still be processed. And the next command or connection will be pending until the previous command is finished.
Timeouts are protections against unexpected situations. You should set timeouts according to your usual needs.
klaus-nicat
@klaus-nicat
And when, or in which cases, does the "OS lacked necessary resources" error occur?
Daniel Salzman
@salzmdan
This happens when there are too many pending control connections. For example when one command times out and you try to send another command again and again.
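A rough sketch of one way to handle this on the client side, under the assumptions above (the helper name, retry count, and backoff interval are made up, not part of libknot):

import time
import libknot.control

def retransfer_blocking(socket_path, zone, timeout=300, retries=3):
    """Hypothetical helper: blocking zone-retransfer, backing off after a client timeout."""
    for attempt in range(retries):
        ctl = libknot.control.KnotCtl()
        ctl.connect(socket_path)
        ctl.set_timeout(timeout)
        try:
            ctl.send_block(cmd="zone-retransfer", zone=zone, flags="B")
            return ctl.receive_block()
        except libknot.control.KnotCtlError as e:
            # The event may still be running in knotd; don't hammer the control socket,
            # or pending connections pile up ("OS lacked necessary resources").
            print(e)
            time.sleep(30 * (attempt + 1))
        finally:
            try:
                ctl.send(libknot.control.KnotCtlType.END)
            except libknot.control.KnotCtlError:
                pass
            ctl.close()
    return None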
klaus-nicat
@klaus-nicat
Feature Request: the Python libknot wrapper always raises a libknot.control.KnotCtlError exception in case of problems. I think there should be different exceptions for every error, or at least a separation of connection errors from application errors, i.e. connection errors (timeout, "OS lacked necessary resources", connection refused ...) versus application errors (control socket communication works fine, but the requested operation failed, e.g. retransfer of a zone which is not provisioned, adding a zone which is already present ...). If you ever want to implement it I can open an issue on your GitLab.
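Until something like that exists upstream, a rough client-side approximation is to wrap the single exception type and classify it by its message; the class names and the string matching below are purely illustrative, not part of libknot:

class KnotConnectionError(Exception):
    """Hypothetical: socket-level problems (timeout, refused, lack of resources)."""

class KnotApplicationError(Exception):
    """Hypothetical: the control channel worked, but the requested operation failed."""

_CONNECTION_HINTS = ("timeout", "refused", "resources", "connection")

def classify(err):
    """Map a libknot.control.KnotCtlError to one of the two buckets above."""
    text = str(err).lower()
    if any(hint in text for hint in _CONNECTION_HINTS):
        return KnotConnectionError(str(err))
    return KnotApplicationError(str(err))

# usage inside an except block:
#     except libknot.control.KnotCtlError as e:
#         raise classify(e) from e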
Daniel Salzman
@salzmdan
If you like tickets, feel free to open one. I will check what is possible..
klaus-nicat
@klaus-nicat

Hi Daniel! I just noticed a small thing with catalogs:

add('2.zones.catz. PTR klaus.testet.') -> klaus.testet. added to catalog
add('3.zones.catz. PTR klaus.testet.') -> klaus.testet. added to catz zone, but ignored by catalog as already existent
delete('2.zones.catz.') -> klaus.testet. removed from catalog (although still present in catz zone)
restart knotd -> klaus.testet. readded to catalog

I wonder if this should be addressed or ignored.

For example, if add(3) and del(2) happened in the same DDNS UPDATE, the catalog would behave correctly and purge the member and then re-add it.
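For reference, a sketch of the same sequence as separate DDNS updates using dnspython (the catalog zone name catz. is taken from the owner names above; the server address and TTL are made-up example values):

import dns.query
import dns.update

catalog = "catz."      # catalog zone name, per the owner names above
server = "192.0.2.1"   # hypothetical address of the primary

# add('2.zones.catz. PTR klaus.testet.')
upd = dns.update.Update(catalog)
upd.add("2.zones.catz.", 3600, "PTR", "klaus.testet.")
dns.query.tcp(upd, server)

# add('3.zones.catz. PTR klaus.testet.') -- ignored by the catalog, member already exists
upd = dns.update.Update(catalog)
upd.add("3.zones.catz.", 3600, "PTR", "klaus.testet.")
dns.query.tcp(upd, server)

# delete('2.zones.catz.') -- member drops out of the catalog until knotd restarts
upd = dns.update.Update(catalog)
upd.delete("2.zones.catz.")
dns.query.tcp(upd, server)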
Daniel Salzman
@salzmdan
Definitely, Knot behaves inconsistently. Please create a ticket for this. However, you should avoid such situations, for example by using a hash function to generate the PTR owner names.
klaus-nicat
@klaus-nicat
Yes and no. If a customer deletes and adds a zone, it is good to have different owner names, as this ensures the zone gets deleted and added again, without any leftovers from the previous zone.
Daniel Salzman
@salzmdan
Right. But was it your intention?
Please create an issue. Libor will take a look at that when he is back from vacation.
klaus-nicat
@klaus-nicat
Always having the same unique-N for the same member would solve other issues. But I cannot be sure that a hash function never has a collision, so that brings other problems. Honestly, I do not know which way is better. Right now we built it so that we reuse the DB domain_id, which is different every time a domain is removed from and re-added to PowerDNS.
Daniel Salzman
@salzmdan
As the input to hashing has a specific format (dname), and if you use a reasonable function (sha1 or better), I don't think a collision is possible. ~Another possibility is to use the target domain name on the owner side (prefix). In most cases it must be possible :-)~
^ I think that the latest draft allows just one label for id
FYI Knot uses siphash for generated PTR record owners
klaus-nicat
@klaus-nicat
Why siphash? Any reason?
bleve
@bleve
Using just a hash function is not enough.
There must be a unique per-server salt to avoid collisions.
We used the hostname FQDN, but it could be a UUID or whatever.
That is, if the hash is calculated from the domain name.
Daniel Salzman
@salzmdan
@bleve why must the hash be unique per server? But I agree that some additional input data might be helpful, for example if you need to reset the member metadata.
bleve
@bleve
It must be unique, so there must not be hash collisions between two catalog zones feeding one secondary server.
It's required by the RFC that it is a unique id.
So salt + zone name is enough to generate something unique enough, but without a salt a collision is guaranteed if two knot-dns servers generate catalog zones.
Note: I haven't tried catalog zone generation, because I wrote our implementation before there was automatic generation, so I haven't really checked how generation works.
We just used the FQDN of the primary server as the salt because it was good enough for our purposes.
Daniel Salzman
@salzmdan
Hm, I'm probably missing something
bleve
@bleve
It's required that the id is unique per catalog zone.
So foobar.fi. must have a different id in two catalog zones.
Actually, the name of the catalog zone would be a better salt than the FQDN of the primary server :)
I didn't think of that when I wrote our catalog zone generator.
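A minimal sketch of that idea (the label layout and the use of sha1 are illustrative only; per the discussion above, Knot itself derives the owner with siphash, and the draft limits the id to a single label):

import hashlib

def member_ptr_owner(member_zone, catalog_zone):
    """Derive a single-label id for a member, salted with the catalog zone name
    (illustrative, not Knot's actual algorithm)."""
    digest = hashlib.sha1((catalog_zone.lower() + member_zone.lower()).encode()).hexdigest()
    return "{}.zones.{}".format(digest, catalog_zone)

# The same member gets different owners in two catalogs,
# but always the same owner within one catalog:
print(member_ptr_owner("foobar.fi.", "catz."))
print(member_ptr_owner("foobar.fi.", "other-catalog.example."))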
bleve
@bleve
But the RFC also states that the same zone can't be added twice to the same catalog - that's why the same zone must always get the same id in the same catalog zone.
The bug mentioned there has already been fixed afaik.
Might be a good idea to remove it from the docs.
Daniel Salzman
@salzmdan
Good catch!
bleve
@bleve
That wasn't me - it was my son :)