bleve
@bleve
Something like 500ms might be a better default value...
Daniel Salzman
@salzmdan
Is it because of transfers?
bleve
@bleve
yes, I was forced to set a bigger value because anycast DNS had TCP timeout issues when doing zone transfers.
so outbound zone transfers had problems with 200ms
Daniel Salzman
@salzmdan
Ok. The primary motivation for this option and its default was to mitigate possible Slowloris attacks. So low values are suitable for slave servers and higher values for master servers. There is no optimal configuration. To be honest, I would rather decrease this value :-)
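For reference, tcp-io-timeout is set in milliseconds in the server section of knot.conf; a minimal sketch with the 500ms value discussed here (the listen address is just a placeholder):

    server:
        listen: 0.0.0.0@53
        # Per-operation TCP I/O timeout in milliseconds: lower values limit
        # Slowloris-style stalling, higher values give slow outbound zone
        # transfers more room (see the 2.9 reference for details).
        tcp-io-timeout: 500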
bleve
@bleve
Well - our slaves ARE the servers for the anycast DNS.
so they are the master servers :)
Or there should be a separate setting for zone transfers to known servers using TSIG.
If I could set the timeout per server, there wouldn't be any issue.
Daniel Salzman
@salzmdan
No, it's at a higher level of the processing. It's not that easy.
bleve
@bleve
I can imagine.
Daniel Salzman
@salzmdan
Still, you can adjust the configuration! :-)
bleve
@bleve
sure - and I did for now.
@salzmdan is 2.9.2 still far off?
Daniel Salzman
@salzmdan
This week. Based on the previous release dates, you could guess when ;-)
bleve
@bleve
I haven't tracked those; I just think there is quite an important bug fix waiting for release :)
Daniel Salzman
@salzmdan
I think there are more users who use Knot DNS on slaves than on masters. So the default for tcp-io-timeout reflects this situation. Also users with huge zones must increase the limit.
bleve
@bleve
No huge zones here :)
Daniel Salzman
@salzmdan
Probably you are on the border :-)
bleve
@bleve
The issue with anycast is that some of the anycast service's hidden masters are on the other side of the world.
When the ping to the slave is 119ms, a 200ms TCP timeout is not so good...
Daniel Salzman
@salzmdan
But this timeout doesn't count the TCP segment round-trip time! There is already some data available to read! For short messages (1 TCP segment) it's ok.
bleve
@bleve
ok - so network latency is not an issue here?
Daniel Salzman
@salzmdan
In most cases (no transfers) no, it's not an issue.
bleve
@bleve
this is transfers.
Daniel Salzman
@salzmdan
Yes, I know. So in this case you have to tune the config.
If more users complain about this default, we can change it. So far I think it's ok.
bleve
@bleve
I didn't have any problems before anycast dns service.
My own servers don't have issues with timeouts.
But those are all less than 10ms away.
bleve
@bleve
Hmh. I smell fresh software :)
In production, it seems to work
bleve
@bleve
@salzmdan Back to the TCP timeout. Increasing it to 500ms did the trick and the slaves don't get errors any more.
Micah
@micah_gitlab
hi, my slaves are having trouble refreshing one of my zones from my master; they say "no usable master" and I cannot see why
Daniel Salzman
@salzmdan
Hi, any logs? Try enabling the debug verbosity level
Micah
@micah_gitlab
I only see "refresh, remote owl not usable"
and "refresh, remote owl, address 0.0.0.0@53, failed (connection reset)"
which is odd, the other zones between those machines are fine
Daniel Salzman
@salzmdan
Is the zone loaded on the master? Can you dig it?
Micah
@micah_gitlab
the master says "debug: TCP, send, address 0.0.0.0@45652 (connection timeout)" (where the 0.0.0.0 is the IP)
I can do dig @127.0.0.1 and get a response
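A plain lookup only exercises a short answer, while the failing operation is a zone transfer over TCP. A check closer to what the slave does (the zone name is just a placeholder) would be something like:

    dig @127.0.0.1 example.org SOA
    dig @127.0.0.1 example.org AXFR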
Daniel Salzman
@salzmdan
It sounds like the TCP timeout issue. Try increasing https://www.knot-dns.cz/docs/2.9/html/reference.html#tcp-io-timeout
on the master
Micah
@micah_gitlab
that would be weird, these machines are connected via a switch
Daniel Salzman
@salzmdan
I suspect the zone is bigger than the other ones?
Micah
@micah_gitlab
it is
huh, the tcp-io-timeout change seemed to resolve it
bleve
@bleve
@salzmdan I really think the default for tcp-io-timeout is too low :(
Daniel Salzman
@salzmdan
I understand your opinion, but you are influenced by your use case only. Probably you don't know how vulnerable TCP is :-)
There is no universal value for master servers. I can set the default to 500 ms, but it will solve just your problem and will affect all other slave servers, which don't need it...