Narzhan
@Narzhan
@salzmdan Sorry for not checking it beforehand. I will monitor that.
Daniel Salzman
@salzmdan
No problem.
Petr Špaček
@pspacek
@Narzhan BTW, Knot DNS has the ability to store and modify its configuration internally in a database, which makes it easier to modify it one item at a time.
See the knotc command and press the <TAB> key to get help once knotc is running.
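A minimal sketch of such a one-by-one edit through the configuration database, assuming knotd is running with a configuration database initialized (the item and value below are only examples):

    $ knotc conf-begin                           # open a configuration transaction
    $ knotc conf-set server.tcp-io-timeout 500   # example item and value
    $ knotc conf-diff                            # review the pending change
    $ knotc conf-commit                          # apply it to the running server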
bleve
@bleve
I'd suggest changing the default for tcp-io-timeout to something bigger than 200 ms, which seems to be far too little.
Something like 500 ms might be a better default value...
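For reference, a hedged sketch of what that change looks like as a static knot.conf fragment; tcp-io-timeout is documented in the server section and takes milliseconds, and 500 is just the value suggested above:

    server:
        # raise the TCP I/O timeout from the 200 ms default
        tcp-io-timeout: 500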
Daniel Salzman
@salzmdan
Is it because of transfers?
bleve
@bleve
Yes, I was forced to set a bigger value because our anycast DNS had TCP timeout issues when doing zone transfers.
So outbound zone transfers had problems with 200 ms.
Daniel Salzman
@salzmdan
OK. The primary motivation for this option and its default was to reduce the impact of possible Slowloris attacks. So low values are suitable for slave servers and higher values for master servers. There is no universally optimal configuration. To be honest, I would rather decrease this value :-)
bleve
@bleve
Well, our slaves ARE the servers for the anycast DNS...
they are the master servers :)
Or there should be a separate setting for zone transfers with known servers using TSIG.
If I could set the timeout per server, there wouldn't be any issue.
Daniel Salzman
@salzmdan
No, it's at a higher level of the processing. It's not that easy.
bleve
@bleve
I can imagine.
Daniel Salzman
@salzmdan
Still, you can adjust the configuration! :-)
bleve
@bleve
Sure, and I did for now.
@salzmdan is 2.9.2 still far off?
Daniel Salzman
@salzmdan
This week. Based on the previous release dates, you could guess when ;-)
bleve
@bleve
I haven't tracked those; I'd just think there is a quite important bug fix waiting for release :)
Daniel Salzman
@salzmdan
I think there are more users who run Knot DNS on slaves than on masters, so the default for tcp-io-timeout reflects that situation. Also, users with huge zones must increase the limit.
bleve
@bleve
No huge zones here :)
Daniel Salzman
@salzmdan
You are probably on the borderline :-)
bleve
@bleve
The issue with anycast is that some of the anycast service's hidden masters are on the other side of the world.
When the ping to the slave is 119 ms, a 200 ms TCP timeout is not so good...
Daniel Salzman
@salzmdan
But this timeout doesn't count the TCP segment round-trip time! There is already some data available to read! For short messages (one TCP segment) it's OK.
bleve
@bleve
OK, so network latency is not an issue here?
Daniel Salzman
@salzmdan
In most cases (no transfers), no, it's not an issue.
bleve
@bleve
This case is transfers, though.
Daniel Salzman
@salzmdan
Yes, I know. So in this case you have to tune the config.
If more users complain about this default, we can change it. So far I think it's OK.
bleve
@bleve
I didn't have any problems before the anycast DNS service.
My own servers don't have issues with timeouts.
But those are all less than 10 ms away.
bleve
@bleve
Hmh. I smell fresh software :)
In production it seems to work.
bleve
@bleve
@salzmdan Back to the TCP timeout: increasing it to 500 ms did the trick, and the slaves don't get errors any more.
Micah
@micah_gitlab
Hi, my slaves are having trouble refreshing one of my zones from my master; they say "no usable master" and I cannot see why.
Daniel Salzman
@salzmdan
Hi, any logs? Try enabling the debug verbosity level.
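A minimal sketch of raising verbosity in knot.conf; the log section and the debug severity come from the Knot DNS documentation, and the syslog target is only one possible choice:

    log:
      - target: syslog   # or stderr, or a file path
        any: debug       # debug-level messages for all categories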
Micah
@micah_gitlab
I only see "refresh, remote owl not usable"
and "refresh, remote owl, address 0.0.0.0@53, failed (connection reset)",
which is odd; the other zones between those machines are fine.
Daniel Salzman
@salzmdan
Is the zone loaded on the master? Can you dig it?
Micah
@micah_gitlab
The master says "debug: TCP, send, address 0.0.0.0@45652 (connection timeout)" (where the 0.0.0.0 is the real IP).
I can do dig @127.0.0.1 and get a response.
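As an additional sanity check on the transfer path itself, one could query over TCP and attempt a full transfer directly; the zone name below is a placeholder, and the AXFR works only if the master's ACL permits it from that address:

    $ dig @127.0.0.1 example.com SOA +tcp    # does the master answer the zone over TCP?
    $ dig @127.0.0.1 example.com AXFR        # does a full zone transfer complete?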
Daniel Salzman
@salzmdan
It sounds like the TCP timeout issue. Try increasing https://www.knot-dns.cz/docs/2.9/html/reference.html#tcp-io-timeout
on the master
Micah
@micah_gitlab
That would be weird; these machines are connected via a switch.
Daniel Salzman
@salzmdan
I suspect the zone is bigger than the other ones?