Jakub Ružička
@jruzicka-nic
Yeah, I announced the termination of the knot-dns OBS repos yesterday, and knot-resolver is likely to follow shortly. A build system really shouldn't force a new package release; what bad taste :[
Jakub Ružička
@jruzicka-nic
@micah_gitlab I, too, am very pleasantly surprised by knot packaging, it's nearly state of the art AFAICT ;)
Micah
@micah_gitlab
tkrizek: I was just able to get the -2 package and it worked perfectly
Robert Šefr
@robcza
Having issues on one of the resolvers accessing some of the domains on wp-hosting after yesterday's issue with the authoritative servers. I'm not able to read the debug log properly. Could I ask you for help?
https://gist.githubusercontent.com/robcza/aefbe161ed98519c8e13648529a2f690/raw/9fcc15708bdb1886c30304d2313eec64a834e226/wp-hosting.cz
Vladimír Čunát
@vcunat
@robcza: they have two IPv4 NSs and neither replies (over UDP or TCP). The same is still happening from my point of view ATM.
I assume you turned IPv6 off at that point? That one address seems to work here.
Petr Špaček
@pspacek
Well, it seems (https://www.facebook.com/Subreg.CZ/posts) that they had quite a serious outage, so it is not exactly surprising it died :-)
Vladimír Čunát
@vcunat
Based on what I had read, I thought Subreg's DNS was up already long before I tested it.
titouwan
@titouwan
Anyone having issues with stats.frequent() reporting only a count of 1 for all entries?
Vladimír Čunát
@vcunat
I don't.
Beware of
#define FREQUENT_PSAMPLE  10 /* Sampling rate, 1 in N */
titouwan
@titouwan
I tried a script that does 1000 queries for the same domain, and still the first entry in the list is something irrelevant and all entries have [count] => 1
Petr Špaček
@pspacek
@titouwan How many kresd instances are you running on the resolver machine?
titouwan
@titouwan
@pspacek I'm running 3: two for 53/udp+tcp, one for TLS
Petr Špaček
@pspacek
Okay. Then I guess the traffic goes to the other instance and that's why you do not see it in stats. Connect to the other control socket and check it there.
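A quick way to check each instance, assuming the default systemd deployment where every kresd instance exposes a control socket under /run/knot-resolver/control/ (adjust the path if your layout differs):
$ sudo socat - UNIX-CONNECT:/run/knot-resolver/control/1
> stats.frequent()
$ sudo socat - UNIX-CONNECT:/run/knot-resolver/control/2
> stats.frequent()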
titouwan
@titouwan
I thought of that, but it's the same on all sockets
and I tried running only one instance
matrixbot
@matrixbot
tkrizek Could you do a quick check that you're indeed sending the queries to kresd? E.g. configure it to REFUSE all queries and verify your script receives REFUSED rcodes: policy.add(policy.all(policy.REFUSE))
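For example (assuming kresd listens on 127.0.0.1:53), a REFUSED status in dig's header would confirm the query actually reached kresd:
$ dig @127.0.0.1 example.com A
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, ...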
Vladimír Čunát
@vcunat
I usually debug such stuff in an interactive session in verbose mode. That way I can see logs from any queries coupled with a CLI allowing me to inspect the internals like stats.frequent().
(you get the session by simply running kresd -v ... manually in terminal)
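A minimal sketch of such a session (the config path is illustrative, and a port clash with already-running instances may require a different listen address):
$ kresd -v -c /etc/knot-resolver/kresd.conf
> stats.frequent()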
titouwan
@titouwan
thanks, I'll try that
git-ed
@ookangzheng
How do I tell knot-resolver not to return IPv6 link-local addresses when a domain does not have IPv6?
example: dig githubstatus.com AAAA
will return:
;; ANSWER SECTION:
githubstatus.com.    900    IN    AAAA    fe80::21b:aabb:b9c7:6c99
githubstatus.com.    900    IN    AAAA    fe80::21b:aabb:b9c7:6d99
githubstatus.com.    900    IN    AAAA    fe80::21b:aabb:b9c7:6e99
githubstatus.com.    900    IN    AAAA    fe80::21b:aabb:b9c7:6f99

;; AUTHORITY SECTION:
githubstatus.com.    900    IN    SOA    ns-1330.awsdns-38.org. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
Vladimír Čunát
@vcunat
Eh, who would put fe80 addresses into DNS? (I can't see such nonsense records from my point of view.)
matrixbot
@matrixbot
tkrizek What's your configuration? I see NOERROR with 0 answers, not any IPv6 local IPs
Vladimír Čunát
@vcunat
Still, our rebinding module does filter the fe80 prefix...
(it's just not enabled by default)
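If that filtering is wanted, loading the module in the config should be enough; a sketch following the documented ordering:
modules.load('rebinding < iterate')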
git-ed
@ookangzheng
Maybe it is my fault? A misconfig?
It's actually my fault, I put this into my config:
modules = {
        'policy',
        'stats',
        'http',
        'hints',
        'serve_stale < cache',
        'workarounds < iterate',
        dns64 = 'fe80::21b:77ff:0:0',
}
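For reference, the dns64 line is what synthesizes those AAAA records; a sketch of the fix, assuming NAT64 is not actually needed here (if it is, use a real NAT64 prefix such as the RFC 6052 well-known 64:ff9b::, never a link-local one):
-- either remove the dns64 entry from modules entirely, or:
modules = {
        'policy', 'stats', 'http', 'hints',
        'serve_stale < cache', 'workarounds < iterate',
        dns64 = '64:ff9b::',  -- only if this resolver really serves NAT64 clients
}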
Ahmed Mafaz
@ahmedmafaz
Hello, how do I force SafeSearch using knot-resolver 5.1.3?
Vladimír Čunát
@vcunat
@ahmedmafaz: I don't know off the top of my head, but I believe the openwrt adblock script implements it for (recent versions of) knot-resolver as well: https://github.com/openwrt/packages/blob/master/net/adblock/files/adblock.sh
Ahmed Mafaz
@ahmedmafaz
I checked the documentation and this seems to work; added to kresd.conf:
policy.add(
    policy.suffix(
        policy.ANSWER(
            { [kres.type.A] = { rdata=kres.str2ip('216.239.38.120'), ttl=300 } }
        ), { todname('google.com') }))
Petr Špaček
@pspacek
Something like that.
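One way to verify the rule, assuming kresd listens locally: any name under google.com should now come back with the forced A record.
$ dig @127.0.0.1 www.google.com A +short
216.239.38.120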
Robert Šefr
@robcza

(kresd 5.1.3) We are using the following configuration line in cases where we want to bind to all available IP addresses, and it works very well:
for name, iface in pairs(net.interfaces()) do pcall(net.listen, {iface['addr'], 53 }) end

Usually the port binding is immediate, but on one particular instance we see a huge delay: it takes almost a minute before the kresd process(es) bind to the ports (as observed through netstat).
Does anyone have an idea what could be the root cause of such behavior?

Vladimír Čunát
@vcunat
@robcza: and in the meantime those kresd processes still won't answer anything?
Robert Šefr
@robcza
@vcunat no, they basically don't even log anything; there is silence for about 40-50 s, and then everything just goes through: it binds to the ports and starts working as usual
Vladimír Čunát
@vcunat
Right, apparently some operation that blocks for a long time gets executed.
Maybe it will be easiest to debug by sending a signal that causes a coredump. Then we can look at the backtrace from the time when it's stuck and hopefully determine the probable cause more easily.
sudo pkill -SIGABRT kresd perhaps
Robert Šefr
@robcza
sounds good, will try
Vladimír Čunát
@vcunat
Though I don't know what your coredump settings are. A standard coredumpctl setup makes this rather easy.
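With coredumpctl in place, something along these lines should produce the backtrace (a sketch; the exact invocation may differ per distribution):
$ coredumpctl list kresd
$ coredumpctl gdb kresd
(gdb) thread apply all bt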
Robert Šefr
@robcza
Vladimír Čunát
@vcunat
Yes, the open() syscall on data.mdb, apparently.
I can't see a reason why it might take such a long time.
Robert Šefr
@robcza
Thanks a lot. We'll investigate; it will be something specific to that environment. In case we find something helpful, we'll report back.
Robert Šefr
@robcza
It seems the ramdisk is not actually a ramdisk, and the cache is set to more than 3 GB; in total it takes a lot of time to allocate the mdb file
Vladimír Čunát
@vcunat
In recent versions we use posix_fallocate() to force allocation of the space. For example, in case of tmpfs that means those 3G will always "exist as reserved", either in RAM or in swap.
(Otherwise out-of-space could happen in really irrecoverable moments.)
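For reference, the cache size and backing path are set in the config; a sketch with illustrative values (point the path at a real tmpfs, or shrink the size to what the backing store can allocate quickly):
cache.open(3 * GB, 'lmdb:///var/cache/knot-resolver')
-- or just adjust the size of the already-open cache:
cache.size = 1 * GB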
Andreas Oberritter
@mtdcr
Hi! I'm running knot-resolver 5.1.3 and I just stumbled upon a problem. I have a host called "cloud", which has an IPv4 entry in a file loaded with hints.add_hosts(). Apparently a TLD called "cloud" also exists, and with my entry in the hosts file, DNSSEC validation fails for all .cloud domains.
Is this something I should expect? With the recent flood of gTLDs, this may become a frequent issue, I guess.
Vladimír Čunát
@vcunat
Yes, that's why you should be careful about squatting on a namespace that belongs to someone else. Especially the "very nice" names are prone to collisions.
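One way to avoid the collision is to keep such single-label names under a private suffix rather than the bare host name; a sketch using the hints module directly (home.arpa is the RFC 8375 private-use zone, and the address is illustrative):
hints['cloud.home.arpa'] = '192.168.1.10'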