titouwan
@titouwan
I tried a script that does 1000 queries for the same domain, and still the first entry in the list is something irrelevant and all entries have [count] => 1
Petr Špaček
@pspacek
@titouwan How many kresd instances are you running on the resolver machine?
titouwan
@titouwan
@pspacek i'm running 3, two for 53/udp+tcp, 1 for tls
Petr Špaček
@pspacek
Okay. Then I guess the traffic goes to the other instance and that's why you do not see it in stats. Connect to the other control socket and check it there.
titouwan
@titouwan
i thought of that but same on all sockets
and I tried to run only one instance
matrixbot
@matrixbot
tkrizek: Could you do a quick check that you're indeed sending the queries to kresd? E.g. configure it to REFUSE all queries and verify your script receives REFUSED rcodes: policy.add(policy.all(policy.REFUSE))
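(Editorial note: the suggested check, written out as a temporary config fragment; remove it again once the test is done:)

```lua
-- Temporarily refuse all queries: the test script should then see
-- REFUSED rcodes if it really talks to this kresd instance.
policy.add(policy.all(policy.REFUSE))
```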
Vladimír Čunát
@vcunat
I usually debug such stuff in an interactive session in verbose mode. That way I can see logs from any queries coupled with a CLI allowing me to inspect the internals like stats.frequent().
(you get the session by simply running kresd -v ... manually in terminal)
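(Editorial note: a sketch of the interactive workflow being described; the terminal running kresd -v doubles as a Lua prompt, so calls like these can be typed there directly:)

```lua
-- typed at the interactive kresd prompt (after running `kresd -v` in a terminal)
verbose(true)     -- toggle verbose logging at runtime
stats.frequent()  -- inspect the frequent-queries table
stats.list()      -- dump all statistics counters
```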
titouwan
@titouwan
thanks, I'll try that
Ed
@ookangzheng
How do I tell knot-resolver not to return IPv6 link-local addresses when a domain has no IPv6 records?
example: dig githubstatus.com AAAA
will return:
;; ANSWER SECTION:
githubstatus.com.    900    IN    AAAA    fe80::21b:aabb:b9c7:6c99
githubstatus.com.    900    IN    AAAA    fe80::21b:aabb:b9c7:6d99
githubstatus.com.    900    IN    AAAA    fe80::21b:aabb:b9c7:6e99
githubstatus.com.    900    IN    AAAA    fe80::21b:aabb:b9c7:6f99

;; AUTHORITY SECTION:
githubstatus.com.    900    IN    SOA    ns-1330.awsdns-38.org. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
Vladimír Čunát
@vcunat
Eh, who would put fe80 addresses into DNS? (I can't see such nonsense records from my point of view.)
matrixbot
@matrixbot
tkrizek What's your configuration? I see NOERROR with 0 answers, not any IPv6 local IPs
Vladimír Čunát
@vcunat
Still, our rebinding module does filter the fe80 prefix...
(it's just not enabled by default)
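(Editorial note: per the module documentation, enabling it is a single config line; it has to be loaded before the iterator:)

```lua
-- filter private/link-local addresses (incl. fe80::/10) out of answers
modules.load('rebinding < iterate')
```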
Ed
@ookangzheng
maybe it's my fault? a misconfig?
It's actually my fault, I put this into my config:
modules = {
        'policy',
        'stats',
        'http',
        'hints',
        'serve_stale < cache',
        'workarounds < iterate',
        dns64 = 'fe80::21b:77ff:0:0',
}
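(Editorial note: for a deliberate DNS64 setup, the usual choice is the RFC 6052 well-known prefix rather than a link-local one; a minimal sketch:)

```lua
modules = {
        -- DNS64 with the well-known prefix 64:ff9b::/96 (RFC 6052)
        dns64 = '64:ff9b::',
}
```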
Ahmed Mafaz
@ahmedmafaz
Hello, how do I force SafeSearch using knot-resolver 5.1.3?
Vladimír Čunát
@vcunat
@ahmedmafaz: I don't know off the top of my head, but I believe the openwrt adblock script implements it for (recent versions of) knot-resolver as well: https://github.com/openwrt/packages/blob/master/net/adblock/files/adblock.sh
Ahmed Mafaz
@ahmedmafaz
Checked the documentation and this seems to work: Added to kresd.conf
policy.add(
    policy.suffix(
        policy.ANSWER(
            { [kres.type.A] = { rdata=kres.str2ip('216.239.38.120'), ttl=300 } }
        ), { todname('google.com') }))
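(Editorial note: IPv6 clients would likely need a matching AAAA override as well; the address below is the published forcesafesearch.google.com AAAA record at the time of writing, verify it before relying on it:)

```lua
-- companion rule to the A override above, for clients resolving AAAA
policy.add(
    policy.suffix(
        policy.ANSWER(
            { [kres.type.AAAA] = { rdata=kres.str2ip('2001:4860:4802:32::78'), ttl=300 } }
        ), { todname('google.com') }))
```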
Petr Špaček
@pspacek
Something like that.
Robert Šefr
@robcza

(kresd 5.1.3) We use the following configuration line when we want to bind to all available IP addresses, and it works very well:
for name, iface in pairs(net.interfaces()) do pcall(net.listen, {iface['addr'], 53 }) end

Usually the port binding is immediate, but on one particular instance we see a huge delay: it takes almost a minute before the kresd process(es) bind to the ports (as observed through netstat).
Does anyone have an idea what the root cause of such behavior could be?

Vladimír Čunát
@vcunat
@robcza: and in the meantime those kresd processes still won't answer anything?
Robert Šefr
@robcza
@vcunat no, they basically don't even log anything - there is silence for about 40-50s and then everything just goes through, it binds to the ports and starts working as usual
Vladimír Čunát
@vcunat
Right, apparently some operation that blocks for a long time gets executed.
Maybe it will be easiest to debug by sending a signal that causes a coredump. Then we can look at the backtrace from the moment it's stuck and hopefully determine the probable cause more easily.
sudo pkill -SIGABRT kresd, perhaps
Robert Šefr
@robcza
sounds good, will try
Vladimír Čunát
@vcunat
Though I don't know what your coredump settings are. A standard coredumpctl setup makes this rather easy.
Robert Šefr
@robcza
(posted a backtrace)
Vladimír Čunát
@vcunat
Yes, the open() syscall on data.mdb, apparently.
I can't see a reason why it might take such a long time.
Robert Šefr
@robcza
Thanks a lot. We'll investigate; it will be something specific to that environment. If we find something helpful, we'll report back.
Robert Šefr
@robcza
it seems the ramdisk is not actually a ramdisk, and the cache is set to more than 3 GB - in total it takes a lot of time to allocate the mdb file
Vladimír Čunát
@vcunat
In recent versions we use posix_fallocate() to force allocation of the space. For example, in case of tmpfs that means those 3G will always "exist as reserved", either in RAM or in swap.
(Otherwise out-of-space could happen in really irrecoverable moments.)
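(Editorial note: a minimal sketch of the cache sizing being discussed; the path is the usual default, adjust size and location to your setup:)

```lua
-- size the cache to what actually fits in RAM (plus swap) on tmpfs;
-- recent kresd reserves the whole size up front via posix_fallocate()
cache.open(3 * GB, 'lmdb:///var/cache/knot-resolver')
```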
Andreas Oberritter
@mtdcr
Hi! I'm running knot-resolver 5.1.3 and I just stumbled upon a problem. I have a host called "cloud", which has an IPv4 entry in a file loaded with hints.add_hosts(). Apparently a TLD called "cloud" also exists, and with my entry in the hosts file, DNSSEC validation fails for all .cloud domains.
Is this something I should expect? With the recent flood of gTLDs, I guess this may become a frequent issue.
Vladimír Čunát
@vcunat
Yes, that's why you should be careful about squatting on a namespace that belongs to someone else. Especially the "very nice" names are prone to collisions.
In your case you can probably avoid the worst by adding a simple config line:
hints.use_nodata(false)
But I'd rather recommend changing the naming.
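(Editorial note: a sketch of the suggested renaming; home.arpa is reserved for local use by RFC 8375, and the name and address here are made up:)

```lua
-- use a name under a locally-reserved domain instead of the bare
-- hostname "cloud", which collides with the .cloud TLD
hints['cloud.home.arpa'] = '192.168.1.10'
```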
Andreas Oberritter
@mtdcr
Alright, thanks! I guess I'll remove plain host names from the file. It was just an old habit. Maybe it would be good to show a warning if somebody accidentally overrides a known TLD.
beckhamaaa
@beckhamaaa_gitlab
Are there any DGA (Domain Generation Algorithm) or C&C filtering modules in kresd?
@vcunat
Vladimír Čunát
@vcunat

No. There's nothing really specific to fighting malware.

By the way, I believe that names generated by modern DGAs are not recognizable (without knowledge of their private crypto secret), so there's no simple way of fighting them... it's more of a research topic.

beckhamaaa
@beckhamaaa_gitlab
ok, thanks a lot !
waclaw66
@waclaw66
Hi, I'd like to ask what could cause the message kresd[14932]: DNSSEC validation failure fedoraproject.org. DNSKEY (kresd 5.1.3). It started happening for lots of domains after upgrading to Fedora 33. nslookup fedoraproject.org returns SERVFAIL, while querying 1.1.1.1 or any other address directly returns the addresses successfully.
Vladimír Čunát
@vcunat
F33 turned on systemd-resolved, so maybe it's related to that? Overall this doesn't say much, so I'd go for verbose logs; the easiest way is to enable them temporarily with verbose(true) in the configuration.
waclaw66
@waclaw66
I disabled systemd-resolved and enabled verbose; that's where the message comes from. I suspect it may be related to SSL, since various SSL errors pop up even during dnf update.
Vladimír Čunát
@vcunat
And is the system time correct?