CHazz (Difides)
@CHazz
Hi, I'm new to Knot Resolver :) and I would like to ask some questions:
a) Is it possible to get allowed/disallowed subnets from a file list?
b) Is it possible to use something like include "subconfig.conf"; in the config?
Thank you for the perfect work. I have worked with almost all DNS resolvers/servers and Knot is the best :) Thanks for any reply.
3 replies
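Both are doable because the kresd configuration file is plain Lua. A minimal sketch, assuming hypothetical file paths and the view module for per-subnet rules:

    -- b) a sub-config is just another Lua file, so a plain Lua include works:
    dofile('/etc/knot-resolver/subconfig.conf')

    -- a) read subnets from a one-prefix-per-line file (hypothetical path)
    -- and allow them via the view module:
    modules.load('view')
    for subnet in io.lines('/etc/knot-resolver/allowed-subnets.txt') do
        view:addr(subnet, policy.all(policy.PASS))
    end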
Tom Koch
@tomck

Hi, I've used Knot Resolver for a while, but today we hit an interesting issue which I was not able to resolve: cdc.gov worked but www.cdc.gov gave NXDOMAIN. I even ran clear-cache() on the kresd command line, which seemed to work (to clear), but the same issue persisted. I was using 1.1.1.1 as the upstream resolver in /etc/resolv.conf and tried 8.8.8.8 with no success. I switched off Knot and set up BIND (which we had used previously), and resolving www.cdc.gov worked there. I would really rather use Knot just because it's more "modern" than BIND, but I can't argue with results (www.cdc.gov works).

DIG comparison: https://pastebin.com/gSy8pf1h

4 replies
Jörg Thalheim
@Mic92
@vcunat is it actually on purpose that knot-resolver only does HTTP/2 for DoH? I just noticed that using telegraf for monitoring breaks because of it.
2 replies
Jörg Thalheim
@Mic92
I am a lot happier with my setup since I switched to ACME dns-01 validation: https://github.com/Mic92/dotfiles/blob/b3913d8f8d399625054e662fa4c7fba539ad4ead/nixos/eve/modules/kresd.nix#L39 Now I can get certificates for kresd without having to bother with nginx running on the server.
3 replies
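For context, pointing kresd at such a certificate is a one-liner; a sketch with hypothetical paths:

    -- serve DoT/DoH using the ACME-issued certificate (hypothetical paths):
    net.tls('/var/lib/acme/dns.example.org/fullchain.pem',
            '/var/lib/acme/dns.example.org/key.pem')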
CHazz (Difides)
@CHazz
Hi, I have a small problem, maybe a bug. I would like to use the Graphite statistics. On restart they remain unchanged, which is fine, but on stop/start they get wiped. I wanted to work around this with stats.set(key, val), but the command doesn't seem to work at all; it only answers nil. Example:
> stats.set("answer.total", 12121)
nil
And stats.get still shows the old data.
Any solution? Thanks :)
5 replies
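For reference, a minimal sketch of the Graphite/stats plumbing involved (the Graphite host is hypothetical):

    -- send metrics to Graphite (hypothetical host):
    modules = {
        graphite = {
            host = '192.0.2.10',
        }
    }

    -- inspect counters from the kresd CLI:
    stats.list()               -- all collected metrics
    stats.get('answer.total')  -- a single metric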
Tom Koch
@tomck
There seems to be a typo here: the code does not work unless there's a comma after seconds. https://knot-resolver.readthedocs.io/en/stable/modules-prefill.html#mod-prefill
1 reply
beckhamaaa
@beckhamaaa
What is the meaning of a "zone cut" in kresd?
1 reply
Jessy
@jvttr_gitlab
Hi, sorry if this is not the best place for this. There is a typo in the docs ("anusers" should be "answers"):
"Accessing domains which are not available using recursion (e.g. if internal company servers return different anusers than public ones)."
here: https://knot-resolver.readthedocs.io/en/stable/config-network-forwarding.html
1 reply
B. Cook
@bcookatpcsd
Hey all, newish to Docker. I thought I'd been doing pretty well until cznic/knot-resolver: docker run -Pti seems to be the only way it wants to run, but then I'm dropped into the interactive prompt. I tried crafting a docker run that only exposes port 53, but it still only seems to run via -Pti.
Pavel Valach
@PaulosV
Hello everyone, I just want to confirm some Knot Resolver behavior I am seeing, and check that it's correct.
When I'm using forwarding (https://knot-resolver.readthedocs.io/en/stable/config-network-forwarding.html) for a certain suffix (e.g. subsub.dom.ain.com) to server 1.2.3.4, then I only expect the queries for that suffix to go through server 1.2.3.4.
But is it correct that if the DS records need to be fetched for the subdomains in between, let's say com, then for ain.com, then for dom.ain.com, that it also forwards those DS queries to the 1.2.3.4 server?
Pavel Valach
@PaulosV
Because I think this behavior currently breaks forwarding to purely authoritative nameservers which only serve some part of the tree, specifically that lower subdomain subsub.dom.ain.com. As far as I understand, DNSSEC wouldn't have to be broken, but Knot Resolver forwards these key requests to 1.2.3.4 as well, and if the server refuses to provide them (because it does not serve ain.com, for example), the query fails.
In other words, does policy.FORWARD expect a resolver?
Vladimír Čunát
@vcunat
@PaulosV: yes, policy.FORWARD certainly assumes a resolver.
And what's more, in our current policy framework, you don't forward a subtree. You forward processing of the whole client request, based on which subtree it belongs to (or based on other conditions). Hence the DS-for-com queries in your example.
paulos
@paulos:im.su.cvut.cz
[m]
@vcunat: Ah, okay! Thanks for clarifying; I thought that was the case but wasn't sure. I've seen this happening with some of our internal subtrees, where I've been forwarding using policy.FORWARD, and from time to time it needed to fetch the DS for one of the subdomains, probably because the cache entry had expired, and all the requests suddenly started failing. I'm using STUB now, but I'm not too happy about it.
Vladimír Čunát
@vcunat
Technically, STUB is also meant for resolvers, but it should work in more cases than FORWARD.
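Concretely, the two actions being compared look like this (a sketch; the address and name are the hypothetical ones from the question above):

    -- FORWARD assumes the target is a full (validating) resolver:
    policy.add(policy.suffix(policy.FORWARD('1.2.3.4'),
                             policy.todnames({'subsub.dom.ain.com'})))

    -- STUB makes fewer assumptions and tends to work better with
    -- purely authoritative targets:
    policy.add(policy.suffix(policy.STUB('1.2.3.4'),
                             policy.todnames({'subsub.dom.ain.com'})))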
paulos
@paulos:im.su.cvut.cz
[m]
Gotcha. I'm now thinking about this, and I might be able to achieve the desired result using views and ACLs on both the authoritative server and the resolver: the authoritative server would not expose the internal subtree to clients outside our network, and the resolver would refuse to resolve queries with that suffix when the source subnet is not appropriate.
Robert Šefr
@robcza
Is there any way to get frequent slow queries from the stats module? It would be really helpful during a slow-query spike.
Vladimír Čunát
@vcunat
@robcza: no, such information is currently not collected.
git-ed
@ookangzheng
Is there any way to totally disable the cache? I already have caching on the upstream.
I'm using policy.add(policy.all(policy.FORWARD({'::1@5353', '127.0.0.1@5353'})))
Robert Šefr
@robcza
@ookangzheng just add the NO_CACHE flag like this (to all queries or to some specific subset):
policy.add(policy.all(policy.FLAGS({'NO_CACHE'})))
Vladimír Čunát
@vcunat

Yes. Note that you need to put that rule before the FORWARD (which is a non-chain action).

You also want to make the cache small (cache.size = 1*MB); if it's in tmpfs, it always consumes its full size in RAM, though it's swappable. And I'd consider just leaving the cache small rather than disabling it.

Caching is utilized even within a single client's request, though off the top of my head I can't estimate how often that happens.
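Putting those points together, a sketch using the upstream addresses from the question above:

    -- chain rules must come before the non-chain FORWARD action:
    policy.add(policy.all(policy.FLAGS({'NO_CACHE'})))
    policy.add(policy.all(policy.FORWARD({'::1@5353', '127.0.0.1@5353'})))

    -- keep the cache itself small rather than removing it entirely:
    cache.size = 1 * MB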
Kristian Klausen
@klausenbusk
Knot Resolver only supports HTTP/2 for DoH, which makes it impossible to run behind nginx (nginx doesn't support HTTP/2 upstreams). Is running a DoH server (e.g. Knot Resolver or Unbound) behind a reverse proxy generally a bad idea?
4 replies
pguizeline
@pguizeline

Hi! Sorry to bother you guys again, but I'm doing a new deployment of Knot Resolver and I'm getting a strange error. I'm using:

    policy.add(policy.slice(
        policy.slice_randomize_psl(),
        policy.TLS_FORWARD({
            {'91.239.100.100', hostname='anycast.censurfridns.dk'},
        }),
        policy.TLS_FORWARD({
            {'198.251.90.91', hostname='uncensored.any.dns.nixnet.xyz'},
        }),
        policy.TLS_FORWARD({
            {'193.17.47.1', hostname='odvr.nic.cz'},
            {'185.43.135.1', hostname='odvr.nic.cz'},
        }),
        policy.TLS_FORWARD({
            {'95.216.24.230', hostname='fi.dot.dns.snopyta.org'},
        }),
        policy.TLS_FORWARD({
            {'45.90.57.121', hostname='dot-ch.blahdns.com'},
            {'192.53.175.149', hostname='dot-sg.blahdns.com'},
            {'78.46.244.143', hostname='dot-de.blahdns.com'},
            {'95.216.212.177', hostname='dot-fi.blahdns.com'},
        }),
        policy.TLS_FORWARD({
            {'116.202.176.26', hostname='dot.libredns.gr'},
        })
    ))

And I'm getting SERVFAIL with every request. I've already checked my ca-certificates. Normal resolving via the root servers works without a problem. Thanks for any advice!

4 replies
Robert Šefr
@robcza
Is there any way to allow ANY queries through the resolver, though I believe I was told it goes against the RFC?
I have tried to use the policy action PASS based on the query type, but it did not go through.
6 replies
beckhamaaa
@beckhamaaa
How can I insert rdata into the kresd cache when the type is AAAA? For example:
knot_rrset_add_rdata(RRS[i], aaaa_rdata, 16, &worker->pkt_pool);
Thanks a lot.
1 reply
git-ed
@ookangzheng
I got answer: NXDOMAIN, but I can resolve matoken.eth via 127.0.0.1@5444; it seems like something has changed.
Knot Resolver config:
-- eth
ethTrees = policy.todnames({'eth'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), ethTrees))
policy.add(policy.suffix(policy.STUB({'127.0.0.1@5444'}), ethTrees))
dig matoken.eth @::1 -p 5554

; <<>> DiG 9.16.12-Debian <<>> matoken.eth @::1 -p 5554
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14681
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;matoken.eth.            IN    A

;; ANSWER SECTION:
matoken.eth.        283    IN    A    104.17.96.13
matoken.eth.        283    IN    A    104.17.64.14

;; Query time: 499 msec
;; SERVER: ::1#5554(::1)
;; WHEN: Wed Mar 03 03:08:10 UTC 2021
;; MSG SIZE  rcvd: 72
1 reply
Micah
@micah_gitlab

I've got a zone that is outside the normal tree (used for RBL queries). I've set it up to be queried via my kresd config like this:

extraTrees = policy.todnames({'dnsbl'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}),   extraTrees))
policy.add(policy.suffix(policy.STUB({'10.0.1.33'}), extraTrees))

The resolver at 10.0.1.33 is an "rbldnsd" server; it's not kresd. If I query 10.0.1.33 directly for a result, I get an answer:

$ dig @10.0.1.33 2.0.0.127.zen.dnsbl +short
127.0.0.2
127.0.0.10
127.0.0.4

But if I try to query kresd for the same record, I get SERVFAIL.

41 replies
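One plausible culprit, offered only as a guess: the private 'dnsbl' pseudo-TLD does not exist in the public root, so DNSSEC validation can prove it NXDOMAIN and reject the stubbed answer as bogus. Marking that subtree as insecure would look like:

    -- disable DNSSEC validation under the private pseudo-TLD:
    trust_anchors.set_insecure({'dnsbl'})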
Roman Kuzmitskii
@damex
Hi, what could be the 'recommended' way to bundle Knot Resolver with Knot DNS as a standalone solution? I am looking at keepalived over 3 separate nodes that have Knot DNS on localhost and Knot Resolver on VIPs. Knot Resolver would forward local-zone requests to localhost, and the rest would go to the root servers.
The whole configuration is expected to be static and provisioned through Ansible (with thorough testing on the way).
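A minimal sketch of the forwarding piece, assuming Knot DNS listens on 127.0.0.1@5300 and a hypothetical internal zone:

    -- send the local zone to the authoritative Knot DNS on localhost:
    localZones = policy.todnames({'example.internal'})
    policy.add(policy.suffix(policy.STUB({'127.0.0.1@5300'}), localZones))
    -- everything else iterates from the root servers as usual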
heximcz
@heximcz

Hi, kresd is spamming syslog with this:

Mar 20 11:14:47 rdns-k1 kresd[31968]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:47 rdns-k1 kresd[31968]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:47 rdns-k1 kresd[31970]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:47 rdns-k1 kresd[31970]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:47 rdns-k1 kresd[31968]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:47 rdns-k1 kresd[31970]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:47 rdns-k1 kresd[31968]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:48 rdns-k1 kresd[31970]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:48 rdns-k1 kresd[31968]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:48 rdns-k1 kresd[31970]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:48 rdns-k1 kresd[31968]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:48 rdns-k1 kresd[31970]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:48 rdns-k1 kresd[31968]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:48 rdns-k1 kresd[31970]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:48 rdns-k1 kresd[31968]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY
Mar 20 11:14:50 rdns-k1 kresd[31970]: DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY

I have the latest version (5.3.0) of Knot Resolver. Is this an error or intended behavior? Thank you.

6 replies
Tom
@tom:dragar.de
[m]

Hi, I'm running two Knot Resolvers with version 5.3.0. Currently the kres-cache-gc.service fails all the time:

root@resolver2:~# journalctl -fu kres-cache-gc.service -n 100
Mar 20 21:34:40 resolver1 systemd[1]: Started Knot Resolver Garbage Collector daemon.
Mar 20 21:34:40 resolver1 kres-cache-gc[20695]: Knot Resolver Cache Garbage Collector, version 5.3.0
Mar 20 21:34:40 resolver1 kres-cache-gc[20695]: Usage: 82.67%
Mar 20 21:34:41 resolver1 kres-cache-gc[20695]: Cache analyzed in 738 msecs, 3012224 records, limit category is 59.
Mar 20 21:34:41 resolver1 kres-cache-gc[20695]: 704902 records to be deleted using 31.72 MBytes of temporary memory, 0 records skipped due to memory limit.
Mar 20 21:34:41 resolver1 kres-cache-gc[20695]: kres-cache-gc: ../utils/cache_gc/kr_cache_gc.c:280: kr_cache_gc: Assertion `entry_type != NULL' failed.
Mar 20 21:34:41 resolver1 systemd[1]: kres-cache-gc.service: Main process exited, code=killed, status=6/ABRT
Mar 20 21:34:41 resolver1 systemd[1]: kres-cache-gc.service: Failed with result 'signal'.
Mar 20 21:34:41 resolver1 systemd[1]: kres-cache-gc.service: Consumed 1.412s CPU time.
Mar 20 21:35:11 resolver1 systemd[1]: kres-cache-gc.service: Scheduled restart job, restart counter is at 9.
Mar 20 21:35:11 resolver1 systemd[1]: Stopped Knot Resolver Garbage Collector daemon.
Mar 20 21:35:11 resolver1 systemd[1]: kres-cache-gc.service: Consumed 1.412s CPU time.
Mar 20 21:35:11 resolver1 systemd[1]: Started Knot Resolver Garbage Collector daemon.
Mar 20 21:35:11 resolver1 kres-cache-gc[20697]: Knot Resolver Cache Garbage Collector, version 5.3.0
Mar 20 21:35:11 resolver1 kres-cache-gc[20697]: Usage: 82.67%
Mar 20 21:35:12 resolver1 kres-cache-gc[20697]: Cache analyzed in 814 msecs, 3012237 records, limit category is 59.
Mar 20 21:35:13 resolver1 kres-cache-gc[20697]: 704717 records to be deleted using 31.71 MBytes of temporary memory, 0 records skipped due to memory limit.
Mar 20 21:35:13 resolver1 kres-cache-gc[20697]: kres-cache-gc: ../utils/cache_gc/kr_cache_gc.c:280: kr_cache_gc: Assertion `entry_type != NULL' failed.
Mar 20 21:35:13 resolver1 systemd[1]: kres-cache-gc.service: Main process exited, code=killed, status=6/ABRT
Mar 20 21:35:13 resolver1 systemd[1]: kres-cache-gc.service: Failed with result 'signal'.
Mar 20 21:35:13 resolver1 systemd[1]: kres-cache-gc.service: Consumed 1.535s CPU time.
Mar 20 21:35:43 resolver1 systemd[1]: kres-cache-gc.service: Scheduled restart job, restart counter is at 10.
Mar 20 21:35:43 resolver1 systemd[1]: Stopped Knot Resolver Garbage Collector daemon.
Mar 20 21:35:43 resolver1 systemd[1]: kres-cache-gc.service: Consumed 1.535s CPU time.
Mar 20 21:35:43 resolver1 systemd[1]: kres-cache-gc.service: Start request repeated too quickly.
Mar 20 21:35:43 resolver1 systemd[1]: kres-cache-gc.service: Failed with result 'signal'.
Mar 20 21:35:43 resolver1 systemd[1]: Failed to start Knot Resolver Garbage Collector daemon.

I don't think it's system memory that is full. In the settings the cache is set to cache.size = 1 * GB. Could somebody give me a hint as to what's wrong here?

8 replies
Tom
@tom:dragar.de
[m]
Ah, thank you. That's good to know. Thanks a lot 👍
Tom
@tom:dragar.de
[m]
Yes, except for a lot of messages in our monitoring, I haven't noticed any impact yet.
Siva Kesava R Kakarla
@SivaKesava1
Hi, I have installed Knot Resolver in Ubuntu under WSL using the instructions from here: https://knot-resolver.readthedocs.io/en/stable/quickstart-install.html#installation.
When I run kresd I get the error: kresd: symbol lookup error: kresd: undefined symbol: knot_eth_name_from_addr.
With the older version (installed from the Ubuntu repository before adding the upstream one), kresd worked fine.
11 replies
Buffrr
@buffrr
Hey, I'm getting bogus/SERVFAIL for some NXDOMAIN proofs. It seems to work with Unbound, BIND & DNSViz. It works with 1.1.1.1 as well: kdig @1.1.1.1 test.ok.rdns.dev a +dnssec. Knot Resolver says "bad NXDOMAIN proof"; here's the output: https://debug.knot-resolver.cz/query.py?qname=test.ok.rdns.dev&qtype=A
10 replies
Note that ok.rdns.dev is also NXDOMAIN and Knot accepts that as secure, but for some reason it's not covering test.ok.rdns.dev.
Roman Kuzmitskii
@damex
Hello, is there a way to export Knot Resolver metrics about upstreams through the Prometheus endpoint? https://knot-resolver.readthedocs.io/en/stable/modules-stats.html They can be accessed via stats.upstreams(), but when you scrape the Prometheus endpoint, no info about upstreams is provided.
6 replies
Robert Šefr
@robcza
Thank you, Knot Resolver team, for the 5.3.0 and 5.3.1 releases. Could I ask for a more high-level summary of the "nameserver selection algorithm" changes? What can be expected at the admin and user level?
2 replies
Vladimír Čunát
@vcunat
(continuation from /knot) @CoolCold: overriding authoritative servers isn't really well supported in kresd yet (except for the special root case), though it has been in long-term plans for some time. The closest hack I know is described in comments on https://gitlab.nic.cz/knot/knot-resolver/-/merge_requests/889
11 replies
Roman Ovchinnikov
@CoolCold
@vcunat checking, thanks
jakkarta0990
@jakkarta0990

Hi,

is there a reference for all the metrics exposed by Knot Resolver?

For example, those for the "cache" and "worker" counters are not listed here:

https://knot-resolver.readthedocs.io/en/stable/modules-stats.html

Thanks

3 replies
Robert Šefr
@robcza

We have encountered a possible issue with replacing part of the DNS tree: https://knot-resolver.readthedocs.io/en/stable/modules-policy.html#replacing-part-of-the-dns-tree
I would like to confirm whether this is expected behavior, and what could be done to change it to what the client actually expects.

Example configuration:

extraTrees = policy.todnames({'development.company.com'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}),   extraTrees))
policy.add(policy.suffix(policy.STUB({'10.0.0.1'}), extraTrees))

There are three different records set up like this:

A record (internal): database.development.company.com -> 10.0.0.5
A record (external): database.development.company.com -> 1.2.3.4
CNAME record: myrecord.company.com -> database.development.company.com

When the record is resolved directly, it returns the expected response. Whenever resolution goes via the CNAME, it always ends up at the external A record.

dig @10.0.0.1 database.development.company.com -> 10.0.0.5
dig @10.0.0.1 myrecord.company.com -> 1.2.3.4
dig @8.8.8.8 myrecord.company.com -> 1.2.3.4

Expected behavior would be (the CNAME resolution would be done based on the replaced tree):

dig @10.0.0.1 myrecord.company.com -> 10.0.0.5
3 replies
Marcos de Oliveira
@markkrj
Hello guys, I would like to know if somebody could help with an unrecommended setup 😅️
Someone in the past thought it was a good idea to have the same external and internal domain, so we always need to maintain duplicate records in two different DNS servers, both authoritative. We're going to deploy Knot Resolver to separate the recursive and authoritative servers for our internal infrastructure. Since I know Knot Resolver is very flexible with its Lua API, I was wondering whether it would be possible to query another authoritative server if the first returns NXDOMAIN, or to query multiple servers at a time and return the first non-NXDOMAIN response. Is that a possibility?
1 reply
Siva Kesava R Kakarla
@SivaKesava1
Hello,
How do I disable QNAME minimization in Knot Resolver?
I am trying to point Knot Resolver at certain custom nameservers by changing the root hints file. Those nameservers are authoritative for example.com, example1.com, and so on. For any query like foo.example.com, Knot Resolver starts querying those nameservers from . instead of asking for the whole query name.
6 replies
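For the record, kresd can switch this off either globally or per subtree; a sketch:

    -- disable QNAME minimization globally:
    option('NO_MINIMIZE', true)

    -- or only for a subtree, via a policy flag:
    policy.add(policy.suffix(policy.FLAGS('NO_MINIMIZE'),
                             policy.todnames({'example.com'})))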
jakkarta0990
@jakkarta0990

Hi all,

Is there a way to get the "median" answer latency of the resolver?

I'm trying to monitor it with a Graphite+Grafana setup.

Thanks in advance

8 replies
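There is no ready-made median metric, but the stats module exposes latency buckets; a sketch that estimates the median from them, assuming the documented answer.*ms bucket keys:

    -- walk the latency histogram until half of all answers are covered:
    local buckets = {
        {'answer.1ms', 1}, {'answer.10ms', 10}, {'answer.50ms', 50},
        {'answer.100ms', 100}, {'answer.250ms', 250}, {'answer.500ms', 500},
        {'answer.1000ms', 1000}, {'answer.1500ms', 1500},
    }
    local total, acc = stats.get('answer.total'), 0
    for _, b in ipairs(buckets) do
        acc = acc + stats.get(b[1])
        if acc >= total / 2 then
            print('median answer latency <= ' .. b[2] .. ' ms')
            break
        end
    end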