Hi, I run knot-resolver (v4.2.0) on a Raspberry Pi. My SPR (source port randomness) score is very bad; I checked the available entropy and it is up to ~2.5k. https://www.dns-oarc.net/oarc/services/dnsentropy
```
Number of samples: 47
Unique ports: 47
Range: 33601 - 33790
Modified Standard Deviation: 57
Bits of Randomness: 8
```

Only 8 bits of randomness????
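For reference, the kind of summary the entropy test prints can be approximated from a list of observed source ports. Below is a rough sketch: the port list is made up to mimic the narrow range reported above, and `log2(unique ports)` is only a crude lower bound on the randomness, not DNS-OARC's actual formula:

```python
import math
import statistics

# Hypothetical sample of 47 source ports confined to a ~190-port window,
# mimicking the narrow range in the report above (not real captured data).
ports = [33601 + (i * 4) % 190 for i in range(47)]

unique = sorted(set(ports))
port_range = (min(unique), max(unique))
stdev = statistics.pstdev(ports)

# Crude lower bound on entropy: log2 of the number of distinct values seen.
# 47 distinct ports can carry at most ~5.5 bits, far below the ~16 bits a
# fully random 16-bit source port could provide.
bits = math.log2(len(unique))

print(f"samples={len(ports)} unique={len(unique)} "
      f"range={port_range} stdev={stdev:.1f} bits>={bits:.1f}")
```

A tight range with a small standard deviation, as in the report above, is exactly what makes cache-poisoning guesses easier, regardless of the precise entropy formula used.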
```
$ kdig @localhost +short porttest.dns-oarc.net TXT -d
;; DEBUG: Querying for owner(porttest.dns-oarc.net.), class(1), type(16), server(localhost), port(53), protocol(UDP)
porttest.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net. "x.x.x.x is POOR: 26 queries in 3.8 seconds from 26 ports with std dev 67"
```
`verbose(true)` logs of the server can be seen at https://gist.github.com/andir/1546e4102288cca798a731ed8ea40411. I did run the test using `kdig -4 +tls
Hi, I would like to understand how knot-resolver managed via systemd handles its sockets, and whether what I found is normal.
Correct me if I'm wrong:
kresd.socket and kresd-tls.socket are enabled by default, listening on localhost.
As soon as a packet is received on one of those sockets, a `kresd@1.service` process is launched.
However, I wonder: is it normal that systemd itself is listening on port 53 at boot, and that, as soon as a packet arrives, the knot-resolver user then listens on port 53 alongside systemd?
```
pi@raspberrypi:~ $ sudo lsof -i :53
COMMAND PID USER FD   TYPE DEVICE SIZE/OFF NODE NAME
systemd   1 root 46u  IPv6  10830      0t0  UDP localhost:domain
systemd   1 root 47u  IPv6  10831      0t0  TCP localhost:domain (LISTEN)
systemd   1 root 48u  IPv4  10832      0t0  UDP localhost:domain
systemd   1 root 49u  IPv4  10833      0t0  TCP localhost:domain (LISTEN)
```
After a query:
```
pi@raspberrypi:~ $ sudo lsof -i :53
COMMAND  PID USER          FD   TYPE DEVICE SIZE/OFF NODE NAME
systemd    1 root          46u  IPv6  10830      0t0  UDP localhost:domain
systemd    1 root          47u  IPv6  10831      0t0  TCP localhost:domain (LISTEN)
systemd    1 root          48u  IPv4  10832      0t0  UDP localhost:domain
systemd    1 root          49u  IPv4  10833      0t0  TCP localhost:domain (LISTEN)
kresd   3798 knot-resolver  3u  IPv6  10830      0t0  UDP localhost:domain
kresd   3798 knot-resolver  4u  IPv6  10831      0t0  TCP localhost:domain (LISTEN)
kresd   3798 knot-resolver  5u  IPv4  10832      0t0  UDP localhost:domain
kresd   3798 knot-resolver  6u  IPv4  10833      0t0  TCP localhost:domain (LISTEN)
```
Note the matching DEVICE numbers in the output: systemd and kresd hold FDs referring to the same file description. (I tried to listen on the same address:port thanks to SO_REUSEPORT, i.e. getting an FD of a different file description, and it showed a different number.)
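To illustrate the difference, here is a small Python sketch (Linux-only, since it relies on `SO_REUSEPORT`): two sockets independently bound to the same address and port each get their own file description, which is why lsof would show distinct DEVICE numbers for them, unlike the shared-description case above:

```python
import socket

def make_udp_listener(addr="127.0.0.1", port=0):
    """Bind a UDP socket with SO_REUSEPORT so another socket can share the port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind((addr, port))
    return s

a = make_udp_listener()            # port 0: kernel picks an ephemeral port
port = a.getsockname()[1]
b = make_udp_listener(port=port)   # second bind succeeds only via SO_REUSEPORT

same_addr = a.getsockname() == b.getsockname()  # same address:port
diff_fd = a.fileno() != b.fileno()              # but separate descriptors,
                                                # hence separate file descriptions
print(same_addr, diff_fd)
a.close()
b.close()
```

With socket activation, by contrast, systemd passes the very same file description to kresd, so no second `bind()` happens at all.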
I have a few questions :-)
Question 1: Regarding kresd via systemd. If I understand correctly: at boot, systemd listens first and, as soon as it receives a query, hands it over to kresd; then kresd stays listening.
After that, both systemd and kresd are listening on the same socket, so I wonder who handles the queries. Is it like before, first systemd, which passes them on to kresd? Or only kresd?
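For background, the systemd hand-off follows the `sd_listen_fds(3)` socket-activation protocol: systemd binds the socket at boot and, on first activity, starts the service and passes the already-bound socket as an inherited file descriptor starting at fd 3. A minimal Python sketch of the receiving side (not kresd's actual code, just the documented protocol) looks like this:

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first fd passed by systemd, per sd_listen_fds(3)

def inherited_sockets():
    """Return sockets handed over by systemd socket activation, if any."""
    # systemd sets LISTEN_PID to the service's PID so inherited fds
    # are never consumed by the wrong process.
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    n = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(n)]

# Under socket activation the daemon never calls bind() itself: it reuses
# the socket systemd created, which is why lsof shows identical DEVICE
# numbers for systemd and kresd. systemd does not "copy" queries; it only
# wakes the service, and from then on the kernel delivers packets to
# whoever reads the shared socket (i.e. kresd).
socks = inherited_sockets()
print(len(socks))  # 0 when run outside systemd
```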
Question 2: Is there a benefit to managing kresd with systemd instead of just running `sudo kresd`? This is for personal DNS resolver usage, at home, on a Raspberry Pi.
Question 3: Out of curiosity, is it thanks to SO_REUSEPORT that it is possible to share the same socket across multiple instances of kresd? i.e.
`systemctl enable --now kresd@1.service kresd@2.service kresd@3.service`
Recently stumbled upon an interesting random-subdomain DDoS attack pattern:

```
mx2.mx2.mx2.mx2.mx2.mx2.mx1.mtasts.mx2.mx1.mx1.mx1.mx1.mx2.mx1.webdisk.weaverpublishing.com.
mx2.mx2.mx2.mx2.mx2.mx2.mx1.mx2.mx1.mx2.mx2.mtasts.mx2.mx1.mx2.mx2.mx2.weaverpublishing.com.
mx2.mx2.mx2.mx2.mx2.mx2.mx2.mx1.mx2.mx2.mx1.mtasts.mx1.mx2.mx2.mx1.cpanel.weaverpublishing.com.
mx2.mx2.mx2.mx2.mx2.mx2.mx2.mx1.mx2.mx2.mx1.mtasts.mx1.mx2.mx2.mx1.cpanel.weaverpublishing.com.
mx2.mx2.mx2.mx2.mx2.mx1.mx2.mx2.mx2.mx2.mx2.mx1.mx2.mtasts.mx1.mx2.mx2.autodiscover.weaverpublishing.com.
```
Do you think this is actually somehow more troublesome for the authoritative server? And/or for the resolver (though the resolver is not the target of the attack)?
With negative caching, and the domain properly signed, this will be stopped at the resolver anyway, but does it also mean more CPU cycles to make that decision?
weaverpublishing[.]com, in case you are not a fan of scam iPhone contests ;)
The `cache_count` attribute of the Prometheus metrics is supposed to tell me how many entries currently exist in the resolver's cache, right? It seems to be always zero, but looking at it through the control socket shows that `cache.current_size` is a larger number. Is that bytes vs. record count?
`cache.count()` is the number of entries, roughly the number of RRsets.
`cache.stats()` shows the counts of low-level operations; `count` in there is the number of
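In case it helps, these values can also be queried programmatically over the control socket. A hedged Python sketch (the socket path is an assumption based on common packaged installs; adjust it for your setup):

```python
import socket

# Hypothetical control-socket path; packaged kresd instances often expose
# one socket per instance under /run/knot-resolver/control/.
CONTROL_SOCKET = "/run/knot-resolver/control/1"

def kresd_eval(expr, path=CONTROL_SOCKET):
    """Send one expression to kresd's control socket and return the raw reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(expr.encode() + b"\n")
        return s.recv(65536).decode()

# Example usage (requires a running kresd instance):
#   kresd_eval("cache.count()")       -> number of entries (roughly RRsets)
#   kresd_eval("cache.current_size")  -> bytes currently used by the cache
```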