For complex issues please use https://discuss.hashicorp.com/c/consul/, https://github.com/hashicorp/consul/issues or https://groups.google.com/forum/#!forum/consul-tool.
curl still fails:

# curl -vk https://localhost:8501/v1/agent/self --cacert /etc/consul/tls/<hidden>-agent-ca.pem --key /etc/consul/tls/<hidden>.pem --cert /etc/consul/tls/<hidden>-key.pem
* About to connect() to localhost port 8501 (#0)
* Trying ::1...
* Connected to localhost (::1) port 8501 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* unable to load client cert: -8018 (SEC_ERROR_UNKNOWN_PKCS11_ERROR)
* NSS error -8018 (SEC_ERROR_UNKNOWN_PKCS11_ERROR)
* Unknown PKCS #11 error.
* Closing connection 0
curl: (58) unable to load client cert: -8018 (SEC_ERROR_UNKNOWN_PKCS11_ERROR)

# curl -vk https://localhost:8501/v1/agent/self --cacert ./<hidden>-agent-ca.pem --key ./<hidden>-key.pem --cert ./<hidden>.pem
* About to connect() to localhost port 8501 (#0)
* Trying ::1...
* Connected to localhost (::1) port 8501 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* unable to load client key: -8178 (SEC_ERROR_BAD_KEY)
* NSS error -8178 (SEC_ERROR_BAD_KEY)
* Peer's public key is invalid.
* Closing connection 0
curl: (58) unable to load client key: -8178 (SEC_ERROR_BAD_KEY)
I built curl from source with OpenSSL instead of the CentOS default of NSS, and that solved my issue. The question remains: how can I overcome this with the default curl in my CentOS distro that is compiled with NSS, while using the built-in Consul TLS creation tool?

# ./curl/curl-7.67.0/src/curl -sk https://localhost:8501/v1/status/leader --key ./<hidden>-key.pem --cert ./<hidden>.pem | jq
"<hidden>.<hidden>.<hidden>.<hidden>:8300"
Does server.dc1.consul need to have the server prefix? For example, does it have to be server.eu.mydomain.com, or can it be hostname.eu.mydomain.com? The tutorial isn't very informative on this matter (IMHO). What's the Consul internal usage for the word server in this case?
@epifeny I don't know what the Consul folks say about this, but when we generate server certificates, we include a laundry list of SANs (subject alternative names) to make sure that the cert includes all of the possible hostnames that we might use to reach the server, including the IP address and the Consul-based FQDN like consul-1.node.consul.
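If it helps, the built-in tool can bake extra SANs into the server certificate at creation time. A minimal sketch, where the extra hostnames and the IP address are placeholders to adapt:

```
# Sketch: add extra SANs when generating the server cert (hostnames/IP are placeholders)
consul tls cert create -server -dc dc1 \
  -additional-dnsname=consul-1.node.consul \
  -additional-dnsname=server.eu.mydomain.com \
  -additional-ipaddress=10.0.0.11
```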
Yea, I don't think I'm gonna be able to get an answer for this. The docs are lacking detail.
Why does consul tls cert create -server -dc dc1 create a cert that has only server.dc1.consul, localhost, and 127.0.0.1? Why specifically server.dc1.consul, which itself does not even seem to resolve?
DNS:client.dc1.consul, DNS:localhost, IP Address:127.0.0.1
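To see exactly which SANs ended up in a generated cert, something like the following should work; the file name here assumes the tool's default output name, so adjust it to whatever was actually generated:

```
# Sketch: list the SANs in the generated server cert (default output file name assumed)
openssl x509 -in dc1-server-consul-0.pem -noout -text | grep -A1 'Subject Alternative Name'
```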
/v1/catalog/register is expecting the full service definition (api.AgentServiceRegistration), but I couldn't find any API to get it in the first place (in order to modify it). /v1/catalog/service/ also doesn't return all required values.
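For context, a minimal register call against that endpoint looks roughly like this; the node, address, and service values below are placeholders, not anything from the message above:

```
# Sketch: PUT a full service definition to the catalog (all values are placeholders)
curl -s -X PUT http://localhost:8500/v1/catalog/register --data '{
  "Node": "node-1",
  "Address": "10.0.0.10",
  "Service": {
    "ID": "web-1",
    "Service": "web",
    "Port": 8080,
    "Tags": ["v1"]
  }
}'
```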
Our nodes have dnsmasq installed so that they will by default query their own DNS instance. The rest of the servers in a separate VLAN use the standard DNS servers of the environment, which have a conditional DNS forwarder for the .consul domain that forwards to the 5 Consul servers.
Hello! I have a flood of the following nasty warnings on my Consul installation (v1.11.3):
[WARN] agent: Service name will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.: service=sth-with_underlines
Unfortunately, in my case, renaming the service is not feasible. At the same time, my setup does not use the DNS interface at all, so completely disabling DNS would be an appropriate solution, I think. I've tried to set a negative DNS port as suggested here: hashicorp/consul#3135, using the CLI flag "-dns-port -1", but it seems to have no effect.
Could you please advise if there is any way to disable DNS (or solve the warning problem)?
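Not sure why the CLI flag isn't taking effect (it may need the -dns-port=-1 form so the value isn't parsed as a separate flag), but disabling DNS through the agent configuration should also work. A sketch, assuming config files are loaded from /etc/consul.d:

```
# Sketch: disable the agent's DNS server via config (-1 disables the listener)
cat <<'EOF' > /etc/consul.d/disable-dns.hcl
ports {
  dns = -1
}
EOF
# Port changes require an agent restart, not just a reload
systemctl restart consul
```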
Hey all, we're running Consul on Kubernetes. We had to rotate our Kubernetes certificates, and everything came up fine Consul-wise after the restart; however, all of the consul-connect-inject sidecars cannot start due to x509 unknown authority "ca".
I restarted the agents and servers again, but this did nothing, and I'm about to attempt the documented cert rotation process.
Anyone experience this after rotating k8s certs?
The docs suggest you don't need connect { enabled = true } on clients; however, without it specified, Nomad fingerprints the node as attr.consul.connect = false. Is this a Nomad bug or a Consul docs bug?
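One way to confirm what's happening (just a suggestion; the config path is a placeholder) is to add the setting to the client agent's config and then re-check how Nomad fingerprints that node:

```
# Sketch: enable Connect on the Consul client agent, then re-check Nomad's fingerprint
cat <<'EOF' > /etc/consul.d/connect.hcl
connect {
  enabled = true
}
EOF
systemctl restart consul
nomad node status -self -verbose | grep consul.connect
```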
I have an ingress-gateway with a service-router to split the L7 traffic (following the docs for HTTP listener with Path-based Routing). But the envoy instance only ever reports "no healthy upstreams".
Curiously, envoy /clusters shows all the configured upstream clusters (0 on all stats) and /config_dump shows all the routing config looking sane. I'm not 100% clear on what intentions should be set (ingress name -> router or ingress name -> final destination), but I've currently got a wildcard destination and it's having no effect. And even then I'd expect a 403 response there.
The logs clearly show it selecting the configured final-destination cluster (the destination after the service-resolver work is done) and then complaining there are no healthy upstreams. When I look at them in /clusters, I see the correct destination IPs (mesh-gateways) are listed.
I'm at a loss as to why envoy might be considering the clusters to have no healthy upstreams here.
consul intention list does not show missing intentions.
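In case it helps narrow it down, the per-endpoint health flags can be pulled straight from the Envoy admin endpoint. A sketch, assuming the admin interface is on the default port 19000:

```
# Sketch: check which endpoints Envoy considers healthy (admin port 19000 assumed)
curl -s http://127.0.0.1:19000/clusters | grep health_flags
```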