Does Consul support CA-signed certificates for TLS communication, and can it be integrated with Vault to obtain certificates from the Vault PKI? We are exploring the option of using the Vault PKI infrastructure, and we're trying to implement Consul TLS communication with certificates generated by the Vault PKI instead of the built-in Consul CA.
Can anyone suggest or point me to a similar use case if one exists?
Thanks in advance
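For Connect certificates specifically, Consul supports Vault as its CA provider. A minimal sketch of the server agent config, assuming a reachable Vault at `vault.example.com:8200` and the PKI mount paths shown (address, token, and paths are all placeholders):

```hcl
# Consul server config sketch: use Vault as the Connect CA provider.
# Address, token, and PKI mount paths below are placeholder assumptions.
connect {
  enabled     = true
  ca_provider = "vault"

  ca_config {
    address = "https://vault.example.com:8200"
    token   = "<vault-token-with-pki-permissions>"

    # Vault PKI secrets engine mounts Consul will use.
    root_pki_path         = "connect-root"
    intermediate_pki_path = "connect-intermediate"
  }
}
```

For agent TLS (RPC/HTTPS) certificates, a common pattern is to have Vault Agent or consul-template render certificates from a Vault PKI role into the files referenced by Consul's `ca_file`/`cert_file`/`key_file` settings.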
api.default.dc1.internal.af617b02-1e21-52c2-d297-36b92be86af9.consul — I'm not sure what this hexadecimal string signifies.
@johnnyplaydrums I suspect a lot of people end up using consul upstreams (sidecars) in Nomad simply for the convenience of it. In this scenario Nomad gives your tasks a single addr/port to communicate through.
It would be nice to have a similarly convenient setup, but with the option of bypassing the security features (encryption, intentions) of Consul Connect: just the load-balancing and service-discovery pieces.
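For reference, plain Consul service discovery without Connect can already be consumed from a Nomad task via a `template` stanza; a sketch, where the service name `backend` and the destination path are hypothetical:

```hcl
# Nomad task sketch: discover "backend" instances from the Consul catalog
# without Connect. Service name and destination path are assumptions.
template {
  data        = <<-EOF
    {{- range service "backend" }}
    server {{ .Address }}:{{ .Port }}
    {{- end }}
  EOF
  destination = "local/backend.conf"
  change_mode = "restart"
}
```

Note that no mTLS or intentions apply here; the task talks to the discovered addresses directly.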
I have consul-write-interval set to 1s. In the Envoy logs I'll see lines like "add 3 cluster(s), remove 2 cluster(s)", and during that time the cluster shows /failed_eds_health for a few seconds when viewed from the Envoy admin UI. The nodes show as healthy in the Consul UI while this is happening. This started happening after upgrading to Consul 1.10. Has anyone seen this, or does anyone have suggestions?
gcr.io/google_containers/pause-amd64:3.1: API error (500): Get https://gcr.io/v2/: net/http: TLS handshake timeout. We are seeing this when we start a Nomad job, even though we have the pause-amd64 image loaded locally. Since one of our environments has strictly no internet access to the outside world, is there a way in Nomad to force it not to look in the Google container registry?
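One thing worth checking is the Docker driver's `infra_image` option in the Nomad client configuration: pointing it at an image reachable from your air-gapped environment avoids the default reference to gcr.io. A sketch, where the internal registry host is a placeholder:

```hcl
# Nomad client config sketch: override the pause/infra image used by the
# Docker driver. "registry.internal.example" is a placeholder assumption.
plugin "docker" {
  config {
    infra_image = "registry.internal.example/google_containers/pause-amd64:3.1"
  }
}
```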
Hi. I installed Consul on K8s with the following command:
helm -n consul-server install --create-namespace -g hashicorp/consul -f consul-values.yaml
cat consul-values.yaml:

global:
  enabled: true
  name: consul
  acls:
    manageSystemACLs: true
  metrics:
    enabled: true
    enableAgentMetrics: true
  image: "hashicorp/consul:1.10.3"
  imageK8S: "hashicorp/consul-k8s-control-plane:0.36.0"
prometheus:
  enabled: true
server:
  replicas: 1
client:
  enabled: true
connectInject:
  enabled: false
  transparentProxy:
    defaultEnabled: false
ui:
  enabled: true
  service:
    type: LoadBalancer
controller:
  enabled: true
I opened the Ingress endpoint. However, where do I find the token to log in so I can save data under the KV? I always get a 403 since I am not logged in.
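With `manageSystemACLs: true`, the Helm chart stores the bootstrap ACL token in a Kubernetes secret (the name follows the chart's `<global.name>-bootstrap-acl-token` pattern, so with `name: consul` it should be `consul-bootstrap-acl-token`; verify the exact name first). A sketch:

```shell
# Confirm the secret name, then decode the bootstrap token to use as
# the UI login token. Namespace and secret name are assumptions based
# on the install command above.
kubectl -n consul-server get secrets | grep bootstrap-acl-token
kubectl -n consul-server get secret consul-bootstrap-acl-token \
  -o jsonpath='{.data.token}' | base64 -d
```

Paste the decoded value into the token field on the Consul UI login page.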
Hello all! Maybe someone can point me in the right direction.
I am in the process of finalizing a Proof of Concept using Nomad and Consul.
My remaining issue is with Consul Federation.
I currently have 2 separate Nomad clusters and 2 separate Consul clusters.
I have federated the Consul clusters, and when I run "consul members -wan" I can see that all required Consul server nodes are listed across data centres.
I have deployed a Nomad job (docker http-echo) named "webserver": 1 instance on Nomad dc1 and 1 on Nomad dc2, registered to Consul using the following stanza:
service {
  name = "webserver"
  tags = ["webserver"]
  port = "http"
  meta {
    # meta takes key/value pairs; "description" is an arbitrary key
    description = "Consul Connect Test"
  }
}
I used the same service stanza when deploying my job to both Nomad clusters; however, when I log in to the Consul UI, dc1 Consul shows 1 instance of webserver, whilst dc2 Consul shows another instance of webserver.
Is there any way for Consul to be aware that these are in fact replicas of the same deployment?
The idea is to use a single source of truth from Consul to integrate with a load balancer via AS3.
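One common pattern for a cross-datacenter view of a service in a federated setup is a prepared query, which can be resolved via DNS and fail over to other datacenters; a sketch against the local agent (the service name and failover policy here are assumptions):

```shell
# Create a prepared query that serves "webserver" from the local DC and
# fails over to dc2; clients then resolve webserver.query.consul via DNS.
curl -s -X POST http://127.0.0.1:8500/v1/query -d '{
  "Name": "webserver",
  "Service": {
    "Service": "webserver",
    "Failover": { "Datacenters": ["dc2"] }
  }
}'
```

The catalog itself remains per-datacenter; federation does not merge service instances across DCs, so a query layer like this (or a query against each DC) is needed for a single merged view.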
Hi folks, I'm trying to implement a "hackfix" solution to use Consul transparent proxy within Nomad. I'm able to correctly register the Connect Envoy sidecar proxy, and health checks are OK, etc.
However, when I try to curl another Connect-enabled service I'm always greeted with "Empty reply from server", despite the clusters being registered in Envoy's /clusters endpoint, meaning that the outbound traffic grabbed by proxy A is not correctly using mTLS to communicate with service B.
Anyone tried something similar or faced a similar issue?
(moving the same message I pasted in the Nomad group)
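For debugging this kind of thing, the sidecar's Envoy admin endpoint can show whether the outbound capture listener is actually intercepting traffic and whether upstream TLS handshakes are failing; a sketch, assuming the admin interface is on localhost:19001 inside the task's network namespace (check the Envoy args of your sidecar task for the real port):

```shell
# Dump the listener config to inspect the outbound capture listener and
# its filter chains; admin port 19001 is an assumption.
curl -s localhost:19001/config_dump | \
  jq '.configs[] | select(."@type" | test("Listeners"))'

# TLS and connection stats: non-zero ssl.connection_error or
# upstream_cx_connect_fail counters point at a broken mTLS path.
curl -s localhost:19001/stats | grep -E 'ssl|connect_fail'
```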