Sep 15 20:22:46 ip-10-2-26-216 systemd-resolved: /etc/systemd/resolved.conf.d/consul.conf:1: Assignment outside of section. Ignoring.
Sep 15 20:22:46 ip-10-2-26-216 systemd-resolved: /etc/systemd/resolved.conf.d/consul.conf:2: Assignment outside of section. Ignoring.
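That error usually means the drop-in file starts with bare key=value assignments and is missing the `[Resolve]` section header, so systemd-resolved ignores the whole file. A minimal sketch of a corrected file, assuming the intent is to forward `.consul` lookups to a local Consul agent on 127.0.0.1 (the DNS address is an assumption; adjust to your setup):

```ini
# /etc/systemd/resolved.conf.d/consul.conf
# The section header below is required; assignments outside a
# section are ignored by systemd-resolved.
[Resolve]
DNS=127.0.0.1
Domains=~consul
```

After fixing the file, restart the service with `systemctl restart systemd-resolved` and the "Assignment outside of section" messages should stop.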
We have a Consul cluster on VMs, with agents deployed on both VMs and K8s.
It had been working fine, but recently we saw an issue.
Due to an OS upgrade, one of the servers was down. All the VM agents got in sync with the current server peers, but the agents deployed on K8s were somehow still trying to connect to the same server that was down for patching. We saw some delay before the K8s agents picked up the latest state of the servers.
2021-09-10T08:03:59.183Z [ERROR] agent.client: RPC failed to server: method=KVS.List server=**.**.**.**:8300 error="rpc error making call: rpc error getting client: failed to get conn: dial tcp <nil>->**.**.**.**:8300: connect: connection refused"
Is there any setting in the Helm chart that would help the agents sync immediately and avoid this issue?
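One thing worth checking is that the K8s client agents are given the full list of server addresses to retry-join, rather than a single server. A sketch of the relevant Helm values, assuming the chart's `client.join` setting and with placeholder server IPs (both are assumptions to adapt to your environment):

```yaml
# consul-values.yaml (fragment) - hypothetical server addresses
client:
  enabled: true
  # List every server so a client can fail over when one is
  # down for patching, instead of retrying a single address.
  join:
    - "10.2.26.10"
    - "10.2.26.11"
    - "10.2.26.12"
```

With all servers listed, a client whose current server goes away can re-resolve and connect to a surviving peer instead of waiting on the downed one.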
Does Consul support CA-signed certificates for TLS communication, and can it be integrated with Vault to get certificates from Vault PKI? We are exploring the option of using a Vault PKI infrastructure, and we are trying to implement Consul TLS communication with certificates generated by Vault PKI instead of the built-in Consul CA.
Please suggest, or help point me to a similar use case if one exists.
Thanks in advance
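For the Connect certificate authority specifically, Consul can delegate to Vault as the CA provider. A minimal sketch of the server configuration, where the Vault address, token, and PKI mount paths are placeholders for your own values:

```hcl
# Consul server config (fragment) - hypothetical Vault address,
# token, and PKI mount paths.
connect {
  enabled     = true
  ca_provider = "vault"
  ca_config {
    address               = "https://vault.example.com:8200"
    token                 = "<vault-token>"
    root_pki_path         = "connect_root"
    intermediate_pki_path = "connect_intermediate"
  }
}
```

Note this covers Connect service-mesh certificates; certificates for agent RPC/HTTPS TLS are configured separately and can also be issued from a Vault PKI mount and distributed to the agents.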
api.default.dc1.internal.af617b02-1e21-52c2-d297-36b92be86af9.consul. Not sure what this hexadecimal string signifies.
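That string is the cluster's Connect trust domain ID, a UUID generated per cluster and embedded in SPIFFE/SNI names so certificates from one cluster aren't valid in another. You can confirm it against the CA roots endpoint; a sketch, assuming the HTTP API is reachable on the default local address:

```shell
# Query the Connect CA roots; the TrustDomain field should show
# the same UUID that appears in the SNI name.
curl -s http://127.0.0.1:8500/v1/connect/ca/roots | grep TrustDomain
```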
@johnnyplaydrums I suspect a lot of people end up using consul upstreams (sidecars) in Nomad simply for the convenience of it. In this scenario Nomad gives your tasks a single addr/port to communicate through.
It would be nice to have a similarly convenient setup but with the option of bypassing the security features (encryption, intentions) of consul connect. i.e. just the load balancing and service discovery pieces.
consul-write-interval set to 1s.
add 3 cluster(s), remove 2 cluster(s). During that time I'll also see
/failed_eds_health for a few seconds when viewing the cluster from the Envoy admin UI. The nodes show as healthy in the Consul UI while this is happening. This started happening after upgrading to Consul 1.10. Has anyone ever seen this, or have any suggestions?
gcr.io/google_containers/pause-amd64:3.1: API error (500): Get https://gcr.io/v2/: net/http: TLS handshake timeout. We are seeing this when we start a Nomad job, even though we have the pause-amd64 image loaded locally. Since one of our environments has strictly no internet access to the outside world, is there a way in Nomad to force it not to look in the Google container registry?
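The pause container comes from the Docker driver's `infra_image` setting, which you can point at an image in an internal registry (or one pre-loaded locally) so the client never reaches out to gcr.io. A sketch of the client config, where the internal registry path is a placeholder:

```hcl
# Nomad client config (fragment) - hypothetical internal
# registry path; use whatever tag you have loaded locally.
plugin "docker" {
  config {
    # Override the default gcr.io pause image for air-gapped
    # environments.
    infra_image = "registry.internal.example.com/pause-amd64:3.1"
  }
}
```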
Hi. I installed Consul on K8s with the following command:
helm -n consul-server install --create-namespace -g hashicorp/consul -f consul-values.yaml
cat consul-values.yaml
---
global:
  enabled: true
  name: consul
  acls:
    manageSystemACLs: true
  metrics:
    enabled: true
    enableAgentMetrics: true
  image: "hashicorp/consul:1.10.3"
  imageK8S: "hashicorp/consul-k8s-control-plane:0.36.0"
prometheus:
  enabled: true
server:
  replicas: 1
client:
  enabled: true
connectInject:
  enabled: false
  transparentProxy:
    defaultEnabled: false
ui:
  enabled: true
  service:
    type: LoadBalancer
controller:
  enabled: true
I opened the Ingress endpoint. However, where do I find the token to log in so I can save data under the KV? I always get a 403 since I am not logged in.
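With `acls.manageSystemACLs: true`, the chart bootstraps the ACL system and stores the bootstrap token in a Kubernetes secret. A sketch for retrieving it, assuming the secret is named `consul-bootstrap-acl-token` (it is prefixed with the release/global name, so verify with `kubectl get secrets` in your namespace):

```shell
# Read the bootstrap ACL token from the secret created by the
# Helm chart, then paste it into the UI login dialog.
kubectl -n consul-server get secret consul-bootstrap-acl-token \
  -o jsonpath='{.data.token}' | base64 --decode
```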