[WARN]  agent.server.replication.acl.role: ACL replication error (will retry if still leader): error="failed to retrieve remote ACL roles: rpc error getting client: failed to get conn: x509: certificate signed by unknown authority"
[ERROR] agent.server.connect: error performing intention migration in secondary datacenter, will retry: routine="intention config entry migration" error="rpc error getting client: failed to get conn: x509: certificate signed by unknown authority"
[ERROR] agent.server.rpc: RPC failed to server in DC: server=$IP:8300 datacenter=$PRIMARY_DC method=ConfigEntry.ListAll error="rpc error getting client: failed to get conn: x509: certificate signed by unknown authority"
Hello. I'm trying to use Consul with Kubernetes (minikube). When I apply the ServiceIntentions CRDs, I get the following error:
failed calling webhook "mutate-serviceintentions.consul.hashicorp.com": could not get REST client: unable to load root certificates: unable to parse bytes as PEM block
I followed this tutorial and I get the error both with L4 and L7. Note that it works via the UI and the API; only the CRDs fail.
It seems that the above happens mostly when not all Consul services are ready yet (probably because the CRDs are really just making HTTP calls to Consul's API, which would make sense). This is an issue because I'm using tools such as Tilt, which create the CRDs at the same time as they install the Helm charts. Basically, I can only make it work by installing the CRDs manually from the terminal.
Is this a limitation of Consul's CRD implementation? I've used CRDs from other solutions before (Gloo, IIRC) and was able to install them at the same time as other Helm charts and resources, using Tilt, with no issues.
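One possible workaround on the Tilt side, rather than in Consul itself: give the intention manifests their own Tilt resource and make it depend on the Consul release, so the CRs are only applied once the mutating webhook exists. This is only a sketch under assumptions — the resource and file names below are hypothetical, and you'd need to match them to what `tilt up` actually shows for your setup.

```python
# Tiltfile sketch (Starlark). All names here are hypothetical.
# Load the ServiceIntentions manifests.
k8s_yaml('intentions.yaml')

# Group the intention objects into their own Tilt resource and
# make it wait for the Consul release resource, so the
# mutate-serviceintentions webhook is up before the CRs apply.
k8s_resource(
    new_name='intentions',
    objects=['web-to-api:serviceintentions'],  # hypothetical object name
    resource_deps=['consul-server'],           # hypothetical Consul resource
)
```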
Sep 15 20:22:46 ip-10-2-26-216 systemd-resolved: /etc/systemd/resolved.conf.d/consul.conf:1: Assignment outside of section. Ignoring.
Sep 15 20:22:46 ip-10-2-26-216 systemd-resolved: /etc/systemd/resolved.conf.d/consul.conf:2: Assignment outside of section. Ignoring.
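That warning usually means the drop-in file is missing its section header, so systemd-resolved ignores every assignment in it. Assuming the file carries the usual Consul DNS forwarding settings (the exact values below are an assumption — adjust to your setup), it should look something like:

```ini
# /etc/systemd/resolved.conf.d/consul.conf
# The [Resolve] header is required; without it, every line is
# reported as "Assignment outside of section. Ignoring."
[Resolve]
DNS=127.0.0.1
Domains=~consul
```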
We have a Consul cluster on VMs, with agents deployed on both VMs and K8s.
It had been working fine, but recently we saw an issue.
Due to an OS upgrade, one of the servers was down. All the VM agents got in sync with the current server peers, but the agents deployed on K8s were somehow still trying to connect to the same server that was down for patching. We saw some delay before the K8s agents got the latest state of the servers.
2021-09-10T08:03:59.183Z [ERROR] agent.client: RPC failed to server: method=KVS.List server=**.**.**.**:8300 error="rpc error making call: rpc error getting client: failed to get conn: dial tcp <nil>->**.**.**.**:8300: connect: connection refused"
Is there any setting in the Helm chart that would make this sync happen immediately and avoid the issue?
Does Consul support CA-signed certificates for TLS communication, and can it be integrated with Vault to obtain certificates from Vault's PKI? We are exploring the option of using Vault's PKI infrastructure and want to implement Consul TLS communication with certificates generated by Vault PKI instead of the built-in Consul CA.
Please suggest, or point me to a similar use case if one exists.
Thanks in advance
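For illustration, a hedged sketch of what issuing an agent certificate from a Vault PKI role could look like (the mount, role, and common names below are hypothetical — Consul itself only needs the resulting files wired into its `ca_file`/`cert_file`/`key_file` TLS settings, however they are produced):

```shell
# Enable a PKI secrets engine and create a root CA (sketch only).
vault secrets enable pki
vault write pki/root/generate/internal \
    common_name="dc1.consul" ttl=87600h

# Define a role allowed to issue Consul server certificates.
vault write pki/roles/consul-server \
    allowed_domains="dc1.consul" allow_subdomains=true max_ttl=720h

# Issue a server certificate; the response contains the cert,
# private key, and issuing CA to place in Consul's TLS config.
vault write pki/issue/consul-server \
    common_name="server.dc1.consul" ip_sans="127.0.0.1"
```

In practice you would typically automate the renewal with consul-template or Vault Agent rather than issuing certificates by hand.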
api.default.dc1.internal.af617b02-1e21-52c2-d297-36b92be86af9.consul. Not sure what this hexadecimal string signifies.
@johnnyplaydrums I suspect a lot of people end up using Consul upstreams (sidecars) in Nomad simply for convenience. In this scenario Nomad gives your tasks a single addr/port to communicate through.
It would be nice to have a similarly convenient setup, but with the option of bypassing the security features (encryption, intentions) of Consul Connect, i.e. just the load balancing and service discovery pieces.
consul-write-interval set to 1s.
add 3 cluster(s), remove 2 cluster(s). During that time I'll also see /failed_eds_health for a few seconds when viewing the cluster from the Envoy admin UI. The nodes show as healthy in the Consul UI while this is happening. This started happening after upgrading to Consul 1.10. Has anyone seen this, or do you have any suggestions?
gcr.io/google_containers/pause-amd64:3.1: API error (500): Get https://gcr.io/v2/: net/http: TLS handshake timeout
We are seeing this when we start a Nomad job, even though we have the pause-amd64 image loaded locally. Since one of our environments has strictly no internet access to the outside world, is there a way in Nomad to force it not to look in the Google container registry?
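One option worth trying, assuming the pause image can be mirrored into a registry your environment can reach: the Nomad Docker driver lets you override which infra (pause) image it uses via its plugin config. The registry hostname below is hypothetical.

```hcl
# Nomad client configuration sketch: point the Docker driver's
# infra image at an internally reachable mirror instead of gcr.io.
plugin "docker" {
  config {
    # registry.internal.example is a placeholder for your mirror.
    infra_image = "registry.internal.example/pause-amd64:3.1"
  }
}
```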
Hi. I installed Consul on K8s with the following command:
helm -n consul-server install --create-namespace -g hashicorp/consul -f consul-values.yaml
$ cat consul-values.yaml
---
global:
  enabled: true
  name: consul
  acls:
    manageSystemACLs: true
  metrics:
    enabled: true
    enableAgentMetrics: true
  image: "hashicorp/consul:1.10.3"
  imageK8S: "hashicorp/consul-k8s-control-plane:0.36.0"
prometheus:
  enabled: true
server:
  replicas: 1
client:
  enabled: true
connectInject:
  enabled: false
  transparentProxy:
    defaultEnabled: false
ui:
  enabled: true
  service:
    type: LoadBalancer
controller:
  enabled: true
I opened the Ingress endpoint. However, where do I find the token to log in so I can save data under the KV store? I always get a 403 since I am not logged in.
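Since `manageSystemACLs: true` is set, the chart stores a bootstrap token in a Kubernetes Secret in the release namespace. A sketch of retrieving it — the exact secret name depends on the release name (which `-g` auto-generated here), so list the secrets first and substitute yours:

```shell
# Find the secret whose name ends in "-bootstrap-acl-token".
kubectl get secrets -n consul-server

# Decode the token (secret name below is a guess — use the one
# you found above); paste the result into the UI login dialog.
kubectl get secret -n consul-server consul-bootstrap-acl-token \
  -o jsonpath='{.data.token}' | base64 -d
```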