I need your expert comments on the error below:
2021-09-08T09:18:07.191Z [ERROR] agent.server.memberlist.lan: memberlist: Failed fallback ping: EOF
2021-09-08T09:18:54.191Z [ERROR] agent.server.memberlist.lan: memberlist: Failed fallback ping: EOF
2021-09-08T09:21:29.190Z [ERROR] agent.server.memberlist.lan: memberlist: Failed fallback ping: EOF
2021-09-08T09:22:25.191Z [ERROR] agent.server.memberlist.lan: memberlist: Failed fallback ping: EOF
My network connectivity to all the servers in the cluster is fine.
I am able to list the server members and am not facing any other issues, but the agent still keeps logging this error message, which I am unable to understand.
Please suggest what is happening here.
Thanks in advance
`join -wan` and the primary cluster sees the new cluster server as a gossip node
`consul members` on a server in the new DC is yielding
`403 ACL not found`, but I guess that is expected until replication is successful (and that the cert issues are preventing replication)
[WARN] agent.server.replication.acl.role: ACL replication error (will retry if still leader): error="failed to retrieve remote ACL roles: rpc error getting client: failed to get conn: x509: certificate signed by unknown authority"
[ERROR] agent.server.connect: error performing intention migration in secondary datacenter, will retry: routine="intention config entry migration" error="rpc error getting client: failed to get conn: x509: certificate signed by unknown authority"
[ERROR] agent.server.rpc: RPC failed to server in DC: server=$IP:8300 datacenter=$PRIMARY_DC method=ConfigEntry.ListAll error="rpc error getting client: failed to get conn: x509: certificate signed by unknown authority"
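The "certificate signed by unknown authority" errors above suggest the secondary DC's servers do not trust the CA that signed the primary DC's server certificates. A minimal sketch of the relevant server config for the secondary, where the file paths and datacenter names are placeholders for illustration:

```hcl
# Secondary-DC server config sketch -- paths and DC names are placeholders.
datacenter         = "dc2"
primary_datacenter = "dc1"
verify_incoming    = true
verify_outgoing    = true

# This CA bundle must include the CA that signed the primary DC's server
# certificates; otherwise cross-DC RPC (ACL replication, intention
# migration) fails with "x509: certificate signed by unknown authority".
ca_file   = "/etc/consul.d/tls/consul-agent-ca.pem"
cert_file = "/etc/consul.d/tls/dc2-server-consul-0.pem"
key_file  = "/etc/consul.d/tls/dc2-server-consul-0-key.pem"
```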
Hello. I'm trying to use Consul with Kubernetes (minikube). I'm trying to use the CRDs for Service Intentions, yet when I apply them, I get the following error
failed calling webhook "mutate-serviceintentions.consul.hashicorp.com": could not get REST client: unable to load root certificates: unable to parse bytes as PEM block
I followed this tutorial, and I get the error with both L4 and L7 intentions. Note that it works via the UI and the API; only the CRDs fail.
It seems that the above happens mostly when the Consul services are not ready yet (probably because the CRDs are actually just making HTTP calls to Consul's API, which would make sense). This is an issue since I'm using tools such as Tilt, which create the CRDs at the same time they install the Helm charts. Basically, I'd only be able to make it work by installing the CRDs manually through the terminal.
Is this a limitation of Consul's CRD implementation? I've used CRDs from other solutions before (Gloo, IIRC) and have been able to install them at the same time as other Helm charts and resources, using Tilt, with no issues.
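One workaround, sketched below with a hypothetical Tiltfile (the resource and object names `consul-server` and `web-to-api` are assumptions and must match your own manifests), is to tell Tilt not to apply the ServiceIntentions objects until the Consul pods and their webhook are ready:

```python
# Tiltfile sketch -- names below are placeholders for your setup.
k8s_yaml(helm('./charts/consul', name='consul'))  # install the Consul Helm chart
k8s_yaml('intentions.yaml')                       # the ServiceIntentions CRs

# Group the intention objects into their own Tilt resource and delay
# applying them until the Consul server resource is ready, so the
# mutating webhook can serve its certificates.
k8s_resource(
    new_name='intentions',
    objects=['web-to-api:ServiceIntentions'],
    resource_deps=['consul-server'],
)
```

This only sequences the apply; it does not change Consul itself.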
Sep 15 20:22:46 ip-10-2-26-216 systemd-resolved: /etc/systemd/resolved.conf.d/consul.conf:1: Assignment outside of section. Ignoring.
Sep 15 20:22:46 ip-10-2-26-216 systemd-resolved: /etc/systemd/resolved.conf.d/consul.conf:2: Assignment outside of section. Ignoring.
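That warning usually means the drop-in file starts with key=value assignments but is missing its `[Resolve]` section header. A minimal sketch of a valid `/etc/systemd/resolved.conf.d/consul.conf`, assuming the conventional Consul DNS setup on port 8600:

```ini
# /etc/systemd/resolved.conf.d/consul.conf
# The [Resolve] header must come first; without it, systemd-resolved logs
# "Assignment outside of section. Ignoring." for every assignment line.
[Resolve]
DNS=127.0.0.1:8600
Domains=~consul
```

Note that the `address:port` syntax for `DNS=` requires a recent systemd (v246 or later); on older versions you would instead forward port 53 queries for `.consul` to port 8600.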
We have a Consul cluster on VMs, and we have agents deployed on both VMs and K8s.
It had been working fine, but recently we saw an issue.
Due to an OS upgrade, one of the servers was down. All the VM agents got in sync with the current server peers, but somehow the agents deployed on K8s were still trying to connect to the same server that was down for patching. We saw some delay before the K8s agents picked up the latest state of the servers.
2021-09-10T08:03:59.183Z [ERROR] agent.client: RPC failed to server: method=KVS.List server=**.**.**.**:8300 error="rpc error making call: rpc error getting client: failed to get conn: dial tcp <nil>->**.**.**.**:8300: connect: connection refused"
Is there any setting in the Helm chart which can help with immediate sync and avoid this issue?
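As an illustration only (the `client.join` key below comes from the official consul-helm chart, but verify it against your chart version, and the server addresses are placeholders), listing every server, or a DNS name resolving to all of them, lets a client retry against the remaining peers when one server is down for patching:

```yaml
# values.yaml sketch -- server addresses are placeholders.
client:
  enabled: true
  # Give clients the full server set so a single server being down
  # for an OS upgrade does not strand them on a dead address.
  join:
    - "10.2.0.11"
    - "10.2.0.12"
    - "10.2.0.13"
```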
Does Consul support CA-signed certificates for TLS communication, and can it be integrated with Vault to get certificates from Vault's PKI? We are exploring the option of using the Vault PKI infrastructure, and are trying to implement Consul TLS communication with certificates generated by Vault PKI instead of the built-in Consul CA.
Please suggest, or help by pointing me to a similar use case if one exists.
Thanks in advance
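For the service-mesh (Connect) side, Consul does document a Vault CA provider. A minimal sketch of the relevant server agent config, where the Vault address, token, and PKI mount paths are placeholders:

```hcl
# Server agent config sketch -- address, token, and paths are placeholders.
connect {
  enabled     = true
  ca_provider = "vault"
  ca_config {
    address               = "https://vault.example.com:8200"
    token                 = "s.XXXXXXXX"            # token with access to the PKI mounts
    root_pki_path         = "connect-root"          # Vault PKI engine for the root CA
    intermediate_pki_path = "connect-intermediate"  # PKI engine for the signing intermediate
  }
}
```

Agent-to-agent RPC TLS is configured separately (`ca_file`, `cert_file`, `key_file`), and those certificates can also be issued from a Vault PKI mount by your own tooling.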
api.default.dc1.internal.af617b02-1e21-52c2-d297-36b92be86af9.consul. I am not sure what this hexadecimal string signifies.
@johnnyplaydrums I suspect a lot of people end up using consul upstreams (sidecars) in Nomad simply for the convenience of it. In this scenario Nomad gives your tasks a single addr/port to communicate through.
It would be nice to have a similarly convenient setup, but with the option of bypassing the security features (encryption, intentions) of Consul Connect, i.e. just the load balancing and service discovery pieces.
`consul-write-interval` set to 1s.
add 3 cluster(s), remove 2 cluster(s). During that time I'll also see
`/failed_eds_health` for a few seconds when viewing the cluster from the Envoy admin UI. The nodes show as healthy in the Consul UI while this is happening. This started happening after upgrading to Consul 1.10. Has anyone ever seen this, or have any suggestions?