For complex issues please use https://discuss.hashicorp.com/c/consul/, https://github.com/hashicorp/consul/issues or https://groups.google.com/forum/#!forum/consul-tool.
We have consul-write-interval set to 1s. Envoy periodically logs "add 3 cluster(s), remove 2 cluster(s)", and during that time I'll also see /failed_eds_health for a few seconds when viewing the cluster from the Envoy admin UI. The nodes show as healthy in the Consul UI while this is happening. This started after upgrading to Consul 1.10. Has anyone seen this or have any suggestions?
gcr.io/google_containers/pause-amd64:3.1: API error (500): Get https://gcr.io/v2/: net/http: TLS handshake timeout
We are seeing this when we start a Nomad job, even though we have the pause-amd64 image loaded locally. Since one of our environments has strictly no internet access to the outside world, is there a way in Nomad to force it not to reach out to the Google Container Registry?
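If the failing pull is for Docker's pause/infra image (which Nomad's Docker driver uses for group networking) rather than the job image itself, one avenue is pointing the driver at an image you host yourself. A minimal sketch of a Nomad client config, assuming the Docker driver's infra_image option is available in your version; registry.internal is a hypothetical internal mirror:

    # Nomad client configuration (sketch).
    plugin "docker" {
      config {
        # Pin the pause container to a locally mirrored image so the
        # client never needs to reach gcr.io; adjust the name/tag to
        # whatever you have loaded or mirrored internally.
        infra_image = "registry.internal/google_containers/pause-amd64:3.1"
      }
    }

If the image already exists locally under the configured name and tag, the Docker daemon should be able to create the container without contacting the registry.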
Hi. I installed Consul on K8s with the following command:
helm -n consul-server install --create-namespace -g hashicorp/consul -f consul-values.yaml
cat consul-values.yaml
---
global:
  enabled: true
  name: consul
  acls:
    manageSystemACLs: true
  metrics:
    enabled: true
    enableAgentMetrics: true
  image: "hashicorp/consul:1.10.3"
  imageK8S: "hashicorp/consul-k8s-control-plane:0.36.0"
prometheus:
  enabled: true
server:
  replicas: 1
client:
  enabled: true
connectInject:
  enabled: false
  transparentProxy:
    defaultEnabled: false
ui:
  enabled: true
  service:
    type: LoadBalancer
controller:
  enabled: true
I opened the Ingress endpoint. However, where do I find the token to log in so I can save data under the KV? I always get a 403 since I am not logged in.
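With manageSystemACLs: true, the chart bootstraps the ACL system and stores the bootstrap token in a Kubernetes secret named after the Helm release. A sketch for pulling it out (the release name was generated with -g, so substitute yours for <release>):

    kubectl get secret -n consul-server <release>-consul-bootstrap-acl-token \
      -o jsonpath='{.data.token}' | base64 -d

Pasting that token into the UI's login dialog should give you write access to the KV store.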
Hello all! Maybe someone can point me in the right direction.
I am in the process of finalizing a Proof of Concept using Nomad and Consul.
My remaining issue is with Consul Federation.
I currently have 2 separate Nomad clusters and 2 separate Consul clusters.
I have federated the Consul clusters, and when I run "consul members -wan" I can see that all required Consul server nodes are listed across data centres.
I have deployed a nomad job (docker http-echo) named "webserver". I have deployed 1 instance of this on Nomad dc1 and Nomad dc2, and registered it to Consul using the following stanza:
service {
  name = "webserver"
  tags = ["webserver"]
  port = "http"

  meta {
    meta = "Consul Connect Test"
  }
}
I used the same service stanza when deploying my job to both Nomad clusters. However, when I log in to the Consul UI, dc1's Consul shows one instance of webserver, while dc2's Consul shows the other.
Is there any way to make Consul aware that these are in fact replicas of the same deployment?
The idea is to use Consul as a single source of truth to integrate with a load balancer via AS3.
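Worth noting: WAN federation doesn't merge catalogs; each datacenter keeps its own, which is why each UI shows only its local instance. One sketch for getting a single query name that spans datacenters is a prepared query with failover (it returns the remote instances only when the local ones are unhealthy, so it may or may not suit an AS3 integration); assuming a local agent on port 8500:

    curl --request POST http://localhost:8500/v1/query --data '
    {
      "Name": "webserver",
      "Service": {
        "Service": "webserver",
        "Failover": {
          "Datacenters": ["dc2"]
        }
      }
    }'

The query is then also resolvable via DNS as webserver.query.consul.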
Hi folks, I'm trying to implement a "hackfix" solution to use Consul transparent proxy within Nomad. I'm able to correctly register the Connect Envoy sidecar proxy, health checks are OK, etc.
However, when I curl another Connect-enabled service I'm always greeted by "Empty reply from server", despite the clusters being registered in Envoy's /clusters endpoint, which suggests the outbound traffic grabbed by proxy A is not correctly using mTLS to communicate with service B.
Anyone tried something similar or faced a similar issue?
(moving same message I pasted in nomad group)
export GOPATH=/opt/gows
git cloned consul and ran make tools
make dev throws the following error:
$ make dev
==> Building Consul - OSes: linux, Architectures: amd64
Building sequentially with go install
---> linux/amd64
cp: cannot stat '/opt/gows/bin/consul': No such file or directory
ERROR: Failed to build Consul for linux/amd64
make: *** [GNUmakefile:150: dev-build] Error 1
Does anyone know why this is failing?
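One guess (an assumption, not a confirmed diagnosis): the Makefile copies the freshly built binary out of $GOPATH/bin, but go install writes to $GOBIN when that variable is set, so a GOBIN pointing elsewhere would leave /opt/gows/bin/consul missing. A sketch to rule that out:

    # Make sure go install and the Makefile agree on the output directory.
    export GOPATH=/opt/gows
    export GOBIN=$GOPATH/bin   # go install drops binaries here
    export PATH=$GOBIN:$PATH
    make dev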
hey folks - I'm beginning to use consul-connect on Kubernetes. I want to set forward_client_cert_details: ALWAYS_FORWARD_ONLY as a default in the public and outbound Envoy listeners, and I'm struggling to find a straightforward way to do this. Would the only option be the escape hatches: https://www.consul.io/docs/connect/proxies/envoy#envoy_public_listener_json and https://www.consul.io/docs/connect/proxies/envoy#envoy_listener_json ?
If I were to use the escape hatch approach, would I need to wire it up with things such as the IP address and port number that would otherwise be configured dynamically, e.g.:
dynamic_listeners": [
{
"name": "public_listener:100.96.140.127:20000",
"active_state": {
"version_info": "509d3db3174c07668c164b6772525adbb945e5fcbacaeddacf9364512e06d91b",
"listener": {
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
"name": "public_listener:100.96.140.127:20000",
"address": {
"socket_address": {
"address": "100.96.140.127",
"port_value": 20000
}
},
Any alternative methods to solve this? Could it be configured via bootstrap somehow?
That way, service-consuming apps don't have to worry about setting a header, and if one is already improperly set, it would be overwritten to maintain the desired active/hot-standby service routing. I can confirm maglev working properly by having Envoy add a response header carrying %UPSTREAM_REMOTE_ADDRESS% while I manually set a static maglev hashing header, for example with curl.
However, if I add that same static hashing header in a service-router config entry, which should be evaluated earlier in Consul's traffic-management pipeline than the service-resolver (routing -> splitting -> resolution), maglev consistent hashing doesn't work. I've checked with tcpdump in the upstream service environment and can see that the static hashing header was properly injected. It just seems like maglev doesn't recognize it when it's injected by the router's HTTPHeaderModifiers instead of being added manually by the calling app.
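For reference, a sketch of the two config entries as I understand the setup; the service name "backend" and header "x-session-id" are hypothetical placeholders, and each entry would be written separately with consul config write:

    # service-router: inject a static hashing header on every route.
    Kind = "service-router"
    Name = "backend"
    Routes = [
      {
        Match {
          HTTP {
            PathPrefix = "/"
          }
        }
        Destination {
          RequestHeaders {
            Set = {
              "x-session-id" = "standby-group-1"   # hypothetical static value
            }
          }
        }
      }
    ]

    # service-resolver: hash on that header with maglev.
    Kind = "service-resolver"
    Name = "backend"
    LoadBalancer {
      Policy = "maglev"
      HashPolicies = [
        {
          Field      = "header"
          FieldValue = "x-session-id"
        }
      ]
    }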
Hello, when I apply a ProxyDefaults resource on Kubernetes (1.18), this error appears in the consul-controller logs:
2021-12-01T13:15:21.855Z INFO webhooks.proxydefaults validate create {"name": "global"}
2021-12-01T13:15:21.971Z INFO controller.proxydefaults config entry not found in consul {"request": "default/global"}
2021-12-01T13:15:21.978Z INFO controller.proxydefaults config entry created {"request": "default/global", "request-time": "3.458308ms"}
2021-12-01T13:15:22.004Z ERROR controller.proxydefaults Reconciler error {"reconciler group": "consul.hashicorp.com", "reconciler kind": "ProxyDefaults", "name": "global", "namespace": "default", "error": "Operation cannot be fulfilled on proxydefaults.consul.hashicorp.com \"global\": the object has been modified; please apply your changes to the latest version and try again"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.2/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.2/pkg/internal/controller/controller.go:227
Here's what I'm applying:
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global
spec:
  config:
    envoy_public_listener_json: "{\"@type\":\"type.googleapis.com/envoy.config.listener.v3.Listener\",\"name\":\"public_listener:0.0.0.0:20000\",\"address\":{\"socket_address\":{\"address\":\"0.0.0.0\",\"port_value\":20000}},\"filterChains\":[{\"filters\":[{\"name\":\"envoy.filters.network.http_connection_manager\",\"typed_config\":{\"@type\":\"type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\",\"stat_prefix\":\"public_listener\",\"forward_client_cert_details\":\"APPEND_FORWARD\",\"set_current_client_cert_details\":{\"subject\":true,\"dns\":true,\"uri\":true},\"route_config\":{\"name\":\"public_listener\",\"virtual_hosts\":[{\"name\":\"public_listener\",\"domains\":[\"*\"],\"routes\":[{\"match\":{\"prefix\":\"/\"},\"route\":{\"cluster\":\"local_app\"}}]}]},\"http_filters\":[{\"name\":\"envoy.filters.http.router\"}]}}]}]}"
Error aside, it does appear to have created the resource ok:
kubectl get proxydefaults.consul.hashicorp.com
NAME     SYNCED   LAST SYNCED   AGE
global   True     17m           17m
Is this anything to worry about?
apiVersion: consul.hashicorp.com/v1alpha1
kind: IngressGateway
metadata:
  name: ingress-gateway
spec:
  tls:
    enabled: true
  listeners:
    - port: 8080
      protocol: http
      services:
        - name: frontend
          hosts:
            - "frontend.xxx.xxxx.com"
            - "localhost"
Internal error occurred: failed calling webhook "consul-connect-injector.consul.hashicorp.com": Post "https://consul-connect-injector-svc.consul.svc:443/mutate?timeout=10s": service "consul-connect-injector-svc" not found
kubectl exec consul-consul-server-0 -n consul -- curl -sk https://localhost:8501/v1/connect/ca/roots | jq -r .Roots[0].RootCert
Could you please advise?
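For what it's worth, that webhook error usually means the MutatingWebhookConfiguration references a Service (consul-connect-injector-svc) that doesn't exist in the consul namespace, e.g. after a chart upgrade renamed the injector service. Plain kubectl can show the mismatch (the configuration name below is found from the first command, not assumed):

    # Find the injector webhook configuration...
    kubectl get mutatingwebhookconfigurations
    # ...see which Service it points at...
    kubectl get mutatingwebhookconfiguration <name> -o yaml | grep -A4 clientConfig
    # ...and compare with the services that actually exist.
    kubectl get svc -n consul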
Hi, I need a bit of clarification on the following:
The service name registered in Consul will be set to the name of the Kubernetes service associated with the Pod. This can be customized with the consul.hashicorp.com/connect-service annotation. If using ACLs, this name must be the same as the Pod's ServiceAccount name.
Is it mandatory for the service name to be the same as the ServiceAccount name in all cases, or only when we use the consul.hashicorp.com/connect-service annotation?
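For context, a sketch of where that annotation sits in a Pod template; the name "backend" is a hypothetical placeholder, and per the quoted docs the annotation value must match the Pod's ServiceAccount name when ACLs are enabled:

    # Pod template in a Deployment (hypothetical names).
    metadata:
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
        # Overrides the registered service name; with manageSystemACLs
        # this must equal the Pod's ServiceAccount name.
        "consul.hashicorp.com/connect-service": "backend"
    spec:
      serviceAccountName: backend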