    Matt Darcy
    @ikonia
I can’t get a clear picture in my head of what’s happening here, the logs show me that the connection is coming from 10.11.216.64 (the NAS box) - how is this happening with the consul agent not running?
    secondly, I don’t understand what the RPC error actually is, as an RPC error this generic could mean many things
just for completeness, the cluster leader is also getting the same error
    Jul 21 10:10:53 wesley consul[15814]: 2021-07-21T10:10:53.570Z [ERROR] agent.server.rpc: unrecognized RPC byte: byte=71 conn=from=10.11.216.64:52784
    Matt Darcy
    @ikonia
on the nas I can even do a ‘consul leave’ to gracefully leave the cluster, and the 3 cluster raft servers are still getting the RPC error in their logs
    Matt Darcy
    @ikonia
    ahhhhh I see the problem
it’s prometheus running on the same node, querying the consul servers on the wrong port: consul is expecting RPC traffic and prometheus is scraping with a plain TCP request, hence the unrecognised RPC byte error
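The fix Matt implies can be sketched as a scrape job pointed at Consul's HTTP API port (8500 by default) rather than the 8300 server RPC port. The job name, target address, and the choice of the `/v1/agent/metrics` endpoint here are assumptions, not taken from the thread:

```yaml
# Hypothetical prometheus scrape job for consul metrics.
scrape_configs:
  - job_name: 'consul'
    metrics_path: '/v1/agent/metrics'
    params:
      format: ['prometheus']
    static_configs:
      - targets: ['10.11.216.64:8500']   # HTTP API port, not the 8300 server RPC port
```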
    sgtang
    @sgtang
    Hi all, we've been testing out consul connect ca set-config to rotate between Vault CA endpoints gracefully. The issue is that while existing proxies work fine during the rotation process, new proxies can't seem to reference the new CA bundle until the Consul leader is restarted and an election is forced. Restarting the leader immediately after setting the config causes old proxies to break for a few minutes, however, so this isn't an option. Has anyone dealt with this before? We are on Consul 1.9.6, Envoy 1.16.4.
    David
    @david-antiteum
Hi all, after upgrading a linux server from U16 to U18 (without touching consul) I'm getting the error: Node name XXX is reserved by node YYY. I have tried a number of things (leave, force-leave, using the API to deregister and then register with the new ID..) without result. Is there any way to set the new ID in the cluster? Anything else I can try? We are using consul 1.8.4. Thanks!
    Matt Darcy
    @ikonia
    what’s u16/u18 ?
    David
    @david-antiteum
    Ubuntu 16 -> Ubuntu 18
    Matt Darcy
    @ikonia
does the node ID file show the same node ID as the conflict?
(did you not want to move to Ubuntu 20.04?)
    David
    @david-antiteum
    where is the node-id file?
    We cannot upgrade to Ubuntu 20 :(
    Matt Darcy
    @ikonia
in your data dir there is a file called node-id
    David
    @david-antiteum
Yes, same ID. The node-id file has the new id.
Will stopping consul and editing node-id with the old value solve the problem?
    David
    @david-antiteum
    Well, I did just that and now the problem is gone :)
Thanks a lot @ikonia although I’m not sure if this was the correct way to fix the issue
    Matt Darcy
    @ikonia
I wouldn’t put money on it being the correct way…
but that file seems to cause lots of problems if the instance changes in some way / is replaced with the same hostname
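The fix David applied can be sketched as follows. The paths and the node ID values here are placeholders (the real path depends on your `-data-dir` setting), and a throwaway directory stands in for the real data dir so the steps are safe to run anywhere:

```shell
# Sketch of the node-id fix, assuming placeholder IDs and a stand-in data dir.
DATA_DIR=$(mktemp -d)                               # stand-in for the real data dir
echo 'new-conflicting-id' > "$DATA_DIR/node-id"     # state after the OS upgrade

# On the real host you would stop the agent first:
#   systemctl stop consul
echo 'old-node-id-from-the-cluster' > "$DATA_DIR/node-id"   # restore the ID the cluster has on record
#   systemctl start consul

cat "$DATA_DIR/node-id"
```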
    Roi Ezra
    @ezraroi
hi all, so we are running consul at heavy scale (around 12K nodes in a cluster). We are seeing something very weird. We have nodes that exist in the nodes catalog but not in the consul members (serf). This is reflected in the consul UI as well: those nodes appear without a serf health check, but they do appear in the UI. Of course the consul agent is running on those hosts and from the agent’s perspective all is fine, although it is not listed in the members of the cluster. Any help would be great, we have been banging our heads against the wall with this for a long time
    Pierre Souchay
    @pierresouchay
You probably have to deregister those nodes using a catalog deregister call. This kind of issue usually arises when the cluster is too loaded and/or breaks.
    gc-ss
    @gc-ss
    @ezraroi What's the CPU/RAM/pressure metrics of the consul servers?
    Roi Ezra
    @ezraroi
    Thanks for your replies.
    @pierresouchay , our cluster is not that loaded from a CPU or memory perspective. Also, calling catalog deregister is problematic as those nodes are actually running and they think they are part of the cluster. The only valid solution we have found was restarting the node itself, but at our scale checking this is not trivial. It feels like something is broken when syncing the serf members and the catalog if we end up in such cases. @gc-ss Attaching CPU load on the server hosts
    Pierre Souchay
    @pierresouchay
Deregistering the node would not be an issue; the node would register again when anti-entropy triggers (I would say around 10/15 minutes on such a big cluster). If it does not, it means something is broken on the node itself, so restarting the node looks like the only valid option. Nothing weird in those nodes’ logs?
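The deregister call Pierre describes goes through the `/v1/catalog/deregister` endpoint. A minimal sketch, where the datacenter, node name, and server address are all placeholders (the live `curl` is left commented out so the sketch runs anywhere):

```shell
# Sketch of a catalog deregister call; datacenter and node name are placeholders.
CONSUL_HTTP_ADDR=${CONSUL_HTTP_ADDR:-http://127.0.0.1:8500}
PAYLOAD='{"Datacenter":"dc1","Node":"stale-node-01"}'

# Against a live cluster, the actual request would be:
#   curl -s -X PUT -d "$PAYLOAD" "$CONSUL_HTTP_ADDR/v1/catalog/deregister"
echo "$PAYLOAD"
```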
    Pierre Souchay
    @pierresouchay
@ezraroi this could also be an ACL issue if you changed something (but you should have something in the logs in that case)
    Roi Ezra
    @ezraroi
@pierresouchay thanks. We are not using ACLs. Anti-entropy should fix the issue even if I don’t deregister the node, right? That does not happen either
    Pierre Souchay
    @pierresouchay
    @ezraroi So, probably the agents without health info are broken for some reason... Do you have some logs on those agents? Try requesting the logs in debug mode using the consul monitor command on one of those agents
    John Spencer
    @johnnyplaydrums
Hey folks - does anyone know how the hexadecimal value at the beginning of the service name that's returned from a dig SRV is generated? Is there any way to know this value from a service running in consul (via env var, or some deterministic method)? For example, when you dig for an SRV record, you're given a dns name like ac1f8802.addr.dc1.consul. Where does that ac1f8802 come from, and is it possible to know that from within a service running in consul?
    Michael Aldridge
    @the-maldridge
    @johnnyplaydrums that's an IP address
    172.31.136.2
    John Spencer
    @johnnyplaydrums
    How can I go from IP address to the value? @the-maldridge
    Like if I'm a service and want to know that hostname, it looks like I can derive it from <hex_value>.addr.<dc>.<tld>. Is there an easy way to go from IP address -> hex value?
    gc-ss
    @gc-ss
    Math. You can try it out at: https://www.browserling.com/tools/ip-to-hex
    John Spencer
    @johnnyplaydrums
    I love Math.
    Thank you sir
    🙏
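The "math" gc-ss points to is just hex-encoding the four octets of the address. A small sketch, both directions (the function names are made up for illustration):

```shell
# Convert a dotted-quad IPv4 address to the hex form consul uses in
# <hex>.addr.<dc>.consul names, and back again.
ip_to_hex() {
  # word-splitting the octets into four printf arguments is intentional
  printf '%02x%02x%02x%02x\n' $(echo "$1" | tr '.' ' ')
}
hex_to_ip() {
  printf '%d.%d.%d.%d\n' \
    "0x$(echo "$1" | cut -c1-2)" "0x$(echo "$1" | cut -c3-4)" \
    "0x$(echo "$1" | cut -c5-6)" "0x$(echo "$1" | cut -c7-8)"
}

ip_to_hex 172.31.136.2   # -> ac1f8802
hex_to_ip ac1f8802       # -> 172.31.136.2
```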
    Willi Schönborn
    @whiskeysierra
    I didn't find anything in the docs, so I'm asking here. Does the transparent proxy support Consul's own DNS as well, instead of Kubernetes DNS? We're running multiple clusters, so Kubernetes DNS won't do any good for us. But we do have routeable pod IPs, which means two pods from different clusters can talk to one another.
    Spencer Owen
    @spuder
Is there an automated way to upgrade consul_intention resources (< 1.9) to the new consul_config_entry syntax with terraform? I have hundreds of consul intentions and changing all these by hand is going to take forever.
    Example
    # This was correct in version 2.10.0
    resource "consul_intention" "database" {
      source_name      = "api"
      destination_name = "db"
      action           = "allow"
    }
    
    # This is now the correct configuration starting version 2.11.0
    resource "consul_config_entry" "database" {
      name = "db"
      kind = "service-intentions"
    
      config_json = jsonencode({
        Sources = [{
          Action     = "allow"
          Name       = "api"
          Precedence = 9
          Type       = "consul"
        }]
      })
    }
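One hedged approach to the bulk conversion, since each new block follows the same template: generate the consul_config_entry HCL from a simple "source destination action" list. The resource naming scheme and the omission of Precedence are assumptions; adjust to match your setup:

```shell
# Hypothetical bulk conversion: emit one consul_config_entry block per old
# intention, read as "source destination action" lines on stdin.
generate_intentions() {
  while read -r SRC DST ACTION; do
    cat <<EOF
resource "consul_config_entry" "${DST}" {
  name = "${DST}"
  kind = "service-intentions"

  config_json = jsonencode({
    Sources = [{
      Action = "${ACTION}"
      Name   = "${SRC}"
      Type   = "consul"
    }]
  })
}
EOF
  done
}

printf 'api db allow\n' | generate_intentions
```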
    johnny101
    @johnny101:matrix.org
When running consul connect in Nomad with an envoy sidecar, the consul agent and envoy sidecar container stderr logs show the grpc permission-related errors below. Is anyone familiar with this or how to debug it?
    # From consul agent on the host (log level is trace):
    agent.envoy.xds: Incremental xDS v3: xdsVersion=v3 direction=request protobuf="{ "typeUrl": "type.googleapis.com/envoy.config.cluster.v3.Cluster"
    agent.envoy.xds: subscribing to type: xdsVersion=v3 typeUrl=type.googleapis.com/envoy.config.cluster.v3.Cluster
    agent.envoy.xds: watching proxy, pending initial proxycfg snapshot for xDS: service_id=_nomad-task-6227f408-bee9-77fa-529f-924164f42b80-group-api-count-api-9001-sidecar-proxy xdsVersion=v3
    agent.envoy.xds: Got initial config snapshot: service_id=_nomad-task-6227f408-bee9-77fa-529f-924164f42b80-group-api-count-api-9001-sidecar-proxy xdsVersion=v3
    agent.envoy: Error handling ADS delta stream: xdsVersion=v3 error="rpc error: code = PermissionDenied desc = permission denied"
    
    # From envoy stderr in the envoy sidecar container (log level is trace):
    DeltaAggregatedResources gRPC config stream closed: 7, permission denied
    gRPC update for type.googleapis.com/envoy.config.cluster.v3.Cluster failed
    gRPC update for type.googleapis.com/envoy.config.listener.v3.Listener failed
    Daniel Hix
    @ADustyOldMuffin
getting I/O timeouts on consul snapshot restore, any ideas? port 8300 is open and I can hit it from the leader container
    SBeard
    @etacalpha
    Has anyone setup a mongodb atlas connection via terminating gateway?
    Michael Aldridge
    @the-maldridge
    @blake is there a recommendation anywhere for how to distribute certificates to consul servers when running immutably?
    Gaurav Shankar
    @gauravshankarcan_gitlab
having an issue: " agent.server.memberlist.wan: memberlist: Failed to resolve consul-consul-server-1.dc1/2605::::::8302: lookup 2605:::::::8302: no such host" .. the issue is there are no brackets around the ipv6, like [2605:::]8302 . how do I introduce this in the wan lookup? environment is an openshift ipv6 cluster
    kkbe
    @kkbe

    hello. I have a working mesh gateway setup with wan federation. from both datacenters I can curl /v1/catalog/services?dc=<other-dc> and see the services running there, and "consul members -wan" shows servers in both dcs
    however, services themselves (e.g. the socat example) cannot connect between the DCs
    The only errors I see in the consul logs are on the secondary DC where there are lots of warnings:
    Err :connection error: desc = "transport: Error while dialing dial tcp <internal ip of server in primary dc>:8300: i/o timeout"

    I outlined the issue here https://discuss.hashicorp.com/t/unable-to-connect-services-between-datacenters-despite-working-mesh-gateways/28721
    I would really appreciate any help as I'm completely stuck
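One thing worth checking, purely as a guess from the symptoms: sidecar proxies only route cross-DC traffic through the mesh gateways when a mesh gateway mode is configured. A proxy-defaults config entry like the following (applied via `consul config write`) sends all sidecar upstream traffic through the local gateway; treat it as a sketch, not a confirmed fix for this thread:

```hcl
Kind = "proxy-defaults"
Name = "global"
MeshGateway {
  Mode = "local"
}
```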