{ "protocol": "tcp","host": "production-auto-deploy.api-250-production","port": 5000,"path": "" }
@idanilt
I misled you: the service call is TCP, the discovery is UDP
I'm looking into what's up with the discovery atm
{ "protocol": "tcp","host": "0.0.0.0","port": 5000,"path": "" }
so that the service would listen on all interfaces. This works in my docker-compose
setup, but apparently breaks the UDP service discovery (0.0.0.0 isn't a valid hostname to advertise; it resolves to 127.0.0.1). When I changed it to the one that I shared a few messages back, it breaks the TCP server.
With the proper hostname, netstat shows only the UDP listener:
/usr/src/app # netstat -nlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:5000 0.0.0.0:* 24/node
I figured out why the docker one works. The hostname I'm using as part of my docker-compose
setup is the hostname that is assigned to the seed pod that I spin up. Since it's its own hostname it binds the TCP service to its own IP address and is happy.
In k8s, when I assign the hostname production-auto-deploy.api-250-production,
that is the hostname of the Service that points to the pod running the microservice, and thus isn't the hostname of the pod itself. I bet there is an error binding to the resolved IP address, since it isn't the pod's IP.
/usr/src/app # netstat -nlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 172.23.0.2:5000 0.0.0.0:* LISTEN 24/node
⋮
udp 0 0 0.0.0.0:5000 0.0.0.0:* 24/node
Every micro-service instance creates 2 servers:
The first is discovery (SWIM/gossip over UDP), the second is for service calls (RSocket over TCP or WS).
The nodes connect to the seed and sync up the list of "service call" servers.
When you create a "proxy" you are actually creating a client (with the same methods as the service instance).
When you call proxy.someMethod() it will try to connect to the RSocket server using the address it got from the discovery.
If proxy.someMethod() fails, it's probably because the address isn't reachable. With 0.0.0.0 it fails, since that resolves to 127.0.0.1.
Here is a diagram of my infrastructure setup. It appears the Proxy Pod communicates with the Microservice Pod [MSP] Service but gets stopped there (?). The X
represents that the data isn't getting through to the MSP. I've used tcpdump
on both the Proxy Pod and the MSP: the Proxy Pod makes a request via TCP but it fails, and the MSP never receives the request. The UDP traffic seems to work until it gets back the MSP hostname (which I have set to 0.0.0.0
to get it to bind the TCP service).
Proxy Pod* ══Request (TCP)══⇒ Service [MSP]† ════X════⇒ Microservice Pod‡ [MSP] (Seed)
4000/TCP 5000/TCP 5000/TCP
4000/UDP 5000/UDP 5000/UDP
* Proxy Pod
Given the service address of its K8s service, just like the MSP. Its diagram is the inverse TCP flow of the one above.
† MSP Service
Service front end for the MSP that exposes a consistent hostname that will map to the MSP regardless of whether its pod changes names, etc.
‡ MSP
Pod hosting the microservice; it also acts as the seed pod. Interestingly, the UDP traffic between the MSP and the Proxy Pod seems to work, aside from the wrong hostname on the MSP end.
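One common way out of this bind-vs-advertise mismatch (a hedged sketch, not something suggested in this thread) is to keep the TCP server bound to 0.0.0.0 but advertise the pod's own IP, injected via the Kubernetes downward API:

```yaml
# Hypothetical pod spec fragment: expose the pod IP as an env var so the
# service can bind 0.0.0.0 yet hand status.podIP to discovery as the
# advertised address.
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```

This keeps the listener reachable on every interface while giving peers an address they can actually dial, which is exactly the distinction the thread keeps running into.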