$ kubectl get services --all-namespaces
NAMESPACE      NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default        kubernetes             ClusterIP   10.152.183.1    <none>        443/TCP                  0h
kube-system    kube-dns               ClusterIP   10.152.183.10   <none>        53/UDP,53/TCP,9153/TCP   0h
default        pork                   NodePort    10.152.183.83   <none>        8100:30534/TCP           0h
cert-manager   cert-manager           ClusterIP   10.152.183.64   <none>        9402/TCP                 9h
cert-manager   cert-manager-webhook   ClusterIP   10.152.183.78   <none>        443/TCP                  9h
$ microk8s stop
$ microk8s start
$ kubectl logs -n ingress nginx-ingress-microk8s-controller-httk4
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.33.0
Build: git-589187c35
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.0
-------------------------------------------------------------------------------
W1229 00:03:29.628878 6 flags.go:249] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W1229 00:03:29.629194 6 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1229 00:03:29.629963 6 main.go:218] Creating API client for https://10.152.183.1:443
I1229 00:03:29.691154 6 main.go:262] Running in Kubernetes cluster version v1.19+ (v1.19.6-34+e6d0076d2a0033) - git (clean) commit e6d0076d2a0033fd25db4a4abab19184d93d6ed4 - platform linux/arm64
I1229 00:03:30.681725 6 main.go:103] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
I1229 00:03:30.690824 6 main.go:111] Enabling new Ingress features available since Kubernetes v1.18
W1229 00:03:30.707441 6 main.go:123] No IngressClass resource with name nginx found. Only annotation will be used.
I1229 00:03:30.811919 6 nginx.go:263] Starting NGINX Ingress controller
I1229 00:03:30.869911 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress", Name:"nginx-ingress-udp-microk8s-conf", UID:"2b71c1c1-c29b-4789-858a-0ccecd7ed96a", APIVersion:"v1", ResourceVersion:"396271", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress/nginx-ingress-udp-microk8s-conf
I1229 00:03:30.870024 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress", Name:"nginx-load-balancer-microk8s-conf", UID:"314ec221-71de-44d1-a373-fb2af04eb503", APIVersion:"v1", ResourceVersion:"396262", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress/nginx-load-balancer-microk8s-conf
I1229 00:03:30.921393 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress", Name:"nginx-ingress-tcp-microk8s-conf", UID:"ae5a3907-1d27-4bd5-bb2b-6090bccacd7f", APIVersion:"v1", ResourceVersion:"396269", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress/nginx-ingress-tcp-microk8s-conf
I1229 00:03:31.932054 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"pork", UID:"247bbfdd-898e-42b9-a64f-26e32c175ba6", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"396885", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/pork
I1229 00:03:31.932294 6 backend_ssl.go:66] Adding Secret "default/pork-tls-secret" to the local store
I1229 00:03:32.013992 6 nginx.go:307] Starting NGINX process
I1229 00:03:32.014104 6 leaderelection.go:242] attempting to acquire leader lease ingress/ingress-controller-leader-nginx...
W1229 00:03:32.016244 6 controller.go:909] Service "default/pork" does not have any active Endpoint.
I1229 00:03:32.016486 6 controller.go:139] Configuration changes detected, backend reload required.
I1229 00:03:34.700522 6 controller.go:155] Backend successfully reloaded.
I1229 00:03:34.700696 6 controller.go:164] Initial sync, sleeping for 1 second.
W1229 00:03:35.867162 6 controller.go:909] Service "default/pork" does not have any active Endpoint.
I1229 00:03:37.194037 6 leaderelection.go:252] successfully acquired lease ingress/ingress-controller-leader-nginx
I1229 00:03:37.194107 6 status.go:86] new leader elected: nginx-ingress-microk8s-controller-httk4
W1229 00:03:39.200750 6 controller.go:909] Service "default/pork" does not have any active Endpoint.
W1229 00:03:42.533919 6 controller.go:909] Service "default/pork" does not have any active Endpoint.
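The repeating warning above usually means the Service's selector matches no ready pods. A quick check, assuming the Deployment behind "pork" carries an app=pork label (an assumption for illustration):

$ kubectl get endpoints pork
$ kubectl get pods -l app=pork -o wide

An empty ENDPOINTS column would confirm nginx has no backend to route to.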
Something doesn't seem quite right with the firewall settings in my kubesail-agent logs:
error: Gateway closed connection, reconnecting! { "reason": "io server disconnect" }
info: Connected to gateway socket! { "KUBESAIL_AGENT_GATEWAY_TARGET": "https://usw1.k8g8.com" }
info: Agent recieved new agent-data from gateway! {
  "clusterAddress": "pi-banana.jaytula.usw1.k8g8.com",
  "firewall": {
    "pi-banana.jaytula.usw1.k8g8.com": 1,
    "kanboard.codepasta.io": "0.0.0.0/0"
  }
}
I poked around and found a non-working link to the Firewall editor on the Kubectl Config page; it leads to https://kubesail.com/cluster/undefined/settings. Are we supposed to be able to edit the firewall? Does it matter? Or are these just stale settings from when my ingress controller was working?
Do I need to update my kubesail-agent deployment YAML? I don't see any new info in the docs (https://docs.kubesail.com/byoc/). Thanks for your help!
Hey @jaytula - you'll need to use the image version kubesail/agent:v0.20.1 and add the following section to the env of the kubesail-agent container:

- name: NODE_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
You can copy-paste it from https://byoc.kubesail.com/USERNAME.yaml - and you'll see a warning message in the logs about using HostPort routing :)
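For reference, that entry uses the Kubernetes downward API and sits under the agent container spec, roughly like this (a sketch; the container name and surrounding fields are assumptions, not the canonical byoc YAML):

containers:
  - name: agent  # container name assumed for illustration
    image: kubesail/agent:v0.20.1
    env:
      - name: NODE_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP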
@erulabs I ran kubectl apply -f https://byoc.kubesail.com/USERNAME.yaml and it appears to work better. But I ran into a problem: I tried spinning up another set of deployment/service/ingress with the same image (but a different name), and it appears the kubesail-agent did not do its magic for it. So I spun up a third deployment, but had this subdomain point to my IP address, and this worked. The YAML all looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  labels:
    app: demo
spec:
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: ghcr.io/jaytula/nginx-1:latest
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
    - name: demo
      protocol: TCP
      port: 8100
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - demo.codepasta.io
      secretName: demo-tls-secret
  rules:
    - host: demo.codepasta.io
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: demo
                port:
                  number: 8100
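To confirm everything landed and the certificate was issued, something like this should work (a sketch; it assumes cert-manager's ingress-shim names the Certificate after the TLS secret):

$ kubectl get deployment,service,ingress demo
$ kubectl describe certificate demo-tls-secret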
The domain demo.codepasta.io points to my IP address. kanboard.codepasta.io has an A record that points to the KubeSail gateway (not working; KubeSail serves an invalid cert). pork.codepasta.io also has an A record pointing to KubeSail (working).
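For what it's worth, the A records can be double-checked with dig (a sketch):

$ dig +short demo.codepasta.io
$ dig +short kanboard.codepasta.io
$ dig +short pork.codepasta.io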
Hi @erulabs, happy new year! After some weeks of inactivity I re-joined my cluster to KubeSail. Looks like a few new glitches have appeared. First, after verifying the cluster, the UI throws this error:
TypeError: Cannot read property 'verified' of undefined
at https://kubesail.com/static/js/14.1c471194.chunk.js:2:958416
at Array.map (<anonymous>)
at r.value (https://kubesail.com/static/js/14.1c471194.chunk.js:2:958202)
at qa (https://kubesail.com/static/js/15.c1ca2479.chunk.js:2:331079)
at za (https://kubesail.com/static/js/15.c1ca2479.chunk.js:2:330872)
at _s (https://kubesail.com/static/js/15.c1ca2479.chunk.js:2:366611)
at gu (https://kubesail.com/static/js/15.c1ca2479.chunk.js:2:358020)
at mu (https://kubesail.com/static/js/15.c1ca2479.chunk.js:2:357945)
at su (https://kubesail.com/static/js/15.c1ca2479.chunk.js:2:354954)
at https://kubesail.com/static/js/15.c1ca2479.chunk.js:2:306325
If I refresh the UI, the error goes away and the new cluster looks and works fine.
After I install ingress-nginx and cert-manager via the UI, I try to deploy an nginx app from the template, and I also create a service and ingress from the nginx App -> Network tab. Watching the agent log, the agent doesn't appear to notice the new ingress. Next I restart the agent; now the agent picks up the ingress, but there is also a warning in the log that repeats every 10 seconds or so:
(2021-01-12T05:50:36.314Z) info: kubesail-agent starting! { "version": "0.20.1" }
(2021-01-12T05:50:36.351Z) info: Connecting to Kubernetes API...
(2021-01-12T05:50:37.026Z) warn: Unable to determine Ingress Controller! You can install an ingress controller via the KubeSail.com interface!
(2021-01-12T05:50:37.980Z) info: Registering with KubeSail...
(2021-01-12T05:50:38.574Z) info: Connected to gateway socket! { "KUBESAIL_AGENT_GATEWAY_TARGET": "https://usw1.k8g8.com" }
(2021-01-12T05:50:38.645Z) info: Agent recieved new agent-data from gateway! {
  "clusterAddress": "pi43x7k3s.dklinkman.usw1.k8g8.com",
  "firewall": {
    "pi43x7k3s.dklinkman.usw1.k8g8.com": 1
  }
}
(2021-01-12T05:50:38.661Z) info: KubeSail Agent registered and ready! KubeSail support information: {
  "clusterAddress": "pi43x7k3s.dklinkman.usw1.k8g8.com",
  "agentKey": "a0737f613f7f19fd7bac865bb35fd8d7",
  "version": "0.20.1"
}
(2021-01-12T05:50:38.983Z) info: Agent recieved new agent-data from gateway! {
  "clusterAddress": "pi43x7k3s.dklinkman.usw1.k8g8.com",
  "firewall": {
    "pi43x7k3s.dklinkman.usw1.k8g8.com": 1,
    "nginx.pi43x7k3s.dklinkman.usw1.k8g8.com": "0.0.0.0/0"
  }
}
(2021-01-12T05:50:46.303Z) warn: Received request, but no Ingress controller was found! You can install one on KubeSail.com in your cluster settings page. { "host": "nginx.pi43x7k3s.dklinkman.usw1.k8g8.com" }
(2021-01-12T05:50:47.983Z) warn: Received request, but no Ingress controller was found! You can install one on KubeSail.com in your cluster settings page. { "host": "nginx.pi43x7k3s.dklinkman.usw1.k8g8.com" }
(2021-01-12T05:50:56.634Z) warn: Received request, but no Ingress controller was found! You can install one on KubeSail.com in your cluster settings page. { "host": "nginx.pi43x7k3s.dklinkman.usw1.k8g8.com" }
(2021-01-12T05:51:06.931Z) warn: Received request, but no Ingress controller was found! You can install one on KubeSail.com in your cluster settings page. { "host": "nginx.pi43x7k3s.dklinkman.usw1.k8g8.com" }
and also the TLS certificate doesn't get finalized, because cert-manager is stuck in an HTTP 503 error loop. Restarting the agent again doesn't help.
At this point I downgrade the 0.20.1 agent to 0.19.0, and then everything starts to work normally. The pending cert is finalized and I can access the application via the gateway k8g8.com URL. Still on the 0.19.0 agent, I can deploy other apps and expose them to the Internet. With new apps the agent picks up the ingress right away, and the TLS certificate is completed in about 30 seconds.
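When a cert hangs like that, cert-manager's intermediate resources usually show where it is stuck (a sketch; requires the cert-manager CRDs to be installed):

$ kubectl get certificates,orders,challenges --all-namespaces
$ kubectl describe challenge <challenge-name>   # shows why the HTTP-01 check is failing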
Looks like a lot of the errors have gone away. I still get this in the agent log right after joining the cluster, but the error goes away once cert-manager is installed:
(2021-01-12T15:07:29.017Z) info: kubesail-agent starting! { "version": "0.21.0" }
(2021-01-12T15:07:29.099Z) info: Connecting to Kubernetes API...
(2021-01-12T15:07:30.657Z) warn: Unable to create ClustuerIssuer, cert-manager is probably still starting up. Retrying in 30 seconds. conversion webhook for cert-manager.io/v1alpha2, Kind=ClusterIssuer failed: Post "https://cert-manager-webhook.cert-manager.svc:443/convert?timeout=30s": service "cert-manager-webhook" not found {
  "code": 500,
  "stack": "Error: conversion webhook for cert-manager.io/v1alpha2, Kind=ClusterIssuer failed: Post \"https://cert-manager-webhook.cert-manager.svc:443/convert?timeout=30s\": service \"cert-manager-webhook\" not found\n at /home/node/app/node_modules/kubernetes-client/backends/request/client.js:231:25\n at Request._callback (/home/node/app/node_modules/kubernetes-client/backends/request/client.js:168:14)\n at Request.self.callback (/home/node/app/node_modules/request/request.js:185:22)\n at Request.emit (events.js:315:20)\n at Request.<anonymous> (/home/node/app/node_modules/request/request.js:1154:10)\n at Request.emit (events.js:315:20)\n at IncomingMessage.<anonymous> (/home/node/app/node_modules/request/request.js:1076:12)\n at Object.onceWrapper (events.js:421:28)\n at IncomingMessage.emit (events.js:327:22)\n at endReadableNT (internal/streams/readable.js:1327:12)"
}
(2021-01-12T15:07:30.661Z) info: Registering with KubeSail...
(2021-01-12T15:07:31.422Z) info: Connected to gateway socket! { "KUBESAIL_AGENT_GATEWAY_TARGET": "https://usw1.k8g8.com" }
(2021-01-12T15:08:13.521Z) info: Agent recieved new agent-data from gateway! {
  "clusterAddress": "rpi43x7k3s.dklinkman.usw1.k8g8.com",
  "firewall": {
    "rpi43x7k3s.dklinkman.usw1.k8g8.com": 1
  }
}
(2021-01-12T15:08:13.533Z) info: KubeSail Agent registered and ready! KubeSail support information: {
  "clusterAddress": "rpi43x7k3s.dklinkman.usw1.k8g8.com",
  "agentKey": "a5878b6019bb003adde6ef2864d8c397",
  "version": "0.21.0"
}
(2021-01-12T15:08:31.606Z) warn: Unable to create ClustuerIssuer, cert-manager is probably still starting up. Retrying in 30 seconds. conversion webhook for cert-manager.io/v1alpha2, Kind=ClusterIssuer failed: Post "https://cert-manager-webhook.cert-manager.svc:443/convert?timeout=30s": service "cert-manager-webhook" not found {
  "code": 500,
  "stack": "Error: conversion webhook for cert-manager.io/v1alpha2, Kind=ClusterIssuer failed: Post \"https://cert-manager-webhook.cert-manager.svc:443/convert?timeout=30s\": service \"cert-manager-webhook\" not found\n at /home/node/app/node_modules/kubernetes-client/backends/request/client.js:231:25\n at Request._callback (/home/node/app/node_modules/kubernetes-client/backends/request/client.js:168:14)\n at Request.self.callback (/home/node/app/node_modules/request/request.js:185:22)\n at Request.emit (events.js:315:20)\n at Request.<anonymous> (/home/node/app/node_modules/request/request.js:1154:10)\n at Request.emit (events.js:315:20)\n at IncomingMessage.<anonymous> (/home/node/app/node_modules/request/request.js:1076:12)\n at Object.onceWrapper (events.js:421:28)\n at IncomingMessage.emit (events.js:327:22)\n at endReadableNT (internal/streams/readable.js:1327:12)"
}
I still have to restart the agent to pick up a new ingress, but now I have to restart it for every new ingress. With 0.19.0 I only had to restart the agent once, for the first ingress, and the rest would be picked up automatically.
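Until that's fixed, the restart can at least be scripted (a sketch; the namespace and Deployment name here are assumptions based on a default install):

$ kubectl -n kubesail-agent rollout restart deployment kubesail-agent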