The gitea-mysql image should probably be replaced by the standard mysql or mariadb images - they might need a bit of configuring, but they're usually a better bet than trying to get some customized image to work :)
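As a rough sketch of what that swap might look like (the secret name and database name here are hypothetical; the MYSQL_* variables are the ones documented for the official mysql/mariadb images):

```yaml
# Illustrative only: standard mariadb image in place of a
# customized gitea-mysql image. "db-secret" is a hypothetical
# Secret you would create yourself.
containers:
- name: db
  image: mariadb:10.5
  env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: root-password
  - name: MYSQL_DATABASE
    value: gitea
```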
I ran microk8s enable ingress. The pod for the controller is in the ingress namespace, so I modified the environment variable INGRESS_CONTROLLER_NAMESPACE for kubesail-agent. The pod appears to be managed by a daemonset.apps resource. I'm not sure what to set INGRESS_CONTROLLER_ENDPOINT to. Currently the agent cannot find the Ingress controller:

warn: Unable to determine Ingress Controller endpoint and namespace - trying again in 30 seconds. You can install an ingress controller via the KubeSail.com interface!
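For reference, the namespace override I added looks roughly like this (a sketch against the kubesail-agent Deployment; INGRESS_CONTROLLER_ENDPOINT is left out because that's exactly the value I don't know):

```yaml
# Sketch: env entries on the kubesail-agent container.
# INGRESS_CONTROLLER_NAMESPACE matches the microk8s "ingress"
# namespace; I still need to figure out what
# INGRESS_CONTROLLER_ENDPOINT should be set to.
env:
- name: INGRESS_CONTROLLER_NAMESPACE
  value: ingress
```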
ingress-nginx-controller
$ kubectl get endpoints --all-namespaces
NAMESPACE      NAME                      ENDPOINTS                                            AGE
default        kubernetes                192.168.1.147:16443                                  40h
kube-system    kube-dns                  10.1.133.130:53,10.1.133.130:53,10.1.133.130:9153    40h
default        pork                      10.1.133.132:80                                      40h
cert-manager   cert-manager              10.1.133.141:9402                                    18h
cert-manager   cert-manager-webhook      10.1.133.142:10250                                   18h
kube-system    kube-scheduler            <none>                                               40h
kube-system    kube-controller-manager   <none>                                               40h
kubectl -n ingress get pods ?
$ kubectl get services --all-namespaces
NAMESPACE      NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default        kubernetes             ClusterIP   10.152.183.1    <none>        443/TCP                  0h
kube-system    kube-dns               ClusterIP   10.152.183.10   <none>        53/UDP,53/TCP,9153/TCP   0h
default        pork                   NodePort    10.152.183.83   <none>        8100:30534/TCP           0h
cert-manager   cert-manager           ClusterIP   10.152.183.64   <none>        9402/TCP                 9h
cert-manager   cert-manager-webhook   ClusterIP   10.152.183.78   <none>        443/TCP                  9h
microk8s {stop,start}
$ kubectl logs -n ingress nginx-ingress-microk8s-controller-httk4
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.33.0
Build: git-589187c35
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.0
-------------------------------------------------------------------------------
W1229 00:03:29.628878 6 flags.go:249] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W1229 00:03:29.629194 6 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1229 00:03:29.629963 6 main.go:218] Creating API client for https://10.152.183.1:443
I1229 00:03:29.691154 6 main.go:262] Running in Kubernetes cluster version v1.19+ (v1.19.6-34+e6d0076d2a0033) - git (clean) commit e6d0076d2a0033fd25db4a4abab19184d93d6ed4 - platform linux/arm64
I1229 00:03:30.681725 6 main.go:103] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
I1229 00:03:30.690824 6 main.go:111] Enabling new Ingress features available since Kubernetes v1.18
W1229 00:03:30.707441 6 main.go:123] No IngressClass resource with name nginx found. Only annotation will be used.
I1229 00:03:30.811919 6 nginx.go:263] Starting NGINX Ingress controller
I1229 00:03:30.869911 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress", Name:"nginx-ingress-udp-microk8s-conf", UID:"2b71c1c1-c29b-4789-858a-0ccecd7ed96a", APIVersion:"v1", ResourceVersion:"396271", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress/nginx-ingress-udp-microk8s-conf
I1229 00:03:30.870024 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress", Name:"nginx-load-balancer-microk8s-conf", UID:"314ec221-71de-44d1-a373-fb2af04eb503", APIVersion:"v1", ResourceVersion:"396262", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress/nginx-load-balancer-microk8s-conf
I1229 00:03:30.921393 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress", Name:"nginx-ingress-tcp-microk8s-conf", UID:"ae5a3907-1d27-4bd5-bb2b-6090bccacd7f", APIVersion:"v1", ResourceVersion:"396269", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress/nginx-ingress-tcp-microk8s-conf
I1229 00:03:31.932054 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"pork", UID:"247bbfdd-898e-42b9-a64f-26e32c175ba6", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"396885", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/pork
I1229 00:03:31.932294 6 backend_ssl.go:66] Adding Secret "default/pork-tls-secret" to the local store
I1229 00:03:32.013992 6 nginx.go:307] Starting NGINX process
I1229 00:03:32.014104 6 leaderelection.go:242] attempting to acquire leader lease ingress/ingress-controller-leader-nginx...
W1229 00:03:32.016244 6 controller.go:909] Service "default/pork" does not have any active Endpoint.
I1229 00:03:32.016486 6 controller.go:139] Configuration changes detected, backend reload required.
I1229 00:03:34.700522 6 controller.go:155] Backend successfully reloaded.
I1229 00:03:34.700696 6 controller.go:164] Initial sync, sleeping for 1 second.
W1229 00:03:35.867162 6 controller.go:909] Service "default/pork" does not have any active Endpoint.
I1229 00:03:37.194037 6 leaderelection.go:252] successfully acquired lease ingress/ingress-controller-leader-nginx
I1229 00:03:37.194107 6 status.go:86] new leader elected: nginx-ingress-microk8s-controller-httk4
W1229 00:03:39.200750 6 controller.go:909] Service "default/pork" does not have any active Endpoint.
W1229 00:03:42.533919 6 controller.go:909] Service "default/pork" does not have any active Endpoint.
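The repeated "does not have any active Endpoint" warning means no ready pod's labels match the Service's selector. The matching rule is a plain subset check, sketched here purely for illustration (this is not the actual controller code, and the pork selector is assumed):

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """A Service selects a pod when every key/value pair in the
    Service's selector also appears in the pod's labels."""
    return all(pod_labels.get(key) == value for key, value in selector.items())

# Assuming the pork Service uses selector {"app": "pork"}:
print(selector_matches({"app": "pork"}, {"app": "pork", "pod-template-hash": "abc"}))  # True
print(selector_matches({"app": "pork"}, {"app": "pork-v2"}))  # False -> no endpoints, hence the warning
```

So the warning usually means either no pods carry the selected labels, or the matching pods are not passing their readiness checks.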
Something doesn't seem quite right with the firewall settings in my kubesail-agent logs:
error: Gateway closed connection, reconnecting! { "reason": "io server disconnect" }
info: Connected to gateway socket! { "KUBESAIL_AGENT_GATEWAY_TARGET": "https://usw1.k8g8.com" }
info: Agent recieved new agent-data from gateway! {
"clusterAddress": "pi-banana.jaytula.usw1.k8g8.com",
"firewall": {
"pi-banana.jaytula.usw1.k8g8.com": 1,
"kanboard.codepasta.io": "0.0.0.0/0"
}
}
I poked around and found a non-working link to the Firewall editor on the Kubectl Config page. It leads to https://kubesail.com/cluster/undefined/settings. Are we supposed to be able to edit the firewall? Does it matter? Or are these just stale settings from when my ingress controller was working?
Do I need to change something in the kubesail-agent deployment YAML? I don't see any new info in the docs, https://docs.kubesail.com/byoc/. Thanks for your help!
Hey @jaytula - you'll need to use the image version kubesail/agent:v0.20.1 and add the following section to the env of the kubesail agent:

env:
- name: NODE_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP

You can copy-paste it from https://byoc.kubesail.com/USERNAME.yaml - and you'll see a warning message in the logs about using HostPort routing :)
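In context, that env entry lands in the agent Deployment's container spec roughly like this (surrounding fields abbreviated; the image tag is the one from the reply above):

```yaml
# Sketch of where the NODE_IP entry sits in the kubesail-agent
# Deployment; other container fields are elided.
containers:
- name: kubesail-agent
  image: kubesail/agent:v0.20.1
  env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
```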
@erulabs I ran kubectl apply -f https://byoc.kubesail.com/USERNAME.yaml and it appears to work better. But I ran into a problem: I tried spinning up another set of deployment/service/ingress with the same image (but a different name), and it appears the kubesail-agent did not do its magic for it. So I spun up a third deployment, but had this subdomain point to my IP address, and that worked. The YAML all looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  labels:
    app: demo
spec:
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: ghcr.io/jaytula/nginx-1:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - name: demo
    protocol: TCP
    port: 8100
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - demo.codepasta.io
    secretName: demo-tls-secret
  rules:
  - host: demo.codepasta.io
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: demo
            port:
              number: 8100
The domain demo.codepasta.io points to my IP address. kanboard.codepasta.io has an A record that points to the KubeSail gateway (not working; invalid KubeSail cert). pork.codepasta.io has an A record also pointing to KubeSail (working).