    jflamy-dev
    @jflamy-dev
    That did it. Now tackling the connection to my custom domain.
    jflamy-dev
    @jflamy-dev
    @erulabs I gave up on microk8s on a 2GB DigitalOcean basic droplet. k3s seems less invasive. I installed k3s without traefik, and then used the KubeSail interface to install nginx and cert-manager.
    I get some strange 404 errors after adding my two ingresses (I have two webapps in the cluster).
    pod.go:58] cert-manager/controller/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="officials.jflamy.dev" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-6j7r9" "related_resource_namespace"="default" "resource_kind"="Challenge" "resource_name"="owlcms-ingress-1581768128-2354231461-1002489940" "resource_namespace"="default" "type"="http-01" 
    service.go:43] cert-manager/controller/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="officials.jflamy.dev" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-57tw9" "related_resource_namespace"="default" "resource_kind"="Challenge" "resource_name"="owlcms-ingress-1581768128-2354231461-1002489940" "resource_namespace"="default" "type"="http-01" 
    ingress.go:91] cert-manager/controller/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="officials.jflamy.dev" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-sm8gv" "related_resource_namespace"="default" "resource_kind"="Challenge" "resource_name"="owlcms-ingress-1581768128-2354231461-1002489940" "resource_namespace"="default" "type"="http-01" 
    sync.go:185] cert-manager/controller/challenges "msg"="propagation check failed" "error"="wrong status code '404', expected '200'" "dnsName"="officials.jflamy.dev" "resource_kind"="Challenge" "resource_name"="owlcms-ingress-1581768128-2354231461-1002489940" "resource_namespace"="default" "type"="http-01"
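
    For reference, a k3s install that skips the bundled traefik typically looks like the sketch below (current k3s releases use the --disable flag; older ones used --no-deploy):

    # install k3s without the bundled traefik ingress controller
    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
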
    Jean-François Lamy
    @jflamy
    @erulabs after fixing the CNAME, which was broken and not pointing to the correct cluster, I now get 503 errors instead of 404s when a challenge is made.
    Seandon Mooy
    @erulabs
    Ack :/ I wonder if restarting the agent again does the trick... If you can browse to the domain and get your app (and not a 503), that's a good indication it's not the agent. If you get a 503, then the cert will not be able to work. The certs use the same domain name as your app but with a /.well-known sort of URL path.
    If restarting the agent -does- work, that's a sort of urgent issue I'll try to fix ASAP... We actually have a couple of patches in the pipeline for the agent that might improve that.
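
    For reference, the /.well-known path mentioned above can be checked by hand; a sketch (the token is hypothetical - cert-manager generates a real one per challenge):

    # a working solver answers 200 on the challenge path; a 503 here means
    # the request never reached the solver pod
    curl -i http://officials.jflamy.dev/.well-known/acme-challenge/<token>
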
    Carl J. Mosca
    @carljmosca
    how can we clean up what appear to be “stale” domains? I have created/deleted/recreated (with a new name) a cluster and noticed that the old dynamic domain is hanging around - is there a way for me to delete it?
    specifically: k3s1.carljmosca.dns.k8g8.com
    Seandon Mooy
    @erulabs
    Ah - that's a good idea - those should really be deleted when a cluster is deleted... our Dynamic DNS domains are pretty new
    I'll clean that up for you and add some code to delete those when a cluster is deleted
    Carl J. Mosca
    @carljmosca
    Excellent. Thank you.
    jflamy-dev
    @jflamy-dev

    @erulabs BYOC cluster on a DigitalOcean standard droplet. Installed the latest version of k3s, without traefik. Using KubeSail to install nginx and cert-manager, and to define ingresses (since you've figured out all the issuer magic and suchlike).

    NAME                        CLASS    HOSTS                  ADDRESS          PORTS     AGE
    publicresults               <none>   public.jflamy.dev      142.93.154.114   80, 443   89s
    cm-acme-http-solver-xb74s   <none>   public.jflamy.dev      142.93.154.114   80        84s
    owlcms                      <none>   officials.jflamy.dev                    80, 443   25s
    cm-acme-http-solver-jmstz   <none>   officials.jflamy.dev                    80        23s

    If I describe the solvers, the HTTP paths are indeed correct for the challenges, and there are annotations to whitelist any possible source. What is peculiar is that I get a 503 error when attempting to reach the challenge. Normally the longest path takes precedence, so the challenges should go first. Is there a particular way to tell my two main ingresses to NOT listen on port 80? Any other idea as to why the error would occur? The problem is the same whether I use A records directly to the cluster or a CNAME through the KubeSail tunnel.
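
    For reference, with ingress-nginx an Ingress can't opt out of port 80 entirely, but the HTTP-to-HTTPS redirect is controlled per Ingress via an annotation; a minimal sketch using the hosts above (the backend Service name and port are assumptions):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: owlcms
      annotations:
        # ingress-nginx annotation: redirect plain-HTTP requests to HTTPS
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
    spec:
      rules:
        - host: officials.jflamy.dev
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: owlcms   # assumed Service name
                    port:
                      number: 80   # assumed Service port
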

    Dan Pastusek
    @PastuDan
    @jflamy-dev those conflicts shouldn't be a problem; as you said, the most specific path takes precedence. But I don't think the nginx we install works with A records pointed directly to the cluster. That requires nginx listening on a NodePort (this creates conflicts with managed DigitalOcean clusters, and I assume it also does with k3s)
    Go ahead and point your domains back to the KubeSail CNAME, and I'll check some config settings on our end
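
    For reference, "pointing back" means a DNS record along these lines (illustrative - the exact gateway target is shown on the KubeSail Domains page):

    ; custom domain -> KubeSail gateway (tunneled), instead of an A record to 142.93.154.114
    officials.jflamy.dev.  IN  CNAME  officials.owlcms.jflamy-dev.usw1.k8g8.com.
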
    Dan Pastusek
    @PastuDan
    Can you try deleting and recreating the ingresses? I don't see those hostnames pointed to your cluster on our end. Sometimes if they were created before the KubeSail agent was installed, they can get into a weird state
    Also, feel free to add me (pastudan) to your cluster - I would be happy to dig in a bit
    jflamy-dev
    @jflamy-dev
    @PastuDan I'm at wits' end. I've deleted the cluster and recreated it using the built-in names, trying to make sure that works. Now I get something slightly different.
    The log below is the bottom part of kubectl describe challenge, followed by the ingresses
    Status:
      Presented:   true
      Processing:  true
      Reason:      Waiting for http-01 challenge propagation: failed to perform self check GET request 'http://officials.owlcms.jflamy-dev.dns.k8g8.com/.well-known/acme-challenge/XGo8MW96CGKL4NSOxxlk356C7v9kjfRbUBEcERxDiXw': Get "http://officials.owlcms.jflamy-dev.dns.k8g8.com/.well-known/acme-challenge/XGo8MW96CGKL4NSOxxlk356C7v9kjfRbUBEcERxDiXw": dial tcp 142.93.154.114:80: connect: connection refused
      State:       pending
    Events:
      Type    Reason     Age   From          Message
      ----    ------     ----  ----          -------
      Normal  Started    25s   cert-manager  Challenge scheduled for processing
      Normal  Presented  23s   cert-manager  Presented challenge using http-01 challenge mechanism
    root@owlcms-tor1-01:~# kubectl get ingress
    Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
    NAME                        CLASS    HOSTS                                      ADDRESS          PORTS     AGE
    publicresults               <none>   results.owlcms.jflamy-dev.dns.k8g8.com     142.93.154.114   80, 443   22m
    owlcms                      <none>   officials.owlcms.jflamy-dev.dns.k8g8.com   142.93.154.114   80, 443   21m
    cm-acme-http-solver-5tz6q   <none>   officials.owlcms.jflamy-dev.dns.k8g8.com   142.93.154.114   80        77s
    cm-acme-http-solver-dmvxh   <none>   results.owlcms.jflamy-dev.dns.k8g8.com     142.93.154.114   80        79s
    the funny part, if you scroll to the end of the challenge output, is 142.93.154.114:80: connect: connection refused
    Dan Pastusek
    @PastuDan
    ah yes, the dns.k8g8.com addresses won't work unless you have a NodePort as mentioned above. Can you select one of the domains ending in usw1.k8g8.com?
    that ensures they are proxied through our gateway
    jflamy-dev
    @jflamy-dev
    @PastuDan using the built-in usw1 address seems to work, indeed. Do you have an example of an nginx with a NodePort? I'm running k3s straight on the VM, directly behind a firewall. I will read up on the concept -- I had only used a NodePort to provide direct access to a non-standard port before, and would have thought that was what nginx was all about in a bare-metal scenario...
    Seandon Mooy
    @erulabs
    @jflamy-dev - Most of the time the nginx ingress controller will bind a port on the host system automatically, but it depends (there are plenty of options there). The "dns.k8g8.com" address is a "dynamic DNS" address, so it will point at the IP address of your firewall. You can forward a port on your router to make that work (ie: 142.93.154.114:80). The advantage of the gateway addresses is that we tunnel the traffic to the agent directly, so you can avoid all the messy bits about port-forwarding. We're going to make that clearer in the UI soon, to indicate which domain is "tunneled" and which one would require port-forwarding (ie: points at your public IP instead of our gateway/agent system). You're right that the host-port is more or less the "bare-metal" option, which in Kubernetes terms is the same as saying "not using a cloud provider". You can check out a NodePort example here: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
    A home cluster is, in Kubernetes terms, "bare-metal" - that term is a bit problematic, though - but essentially it just means "I don't have a magic cloud load balancer that Kubernetes knows how to talk to and program automatically"
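
    A minimal NodePort Service for the ingress-nginx controller, in the spirit of the bare-metal guide linked above (the namespace, name, and selector assume the upstream ingress-nginx deployment; the nodePort values are arbitrary picks from the default 30000-32767 range):

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      type: NodePort
      selector:
        app.kubernetes.io/name: ingress-nginx   # assumed controller pod label
      ports:
        - name: http
          port: 80
          targetPort: 80
          nodePort: 30080   # external traffic must then be forwarded to :30080
        - name: https
          port: 443
          targetPort: 443
          nodePort: 30443
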
    jflamy-dev
    @jflamy-dev
    @erulabs I read that nginx+NodePort example, and I haven't quite figured it out yet. 142.93.154.114 is the public address of my DigitalOcean VM. There is a DO-provided firewall that blocks all traffic except ports 22, 80, and 443, so it's not a port-forwarding issue.
    What reinforces my confusion is https://sysadmins.co.za/https-using-letsencrypt-and-traefik-with-k3s/ which makes no reference to a load balancer sitting between the firewall and the nodes. It seems that traefik does things differently, and would actually be simpler for me.
    Seandon Mooy
    @erulabs
    Most people assume Kubernetes is running in a cloud somewhere, so the load balancer usually has its own public IP and directly sends traffic to the pods (containers) in the cluster. When you're using your own machines, it's up to you to get the internet to nginx somehow - so typically that involves telling nginx to bind to the host's ports 80 and 443 (ie: hostPort/NodePort) and then a firewall rule that forwards traffic to your PC from the internet.
    Of course, our gateway/agent does all that for you, and acts somewhat like a load balancer that way too.
    For home use, the gateway/agent is probably recommended :)
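
    A sketch of the hostPort variant described above - the controller pod binds the node's 80/443 directly, so no separate forwarding rule is needed (a fragment of a controller pod spec; the image tag is illustrative):

    # fragment of an ingress-nginx controller Deployment/DaemonSet pod spec
    containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4  # illustrative tag
        ports:
          - containerPort: 80
            hostPort: 80    # bind the node's port 80
          - containerPort: 443
            hostPort: 443   # bind the node's port 443
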
    jflamy-dev
    @jflamy-dev
    @erulabs I have two ingresses configured through the agent, and only one is working. Not sure why - they were both configured through the interface. Backend, services, and endpoints all appear correct.
    (https://officials.owlcms.jflamy-dev.usw1.k8g8.com/ is giving "bad gateway", as if nginx were waiting on a connection to the backend.) Whereas https://public.owlcms.jflamy-dev.usw1.k8g8.com works as expected.
    Seandon Mooy
    @erulabs
    so the "nginx/1.19.1" is a good hint that everything is working from the internet -> kubesail -> kubesail-agent -> nginx
    but the problem is between nginx and the pod that runs your app itself
    most of the time, this is because either the Ingress points at a service that doesn't exist, or the service selects no pods, or the pod isn't running properly (isn't hosting a web-app for example)
    You can try doing kubectl describe service service-name and seeing if there are "Endpoints" - if there is an IP address there, then the problem is on the Pod side, if there are none, then the Service doesn't select any app properly.
    The KubeSail dashboard somewhat helps with this, if you click on the service at https://kubesail.com/resources - it should tell you what pods it selects
    Most of the time though, assuming you clicked thru the interface to do this, the ingress -> service -> pod selection is good, and the problem is the container itself isn't actually hosting the service as expected (crashing, or not starting, etc etc)
    Hopefully that helps - you might try checking the logs of the app that isnt working properly.
    You can also read the logs of the nginx container (usually in the ingress-nginx namespace, and make a request - you'll see it will say something like "no upstream" or some error between nginx and your app)
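
    Putting those checks together as commands (the service, deployment, and label names are assumptions based on the ingresses above):

    kubectl describe service owlcms   # look for a non-empty "Endpoints:" line
    kubectl get endpoints owlcms      # no addresses => the selector matches no pods
    kubectl logs deploy/owlcms        # is the app itself crashing or not listening?
    # tail nginx while making a request; look for "no upstream"-style errors
    kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f
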
    Seandon Mooy
    @erulabs
    Hello everyone! We're transitioning this chat to Discord - you can join here: https://discord.gg/N3zNdp7jHc - We'll still be checking and answering questions here in the Gitter for a while until people have moved over, but feel free to join us there for faster responses! Thanks everyone!
    dklinkman
    @dklinkman
    cool. thanks for the heads up
    RyzeNGrind
    @RyzeNGrind

    Hello guys, I tried to install KubeSail via the instructions here: https://kubesail.com/blog/microk8s-raspberry-pi/
    It failed for some reason and I am not too sure why.
    I pasted the terminal output in a PrivateBin instance here: https://bin.idrix.fr/?8af6888534539c5d#98BWKBM7dyzZ7tH7CccR8mhSffPCfmY4gr7wywJDRyCC

    Any help would be appreciated. I am trying to test microk8s, KubeSail, and some other tools on my single-node development Raspberry Pi 4 (4GB RAM) before I try anything with my 5-node RPi 8GB cluster

    Seandon Mooy
    @erulabs
    Hi @RyzeNGrind - No worries! That's a fairly old link (the ".full." link has issues on some types of Clusters). Would you mind trying microk8s.kubectl delete namespace kubesail-agent; sleep 5; microk8s.kubectl apply -f https://byoc.kubesail.com/ryzengrind.yaml - that should get your cluster online.
    Ah, I see the "full" links are still on that blog post - I'll go ahead and edit that now to just the "standard" install.
    (just for your knowledge, the "full" links include more tools like "cert-manager" for example, but you can install those on the https://kubesail.com/clusters page as well)
    Unfortunately, installing everything at once (the "full" link) can cause some issues (race conditions, etc) :crying_cat_face: - installing those tools via the dashboard should work much more reliably. Hope that helps!
    RyzeNGrind
    @RyzeNGrind

    Ok, thanks for that - I appreciate the help. I also replied to a post earlier as a thread instead of a reply. If you get a chance to reply to this as well, I would appreciate the clarification regarding the subject. Thanks again @erulabs

    @RyzeNGrind, You mention "without relying on an external website" - can you explain more what you mean there? KubeSail will give you a domain name you can use to communicate with your clusters, but certainly you can point a domain at your cluster yourself as well. Do you mean a custom domain for apps hosted on your cluster? If so, you should be able to configure that on our site as well. Let me know if I'm misunderstanding - would love to help get you setup and working properly!

    Seandon Mooy
    @erulabs

    Ah sorry, I missed that message. Gitter threads are a bit hard to find sometimes.

    We provide Dynamic DNS already - at https://kubesail.com/domains you'll see a "dynamic DNS" domain, which points at your public IP address like any dynamic DNS service would. The other address is a "Gateway address", which "tunnels" traffic to you so you don't have to mess around with firewalls or port-forwarding. You're free to use either one (of course).

    As for securing your cluster, under the "Settings" page of your cluster at https://kubesail.com/clusters - you can add a Firewall rule which allows remote access. Kubernetes is pretty secure, but we do recommend adding a rule there to allow only your address.

    We're working pretty hard to solve these problems for you, so ideally you don't have to do much besides use the Gateway address (for example, the one that ends with usw1.k8g8.com) and set a firewall rule if you want one.

    Other than that, we for sure recommend keeping Kubernetes up-to-date, and making sure not to invite users you don't trust as more than what are called "namespaced" users (Admin users have access to -everything-!)

    Hope that answers your questions - we're working on more tools like monitoring and rate-limiting and such, which will help protect you even more. Certainly let us know if you have serious concerns about that - for the record, our Gateway/Agent system does not decrypt any of the web-traffic that goes thru it - HTTPS traffic is never decrypted by anyone other than your cluster (not even the open-source agent codebase that lives on your cluster will decrypt the traffic). We do take security very seriously and for sure think long and hard about these things when we design the tools :)
    (If you're curious about the technical details there, we purely route traffic thru our gateway and agent using "SNI" headers, so we never need to decrypt the web-requests in order to route them properly)
    Seandon Mooy
    @erulabs
    @RyzeNGrind We just improved the "Domains" page to be a bit more helpful re: explaining the dynamic dns versus proxied domain :)
    Seandon Mooy
    @erulabs
    Hello everyone! We've replaced our "dns.k8g8.com" domains with "ksdns.io" - this way if you see "k8g8.com" you'll know traffic will be tunneled via our Gateway/Agent system, and if you see "ksdns.io" you'll know you're using a Dynamic DNS address. The older "dns.k8g8.com" domains will continue to resolve, but please go ahead and migrate to using the new "ksdns.io" domains for your dynamic DNS needs! Thanks everyone!