    Nabil Ameziane
    @nameziane_gitlab
    Error: Invalid variable name
    
      on .terraform/modules/publiccloud-k8s_install-k8s/terraform-ovh-publiccloud-k8s-0.5.0/modules/install-k8s/variables.tf line 1, in variable "count":
       1: variable "count" {
    
    The variable name "count" is reserved due to its special meaning inside module
    blocks.
    Can anyone help me, or at least point me towards the beginning of a solution?
    Joël LE CORRE
    @jlecorre_gitlab
    Hello @nameziane_gitlab
    This Terraform module isn't compliant with our Managed Kubernetes Service.
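    The error itself comes from Terraform 0.12+ reserving `count` as a variable name. A possible workaround, sketched here with a hypothetical replacement name `node_count`, is to fork the module locally and rename the variable (updating every `var.count` reference inside the module to match):

```hcl
# Sketch of the rename in a local fork of the module's variables.tf.
# "node_count" is a hypothetical name; any reference to var.count
# elsewhere in the module must be changed to var.node_count as well.
variable "node_count" {
  description = "Number of k8s nodes to provision"
  type        = number
  default     = 3
}
```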
    7 replies
    Suleiman Ali
    @somaliz
    Hi @jlecorre_gitlab, we are having an issue with LoadBalancers in a managed k8s cluster: the load balancer gets provisioned, but it never works and doesn't appear in the UI under load balancers. We have also opened a support ticket, with no luck so far. Can you please help?
    
    Name:                     hello-world
    Namespace:                default
    Labels:                   app=hello-world
    Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                                {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hello-world"},"name":"hello-world","namespace":"default"...
    Selector:                 app=hello-world
    Type:                     LoadBalancer
    IP:                       10.3.199.57
    LoadBalancer Ingress:     6ddruj6qoo.lb.c1.gra7.k8s.ovh.net
    Port:                     http  80/TCP
    TargetPort:               80/TCP
    NodePort:                 http  30416/TCP
    Endpoints:                10.2.0.11:80
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:
      Type     Reason                        Age                    From                       Message
      ----     ------                        ----                   ----                       -------
      Normal   EnsuredLoadBalancer           18h                    service-controller         Ensured load balancer
      Normal   EnsuringLoadBalancer          18h                    service-controller         Ensuring load balancer
      Normal   EnsuringLoadBalancer          18h                    service-controller         Ensuring load balancer
      Normal   EnsuredLoadBalancer           18h                    service-controller         Ensured load balancer
      Normal   EnsuringLoadBalancer          17h                    service-controller         Ensuring load balancer
      Normal   EnsuredLoadBalancer           17h                    service-controller         Ensured load balancer
      Normal   EnsuringLoadBalancer          17h                    service-controller         Ensuring load balancer
      Normal   EnsuredLoadBalancer           17h                    service-controller         Ensured load balancer
      Normal   EnsuringLoadBalancer          16h                    service-controller         Ensuring load balancer
      Normal   EnsuredLoadBalancer           16h                    service-controller         Ensured load balancer
      Warning  FailedToUpdateEndpointSlices  15h (x7 over 15h)      endpoint-slice-controller  Error updating Endpoint Slices for Service default/hello-world: node "mvx-k8s-3" not found
      Normal   EnsuredLoadBalancer           15h                    service-controller         Ensured load balancer
      Normal   EnsuringLoadBalancer          15h                    service-controller         Ensuring load balancer
      Normal   EnsuringLoadBalancer          15h                    service-controller         Ensuring load balancer
      Normal   EnsuredLoadBalancer           15h                    service-controller         Ensured load balancer
      Normal   EnsuringLoadBalancer          13h                    service-controller         Ensuring load balancer
      Normal   EnsuredLoadBalancer           13h                    service-controller         Ensured load balancer
      Normal   EnsuringLoadBalancer          13h                    service-controller         Ensuring load balancer
      Normal   EnsuredLoadBalancer           13h                    service-controller         Ensured load balancer
      Normal   EnsuringLoadBalancer          12h                    service-controller         Ensuring load balancer
      Normal   EnsuredLoadBalancer           12h                    service-controller         Ensured load balancer
      Normal   EnsuredLoadBalancer           12h                    service-controller         Ensured load balancer
      Normal   EnsuringLoadBalancer          12h                    service-controller         Ensuring load balancer
      Normal   EnsuredLoadBalancer           12h                    service-controller         Ensured load balancer
      Normal   EnsuringLoadBalancer
    10 replies
    ericjeangirard
    @ericjeangirard
    Hello !
    I need some assistance from someone at OVH: on one of my clusters, I'm getting 400 errors: HTTPStatus.BAD_REQUEST
    Simon Guyennet
    @sguyennet
    @ericjeangirard Hi, could you send me your cluster ID in private?
    Robert Keck
    @rkeck

    Hello, I think we have some problems with cluster networking. I have some services in a namespace that stopped working/became unreachable some time last night. I tried restarting them and even redeployed some (endpoints are available), but they don't work. Even if I port-forward to the pod or the service, I get no response.

    When I run the container locally, it works. I also have identical setups on other clusters, where everything works too.

    Is it possible that this is an issue with k8s networking?

    Simon Guyennet
    @sguyennet
    @rkeck Hi, could you send your cluster ID in private please?
    Dave
    @dave-b_gitlab
    Hi, I'm currently unable to add nodes to my cluster or view my quota in public cloud (same error for both). I'm happy to send the error code in private. Thanks in advance.
    3 replies
    Pierrick Gicquelais
    @Kafei59
    Hi @dave-b_gitlab, @nameziane_gitlab, can you send me the kind of errors you get, and your cluster ID in private please :)
    Pierrick Gicquelais
    @Kafei59
    @/all, we are facing an issue with our public cloud quota system and we are investigating. We will tell you when everything gets back to normal. You can follow the issue here: http://travaux.ovh.net/?do=details&id=45531&
    Sorry for the inconvenience
    Dave
    @dave-b_gitlab
    @Kafei59 I'm happy to report that I've just added a node to my cluster and it seems to be working again, I've sent you my cluster id in private and further details in case it helps with debugging :)
    Pierrick Gicquelais
    @Kafei59
    @/all, everything should be back to normal with quota
    Maxime Hurtrel
    @crazyman_twitter
    Hi everyone! A small update to tell you that our colleague Horacio updated the Velero tutorial and published a new Stash tutorial with details on how to back up your Kubernetes cluster, including persistent storage with Stash: https://docs.ovh.com/gb/en/kubernetes/backing-up-volumes-using-stash/ I remember some of you were asking for it.
    2 replies
    Hugo Denizart
    @ThePooN

    Hi,
    I am currently running into an issue with a GitLab deployment on my Kubernetes cluster:

    Error from server (Forbidden): namespaces "avg-19538409-review-home-k8s-a-q87n4c" is forbidden: User "system:serviceaccount:avg-19538409-review-home-k8s-a-q87n4c:avg-19538409-review-home-k8s-a-q87n4c-service-account" cannot get resource "namespaces" in API group "" in the namespace "avg-19538409-review-home-k8s-a-q87n4c"
    Error from server (Forbidden): namespaces is forbidden: User "system:serviceaccount:avg-19538409-review-home-k8s-a-q87n4c:avg-19538409-review-home-k8s-a-q87n4c-service-account" cannot create resource "namespaces" in API group "" at the cluster scope

    This is fairly specific and hard to debug, but I managed to reproduce it by trying to deploy the dashboard, following the OVH guide: https://docs.ovh.com/ca/en/kubernetes/installing-kubernetes-dashboard/

    $ kubectl logs -n kubernetes-dashboard kubernetes-dashboard-64999dbccd-lp9nv                                    
    2020/07/08 08:53:17 Starting overwatch
    2020/07/08 08:53:17 Using namespace: kubernetes-dashboard
    2020/07/08 08:53:17 Using in-cluster config to connect to apiserver
    2020/07/08 08:53:17 Using secret token for csrf signing
    2020/07/08 08:53:17 Initializing csrf token from kubernetes-dashboard-csrf secret
    panic: secrets "kubernetes-dashboard-csrf" is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot get resource "secrets" in API group "" in the namespace "kubernetes-dashboard"
    
    goroutine 1 [running]:
    github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc000502b60)
            /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b0
    github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
            /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
    github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0004c0080)
            /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:499 +0xc6
    github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc0004c0080)
            /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:467 +0x47
    github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
            /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:548
    main.main()
            /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x20d

    Cluster ID: a4c87169-9912-4405-9f7d-6ce60c5b69e2
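    For comparison, the dashboard's own recommended manifests ship a Role/RoleBinding granting its service account access to that csrf secret. A sketch along those lines (names taken from the error message above; check them against the actual dashboard release you deployed):

```yaml
# Sketch, not the exact upstream manifest: grant the dashboard's
# service account access to its csrf secret in its own namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-csrf", "kubernetes-dashboard-key-holder"]
    verbs: ["get", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
```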

    18 replies
    Pascal Maria
    @p.maria_gitlab
    Hi,
    OVH premium support asks me to post my problem on this forum because the response time is too long.
    I am trying to install:
    How to monitor your Kubernetes Cluster with OVH Observability :
    https://www.ovh.com/blog/how-to-monitor-your-kubernetes-cluster-with-ovh-metrics/
    but I have this error :
    NAME                   READY   STATUS             RESTARTS   AGE
    metrics-daemon-l8ts9   1/2     CrashLoopBackOff   517        43h
    metrics-daemon-sdpfs   1/2     CrashLoopBackOff   516        43h
    They asked me to send them the result of these commands :
    kubectl describe pod metrics-daemon-l8ts9
    kubectl describe pod metrics-daemon-sdpfs
    but I have been waiting for several days for their answers.
    Pierre PÉRONNET
    @holyhope
    Hello @p.maria_gitlab and welcome
    Can you DM me the ID of your cluster, please?
    The doc may be outdated, we will check that
    Hugo Denizart
    @ThePooN
    Is anyone available to try diagnosing my issue? Since it happens even when deploying the dashboard by following the guide, I wouldn't be surprised if there's an issue on OVH's side.
    Nicolas Bonnel
    @nicolas-bonnel
    Hi, we have been experiencing occasional issues with DNS resolution (getaddrinfo syscall) for the last 40 hours, and we have no clue about the cause. Our nodes run Kubernetes v1.17.5; we are thinking about upgrading them to 1.18 to set up CoreDNS as a DaemonSet, but I'm not sure we have control over the CoreDNS deployment.
    If someone has information or links to help debug and solve such problems, it would be much appreciated
    Nicolas Bonnel
    @nicolas-bonnel
    if we upgrade to 1.18, can we use nodelocaldns ? (https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/)
    Simon Guyennet
    @sguyennet
    @nicolas-bonnel Hi, you can already use node local DNS on 1.17. You need to have a cluster >= 1.15.
    Nicolas Bonnel
    @nicolas-bonnel
    @sguyennet : thanks for the reply
    Adrien Ferrand
    @adferrand
    Hello !
    I tried an upgrade of my cluster yesterday, and this morning most of the services are failing to start
    In fact, every pod that has a PVC fails to start, with errors like:
    ```
    Unable to attach or mount volumes: unmounted volumes=[redis-data], unattached volumes=[redis-tmp-conf default-token-c7fcb health redis-password redis-data config]: timed out waiting for the condition
    ```
    or
    ```
    MountVolume.MountDevice failed for volume "ovh-managed-kubernetes-k27e6b-pvc-fec40133-653b-4aca-b01a-5813a4e8791d" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
    ```
    Could you help me?
    Nicolas Bonnel
    @nicolas-bonnel
    Hi, I've got a problem on a cluster with DNS lookups:
    I get i/o timeout in the node-local-dns pod logs
    Nicolas Bonnel
    @nicolas-bonnel
    OK, sorry, the problem was a mistake on our side: a wrong config in a manifest
    Vincent Garny
    @nygar120
    Hi, I have a quick technical question: for LoadBalancer objects on OVH managed Kubernetes, is it possible to keep an external IP once it has been assigned?
    For example, if I want to migrate my application from one managed Kubernetes cluster to another, can we keep the external IP assigned on the source cluster, or do we have no control over that, so it will be a new IP every time?
    Thank you for your time
    Nicolas Steinmetz
    @nsteinmetz
    @nygar120 I don't think so but to be confirmed by ovh team
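    Upstream Kubernetes does expose a `spec.loadBalancerIP` field for requesting a specific address; whether OVH's cloud controller honors it, or lets an IP move between clusters, is not confirmed in this thread. A sketch (the Service name and address are hypothetical):

```yaml
# Sketch only: request a specific external IP for a LoadBalancer
# Service via the legacy spec.loadBalancerIP field. Support depends
# entirely on the cloud provider; not confirmed for OVH here.
apiVersion: v1
kind: Service
metadata:
  name: my-app                   # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # hypothetical, previously assigned IP
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```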
    dleurs
    @dleurs
    Hello! Do you have an idea of when node autoscaling will be available? (Or does OVH not want to implement it?)
    Mazhejiayu
    @mzhejiayu

    Hello!

    I tried to expand a PVC but got a strange error.

    Events:
      Type     Reason              Age                    From                                       Message
      ----     ------              ----                   ----                                       -------
      Normal   ExternalExpanding   8h                     volume_expand                              CSI migration enabled for kubernetes.io/cinder; waiting for external resizer to expand the pvc
      Warning  VolumeResizeFailed  4h29m (x481 over 30h)  external-resizer cinder.csi.openstack.org  resize volume pvc-f3b67adf-841d-4e89-84b2-713353b2133b failed: rpc error: code = Internal desc = Could not resize volume "24241adf-b96b-4901-9116-f8c961186b67" to size 2048: Expected HTTP response code [202] when accessing [POST https://volume.compute.gra5.cloud.ovh.net/v3/0e381419a10d405f8b90b6477a64caea/volumes/24241adf-b96b-4901-9116-f8c961186b67/action], but got 406 instead
    {"computeFault": {"message": "Version 3.42 is not supported by the API. Minimum is 3.0 and maximum is 3.15.", "code": 406}}
      Normal  Resizing  3m39s (x561 over 30h)  external-resizer cinder.csi.openstack.org  External resizer is resizing volume pvc-f3b67adf-841d-4e89-84b2-713353b2133b
    Could someone help? Thanks in advance.
    The Kubernetes version is v1.15.11.
    Joël LE CORRE
    @jlecorre_gitlab
    Hello @mzhejiayu
    Could you refer to this documentation please? https://docs.ovh.com/gb/en/kubernetes/resizing-persistent-volumes/
    I think you forgot to scale down your deployment before resizing the PV, so the volume is still in use.
    This "obscure" error often appears in that case.
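    The procedure in that documentation amounts to scaling the workload that uses the claim down to zero so the volume detaches, then raising the requested size on the PVC; roughly (the claim name here is hypothetical):

```yaml
# Sketch of the resized claim. First scale the Deployment/StatefulSet
# that mounts it to 0 replicas so the volume is released, then apply
# the larger storage request and scale back up.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data        # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi     # increased from the original size
```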
    Philippe Vienne
    @PhilippeVienne_gitlab
    Hello, we have an issue on cluster node creation with error "409 Not enough RAM quotas"
    Anyone on OVH side can help us ?
    Thomas Garcia
    @tomgarcia__twitter
    Hi @PhilippeVienne_gitlab Could you give me your cluster id in private please ?
    Jeremy TRUFIER
    @Tronix117
    We have some DNS resolution errors on a specific cluster. We have had an open issue for a month (started by @hessman), and it was supposed to have been resolved by scaling the master node (managed by OVH). We have had a critical system in production for a week (health related); we already pushed back this release because of these issues, which were supposed to be fixed, and we definitely cannot afford for it to go rogue randomly from time to time. Please give us some way to fix this once and for all; it cannot keep happening. Our clients are starting to be really unhappy and our credibility is starting to hang in the balance, especially when the failures occur while resolving our mail provider or payment provider. It's a really critical issue.
    Thomas Garcia
    @tomgarcia__twitter
    Hi @Tronix117, could you please give me your cluster ID in private ?
    Jérémie MONSINJON
    @jMonsinjon
    Hi @Tronix117
    I'm not sure that threats are appropriate on this channel; it's a "community".
    Can we start with introductions and information before we tackle it head-on?
    I'm sure we will find a solution, but for now I really don't know what you're talking about.
    1 reply
    Mazhejiayu
    @mzhejiayu
    @jlecorre_gitlab thanks for your help, it worked.
    1 reply