    Christian
    @christian:rinjes.me
    [m]
    I haven't kept track of whether any automatic updates happened, but I've had the same setting and seen a fair share of force-update buttons
    LudoP
    @LudoPL
    Hello!
    I tried to set up a load balancer on a new managed k8s cluster (with only one compute node for now).
    I followed these instructions: https://docs.ovh.com/au/en/kubernetes/installing-nginx-ingress/#before-you-begin
    In the OVH portal manager, the load balancer is visible with "CREATED" status, but kubectl indicates that the external IP of the LoadBalancer has been "pending" since this morning.
    I have seen in other messages the advice to restart the API server through https://api.ovh.com/console/#/cloud/project/%7BserviceName%7D/kube/%7BkubeId%7D/restart~POST, but it hasn't solved the issue.
    Any idea what I could do?
    Christian
    @christian:rinjes.me
    [m]
    See if describe shows any attached errors, check events, delete and recreate while watching events, and have a little patience. Hopefully someone from OVH is around to check it out; otherwise use the regular support channels
    1 reply
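    The checks suggested above might look like this; the namespace and Service name follow the linked OVH nginx-ingress guide and are assumptions:

    # Inspect the Service for errors reported by the cloud controller
    kubectl describe svc ingress-nginx-controller -n ingress-nginx

    # Watch namespace events (sorted by time) while recreating the Service
    kubectl get events -n ingress-nginx --sort-by=.lastTimestamp -w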
    Pádraig Galvin
    @PadraigGalvin
    I had a similar issue on a new cluster, deleting and reinstalling nginx-ingress seemed to fix it.
    3 replies
    Grounz
    @Grounz
    Hello OVH, is it possible to make an IPsec tunnel available on the managed Kubernetes service?
    4 replies
    Iván Anticona
    @ryam4u
    Hello, I have deployed a cluster, but I cannot expose services through NodePort; I get a "time out" error. Is it possible to expose services through NodePort, or only through LoadBalancer?
    1 reply
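    For reference, a NodePort Service exposes a port in the 30000-32767 range on every node; a minimal sketch (the app name and ports below are hypothetical):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-nodeport   # hypothetical name
    spec:
      type: NodePort
      selector:
        app: my-app           # hypothetical label
      ports:
        - port: 80
          targetPort: 8080
          nodePort: 30080     # must be within 30000-32767

    Note that reaching a node's public IP on that port also depends on the provider's network and firewall rules, which may be the cause of such a timeout.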
    Maxime Hurtrel
    @crazyman_twitter

    TLDR : We are moving to https://discord.gg/m9Mwqd74q4

    A bit more than two years after opening this channel during the Managed Kubernetes service beta, we want to gather the OVHcloud user community on its official Discord.
    The server just passed 1000 members, and there you will be able to discuss not only Kubernetes but also exchange tips and discuss challenges with users and OVHcloud staff around other OVHcloud services, such as our Databases or our IaaS solutions.

    We invite you to join our #kubernetes channel via this direct link: https://discord.gg/m9Mwqd74q4
    As here, my OVHcloud colleagues and I will regularly visit the channel to discuss your challenges and update you on new features, but keep in mind this remains a community tool and is not part of our official support. We will close the OVH Gitter channel in the upcoming weeks to make sure all new discussions happen in the same place.

    kiorky
    @kiorky:matrix.org
    [m]
    @crazyman_twitter: hi, is there a way you could maintain the Matrix bridge? ( https://ems-docs.element.io/integrations/Discord-Bridge.html )
    1 reply
    Ben
    @bend
    Hello, I have a code that is NotReachable on my K8S cluster
    Taints: node.kubernetes.io/unreachable:NoSchedule
    any idea?
    1 reply
    node*
    kiorky
    @kiorky:matrix.org
    [m]
    Is there a way to ensure that a k8s load balancer keeps a reserved, static IP, like we can do with IP FO on bare-metal servers, or like other providers do with prior reservation?
    Did I miss the documentation topic, or is there something I don't understand?
    The idea is to have a stable IP, or preferably a stable pool of IPs that I can be sure will never change. The best would be for them to be attached somehow to my OpenStack project (like an IP FO), for DNS purposes; I don't control those DNS records, so they are not easy to update.
    1 reply
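    Upstream Kubernetes has a generic field for requesting a specific address on a LoadBalancer Service; whether the OVHcloud controller honors it is not confirmed here, so the sketch below is an assumption:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service              # hypothetical
    spec:
      type: LoadBalancer
      loadBalancerIP: 203.0.113.10  # pre-reserved IP; support depends on the cloud controller
      ports:
        - port: 80
          targetPort: 8080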
    Krystian Kruk
    @KrysKruk
    Hi. I set up k8s using Terraform. Recently the k8s version was automatically updated to 1.22.2-2, and Terraform recreated the whole cluster because the configuration went out of sync. Is it a known issue? How can I keep the k8s "always update" policy with Terraform?
    5 replies
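    A generic Terraform workaround for server-side drift is to ignore changes to the attribute the provider updates; the resource and attribute names below are assumptions about the OVH provider:

    resource "ovh_cloud_project_kube" "cluster" {
      # ... other arguments ...

      lifecycle {
        # Ignore the server-side auto-updated version so Terraform does not
        # detect drift and recreate the cluster (attribute name assumed)
        ignore_changes = [version]
      }
    }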
    Bernhard J. M. Grün
    @bernhardgruen_twitter
    Hello,
    I think I found a severe regression in Kubernetes 1.22.2. It seems the fsGroup handling in the SecurityContext is not working correctly; it still works correctly on 1.21.5. The problem is that the mounted /data volume has permissions 0755 and not 2775 as it should. This breaks every rootless container that uses a volume.
    Example code:
    apiVersion: v1
    kind: Pod
    metadata:
      name: task-pv-pod
    spec:
      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: "Always"
        runAsGroup: 1001
        runAsNonRoot: true
        runAsUser: 1001
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: test-pvc
      containers:
        - name: task-pv-container
          image: nginxinc/nginx-unprivileged:stable-alpine
          # image: nginx:stable-alpine
          securityContext:
            readOnlyRootFilesystem: false
          ports:
            - containerPort: 8080
              name: "http-server"
          volumeMounts:
            - mountPath: "/data"
              name: task-pv-storage
    3 replies
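    One way to confirm the reported behaviour on a given cluster version is to check the mounted path's mode and ownership from inside the pod defined above:

    # With fsGroup applied, /data should be group-owned by 1001 with the setgid bit (2775)
    kubectl exec task-pv-pod -- stat -c '%a %u %g' /data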
    Christian
    @christian:rinjes.me
    [m]
    Great, this requires giving Discord my phone number. You also cannot remove it after verification, because you will immediately get locked out again and then be unable to verify it, even on the same account...
    1 reply
    Thomas Pedot
    @thomas.pedot1_gitlab

    Hi, I'm trying to deploy this chart: https://github.com/linogics/helm-charts/tree/master/charts/n8n but it fails to create a mounted volume in the pods.

    As you can see, there are podSecurityContext and securityContext in values.yaml that I can override.

    The log inside the pod is:

    There was an error initializing DB: "EACCES: permission denied, mkdir '/data/.n8n'"

    Just in case, the Dockerfile of the image : https://hub.docker.com/r/n8nio/n8n

    2 replies
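    The EACCES error above is consistent with the volume not being writable by the container's user. One thing to try, assuming the chart passes podSecurityContext through to the pod spec and assuming the n8n image runs as UID/GID 1000 (both are assumptions):

    # values.yaml override (sketch)
    podSecurityContext:
      fsGroup: 1000        # assumed GID of the image's user
    securityContext:
      runAsUser: 1000      # assumed UID
      runAsGroup: 1000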
    Hospital_Project
    @melekbenammar

    Hi

    I installed the nginx ingress service following the OVH documentation (through Helm => https://docs.ovh.com/au/en/kubernetes/installing-nginx-ingress/).
    The service was installed fine, but when I try to apply IP filtering (nginx.ingress.kubernetes.io/whitelist-source-range),
    I can no longer access it; even after authorizing my IP I receive a 403.

    Looking at the ingress logs, I found that the IPs in the logs are private IPs.

    Digging in the forum (https://lifesaver.codes/answer/do-proxy-protocol-broken-header-3996), I found that it would be necessary to add the following proxy parameters:

    externalTrafficPolicy: Local
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    proxy-real-ip-cidr: "10.0.0.0/20"

    I added these parameters, but unfortunately I get the error broken header: while reading PROXY protocol, client: 10.244.41.0, server: 0.0.0.0:443

    Can you help me?

    Christian
    @christian:rinjes.me
    [m]
    You also need to enable proxy protocol on the load balancer if you want to use that, via the service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol annotation on the service
    1 reply
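    On the Service, that annotation could look like the sketch below; the accepted values are not confirmed here, so "v1" is an assumption:

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      annotations:
        service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v1"  # value assumed
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local
      ports:
        - port: 443
          targetPort: 443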
    Obviously the DigitalOcean annotation isn't gonna work with ovh
    Alexis Mathey
    @amathey_gitlab

    Hi all,
    I'm currently experiencing issues with cluster autoscaling. I've enabled autoscaling on my nodepool from the OVH web console.
    kubectl get nodepools shows that autoscaling is indeed enabled.
    Then, I scale one of my deployments up, resulting in a few pods in Pending state due to insufficient CPU, as expected. The issue is that no new nodes are added to the node pool, which did not raise its "desired" node count.

    Can someone help me figure out this issue?
    Thanks :)

    1 reply
    golivhub
    @golivhub

    Hello @OVH,

    We have just upgraded from 1.19 to 1.20 and we are facing a problem, firstly with mounted volumes:

    MountVolume.NewMounter initialization failed for volume "ovh-managed-kubernetes-5tpjxr-pvc-f5f434b7-3be2-45e3-9369-34ad3b9735a1" : kubernetes.io/csi: expected valid fsGroupPolicy, received nil value or empty string
    8 replies
    the problem affects all mounted volumes on the cluster
    golivhub
    @golivhub

    Hello @OVH,

    We have just upgraded from 1.19 to 1.20 and we are facing a problem, firstly with mounted volumes:

    MountVolume.NewMounter initialization failed for volume "ovh-managed-kubernetes-5tpjxr-pvc-f5f434b7-3be2-45e3-9369-34ad3b9735a1" : kubernetes.io/csi: expected valid fsGroupPolicy, received nil value or empty string

    Hello, any update on this problem? I have opened ticket 5283935 with the support team

    Thomas Pedot
    @thomas.pedot1_gitlab

    Hello, I have three different instances in a k8s cluster and I wonder if they can be accessed via <servicename>.<namespace>.svc.cluster.local?
    The ports are open.
    Logged into another pod, I tried to scan another service/pod at that URL with nmap -p 80 myservice.mynamespace.svc.cluster.local, without success, and with the real IP address it is not accessible either.

    Do I need to specify a network or something like this ?
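    A quick way to separate DNS problems from connectivity problems is a throwaway debug pod (the busybox image is just an example):

    # Resolve the service name from inside the cluster
    kubectl run dnstest --rm -it --image=busybox --restart=Never -- \
      nslookup myservice.mynamespace.svc.cluster.local

    # Then test TCP connectivity to the service port
    kubectl run nettest --rm -it --image=busybox --restart=Never -- \
      wget -qO- --timeout=5 http://myservice.mynamespace.svc.cluster.local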

    SebastianW
    @sebastianwagner:matrix.org
    [m]
    Morning folks, just a super quick question: I have a k8s Service of type LoadBalancer on 141.95.96.6. That works already. The cluster URL http://l68j19.c1.de1.k8s.ovh.net/ doesn't work for me. I think I'm missing the URL of the service instead of the cluster, right?
    I.e. there is still a gap in the docs: https://docs.ovh.com/gb/en/kubernetes/using-lb/#testing-your-service still doesn't give you any hint about how to get the "service URL"
    Nicolas Antoniazzi
    @nantoniazzi:matrix.org
    [m]

    Hello 👋 We are using services with many pods replicas, where the users are connected via websockets.

    The problem is that the current load balancing strategy is round robin. It's a problem because the oldest pod receives most of the traffic while the most recent one doesn't. I would prefer to use the "least connection" strategy.

    => I need to enable the IPVS mode on the kube-proxy for that but apparently, the nodes are not ready for it. Is it something planned? Is there a better approach?

    1 reply
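    For reference, on clusters where the kube-proxy configuration is editable (often not the case on managed offerings), the mode and scheduler are set in kube-proxy's configuration:

    # KubeProxyConfiguration fragment (sketch; may not be editable on a managed cluster)
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"
    ipvs:
      scheduler: "lc"   # least-connection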
    Nicolas Antoniazzi
    @nantoniazzi:matrix.org
    [m]
    No answer to this question from a member of the OVH team by chance :)?
    EnergieZ
    @EnergieZ

    Hi,
    I have a new ingress which does not grab any LB IP, so I can't access it.
    In the events, I can see this error:

    Error syncing load balancer: failed to ensure load balancer: error waiting for load balancer to be active: load balancer creation for "xxx-xxx-xxx" timed out

    Any idea how to fix this?

    I have already restarted the K8S API, but still have the same problem...

    2 replies
    Nicolas Antoniazzi
    @nantoniazzi:matrix.org
    [m]
    Oh, I wasn't aware of the discord server! Thanks!
    ROUINEB
    @rouineb_gitlab

    Hi,
    We have some serious issues on OVH managed clusters: we can't provision volumes; there seems to be a permissions issue

    AttachVolume.Attach failed for volume "ovh-managed-kubernetes-es9myr-pvc-24a8c410-c488-4380-a11c-827eab2d80fa" : rpc error: code = NotFound desc = [ControllerPublishVolume] Volume 646912cc-6a41-4a53-82b9-ceb037b543f3 not found

    To be more precise

    MountVolume.WaitForAttach failed for volume "ovh-managed-kubernetes-es9myr-pvc-24a8c410-c488-4380-a11c-827eab2d80fa" : volume 646912cc-6a41-4a53-82b9-ceb037b543f3 has GET error for volume attachment csi-a0086e2b5df62c20914961836b35f02b0bbc17911434ce0e5191b293c94249c3: volumeattachments.storage.k8s.io "csi-a0086e2b5df62c20914961836b35f02b0bbc17911434ce0e5191b293c94249c3" is forbidden: User "system:node:kube-sqoolsi-02-node-0dd741" cannot get resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope: no relationship found between node 'kube-sqoolsi-02-node-0dd741' and this object

    Any clue ?

    ROUINEB
    @rouineb_gitlab
    Saw it was related to csi-cinder-controller; we're receiving 403s the whole time
    https://gitter.im/ovh/kubernetes?at=624ecda6257a357825680fcf
    Nathanaël H
    @nathanael:isidorus.fr
    [m]
    Hello, I have two node pools: the first is paid monthly and the second hourly. Both have the autoscaler enabled. Is there an easy way to have pods scheduled on the monthly node pool first, and then, if there are not enough resources, on the second node pool which is billed hourly?
    1 reply
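    One generic approach is a preferred (soft) node affinity towards the monthly pool, so the scheduler falls back to the hourly pool when the first is full; the node label key and value below are assumptions about how the pools are labeled:

    # Pod spec fragment (sketch)
    affinity:
      nodeAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
                - key: nodepool            # assumed label key
                  operator: In
                  values: ["monthly-pool"] # assumed pool name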
    Nathanaël H
    @nathanael:isidorus.fr
    [m]
    Thanks will check this tomorrow
    schantier
    @schantier

    Hello,

    I would like to know if there is a way to keep the IP of a node if it is replaced.

    Mat
    @matmicro
    Is there a way to assign an IP Failover to an MKS LoadBalancer?
    Mat
    @matmicro
    I am running a website on Managed Kubernetes instances. I can feel a big difference in performance between querying my website during the night and during the day. Maybe because of less load on the network, or less CPU used on the VMs because of shared resources?
    Are you experiencing the same?
    2 replies
    Mat
    @matmicro
    I am using the OVH cert-manager-webhook. This works fine on MKS GRA and SBG, but it does not work from the SYD datacenter.
    Is it because my ClusterIssuer uses the "ovh-eu" endpoint?
    Arthur LUTZ
    @arthurzenika_gitlab
    Hi, we're having problems with the Octavia ingress in Kubernetes; we're getting errors that a name or id exceeds the maximum length of 60 (see gitlab-org/cluster-integration/auto-deploy-image#203 & https://storyboard.openstack.org/#!/story/2010006 for details). Any chance this is a configuration issue on the OVHcloud OpenStack/Neutron/Octavia side?
    1 reply
    mhurtrel
    @mhurtrel
    Hello @/all As shared a few weeks ago, OVH is closing its Gitter channels. We invite you to join OVHcloud's Discord to discuss with other Kubernetes users and the service teams: https://discord.gg/27yHfTpv9z
    Nathanaël H
    @nathanael:isidorus.fr
    [m]
    Oh guys, it is sad that you are leaving an open Gitter/Matrix chat for a closed, proprietary (and non-EU) service.
    Arthur LUTZ
    @arthurzenika_gitlab
    I agree with @nathanael:isidorus.fr this is a step back in my opinion. @mhurtrel do you have an official blog post talking about this ?
    Christian glacet
    @cglacet

    Hello! I have a pod asking for a persistent volume (spec.resources.requests.storage) of 200Mi, but OVH seems to ignore whatever value I ask for and always mounts a 1Gi volume (status.capacity.storage differs from spec.resources.requests.storage). It works well for volumes larger than 1Gi.

    Location and volume type are: Gravelines (GRA7) & high-speed

    Any idea if that's a not bug? Or did I miss some configuration that would allow volumes with size < 1Gi?
    *known bug
    Christian glacet
    @cglacet
    Ok, it seems like the volumes are "block storage" (which can't be smaller than 1Gi); I wonder why the internal cluster disk is not used. Anyway, if you have any clue about this that would be great (for example, how to use the cluster's disk instead of provisioning additional space).
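    If the data is small and does not need to survive pod rescheduling, the node's own disk can be used via an emptyDir volume instead of a PVC; a minimal sketch (names hypothetical):

    # Pod spec fragment: emptyDir lives on the node's local disk (not persistent)
    volumes:
      - name: small-scratch
        emptyDir:
          sizeLimit: 200Mi   # pod is evicted if the limit is exceeded
    containers:
      - name: app
        image: nginx:stable-alpine
        volumeMounts:
          - mountPath: /data
            name: small-scratch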
    mhurtrel
    @mhurtrel
    Hello @/all As shared a few weeks ago, OVH is closing its Gitter channels. We invite you to join OVHcloud's Discord to discuss with other Kubernetes users and the service teams: https://discord.gg/27yHfTpv9z
    Arthur LUTZ
    @arthurzenika_gitlab
    Hi @mhurtrel, do you know if it is possible to use Discord without providing a phone number? For privacy reasons I would like to be able to not provide this personal information just to interact with a company I work with...
    It seems that https://support.discord.com/hc/en-us/articles/6181726888215?input_string=is+it+possible+to+use+discord+without+providing+a+phone+number+%3F indicates a phone is required. This is a huge regression from my point of view compared to being able to interact here on Gitter using a GitLab or GitHub account, or via Matrix... Again: can we see an official statement by OVH that promotes this migration to a third-party service that has privacy issues?