describe
shows any errors attached; check the events, then delete and recreate while watching the events, and have a little patience. Hopefully someone here from OVH is around to check it out, otherwise use the regular support channels.
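The describe-and-watch-events loop suggested above can be sketched like this (pod, file, and namespace names are placeholders):

```shell
# Show the pod's status and the events attached to it
kubectl describe pod my-pod -n my-namespace

# Watch namespace events in creation order while recreating the resource
kubectl get events -n my-namespace --sort-by=.metadata.creationTimestamp -w

# Delete and recreate, then keep watching the events above
kubectl delete pod my-pod -n my-namespace
kubectl apply -f my-pod.yaml
```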
TL;DR: We are moving to https://discord.gg/m9Mwqd74q4
A bit more than 2 years after opening this channel during Managed Kubernetes services beta, we want to gather the OVHcloud user community on its official Discord.
The server has just passed 1000 members, and there you will be able to discuss not only Kubernetes but also exchange tips and discuss challenges with other users and OVHcloud staff around other OVHcloud services such as our Databases or our IaaS solutions.
We invite you to join our #kubernetes channel via this direct link : https://discord.gg/m9Mwqd74q4
As here, my OVHcloud colleagues and I will regularly visit the channel to discuss your challenges and update you on new features, but keep in mind this remains a community tool and is not part of our official support. We will close the OVH Gitter channel in the upcoming weeks to make sure all new discussions happen in the same place.
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  securityContext:
    fsGroup: 1001
    fsGroupChangePolicy: "Always"
    runAsGroup: 1001
    runAsNonRoot: true
    runAsUser: 1001
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: test-pvc
  containers:
    - name: task-pv-container
      image: nginxinc/nginx-unprivileged:stable-alpine
      # image: nginx:stable-alpine
      securityContext:
        readOnlyRootFilesystem: false
      ports:
        - containerPort: 8080
          name: "http-server"
      volumeMounts:
        - mountPath: "/data"
          name: task-pv-storage
Hi, I'm trying to deploy this chart: https://github.com/linogics/helm-charts/tree/master/charts/n8n but it fails to create a mounted volume in the pods.
As you can see, there are podSecurityContext and securityContext entries in values.yaml that I can override.
The log inside the pod is:
There was an error initializing DB: "EACCES: permission denied, mkdir '/data/.n8n'"
Just in case, the Dockerfile of the image: https://hub.docker.com/r/n8nio/n8n
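One thing worth trying, assuming the chart really exposes podSecurityContext and securityContext as the message says: the public n8n image runs as the non-root "node" user (uid 1000), so an EACCES on mkdir under the mount usually means the volume isn't writable by that user. A values override like the sketch below (uid/gid are an assumption based on the public image, not taken from the chart) makes the mounted volume group-writable:

```yaml
# values.yaml override (sketch; key names assumed from the chart's values.yaml)
podSecurityContext:
  fsGroup: 1000        # mounted volumes get group-owned by gid 1000
securityContext:
  runAsUser: 1000      # the n8n image's "node" user (assumed uid)
  runAsGroup: 1000
  runAsNonRoot: true
```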
Hi
I installed the nginx ingress service following the OVH documentation (through Helm => https://docs.ovh.com/au/en/kubernetes/installing-nginx-ingress/).
The service was installed fine, but when I add IP filtering (nginx.ingress.kubernetes.io/whitelist-source-range)
I can no longer access it; even after authorizing my IP I receive a 403.
Looking at the ingress logs, I found that the IPs in the logs are private IPs.
By digging in the forum (https://lifesaver.codes/answer/do-proxy-protocol-broken-header-3996) I found that it would be necessary to add the following proxy parameters:
externalTrafficPolicy: Local
use-proxy-protocol: "true"
real-ip-header: "proxy_protocol"
proxy-real-ip-cidr: "10.0.0.0/20"
I added these parameters, but unfortunately I get the error: broken header: while reading PROXY protocol, client: 10.244.41.0, server: 0.0.0.0:443
Can you help me?
Try the service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol annotation on the service.
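The broken-header error is typical when nginx expects PROXY protocol but the load balancer isn't sending it, so both sides have to agree. A sketch of the annotated Service (the "v1" value is an assumption; check which PROXY protocol versions the OVH load balancer supports before applying):

```yaml
# Service for the ingress controller (sketch; names and "v1" are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Tell the OVH LB to send PROXY protocol headers to the backends
    service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v1"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
```

With this in place, the use-proxy-protocol / real-ip-header settings listed above on the nginx side should receive a valid header instead of raw TLS bytes.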
Hi all,
I'm currently experiencing issues with cluster autoscaling. I've enabled autoscaling on my node pool from the OVH web console, and kubectl get nodepools
shows that autoscaling is indeed enabled.
Then I scale one of my deployments up, resulting in a few pods in Pending
state due to insufficient CPU, as expected. The issue is that no new nodes are added to the node pool, which did not raise its number of "desired" nodes.
Can someone help me figure out this issue?
Thanks :)
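A quick sanity check, since the autoscaler only adds nodes while the pool is below its maximum: inspect the pool's bounds and the pending pods' events (pool and pod names are placeholders, and the exact nodepool field names are an assumption — confirm them in the -o yaml output):

```shell
# Inspect the node pool's autoscaling flags and min/max bounds
kubectl get nodepool my-pool -o yaml

# The pending pods' events usually explain why no scale-up was triggered
kubectl describe pod my-pending-pod | grep -A 10 Events
```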
Hello @OVH,
We have just upgraded from 1.19 to 1.20 and we are facing problems, first with volume mounts:
MountVolume.NewMounter initialization failed for volume "ovh-managed-kubernetes-5tpjxr-pvc-f5f434b7-3be2-45e3-9369-34ad3b9735a1" : kubernetes.io/csi: expected valid fsGroupPolicy, received nil value or empty string
Hello, any update on this problem? I have opened ticket 5283935 with the support team.
Hello, I have three different instances in a k8s cluster and I wonder if they can be accessed via <servicename>.<namespace>.svc.cluster.local ?
The ports are open.
Logged into another pod, I tried to scan another service/pod at that URL with nmap -p 80 myservice.mynamespace.svc.cluster.local
without success; even with the real IP address it cannot be reached.
Do I need to specify a network or something like that?
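A first step worth trying, to separate DNS resolution from connectivity (service and namespace names are placeholders). Note that scanning a ClusterIP can be misleading: kube-proxy only forwards the declared service ports, so a port scan or ping against the service IP may show nothing even when the service works:

```shell
# Spin up a throwaway pod and test service DNS from inside the cluster
kubectl run -it --rm dns-debug --image=busybox:1.36 --restart=Never -- \
  nslookup myservice.mynamespace.svc.cluster.local

# Then test the actual port with an HTTP request rather than a scan
kubectl run -it --rm port-debug --image=busybox:1.36 --restart=Never -- \
  wget -qO- --timeout=3 http://myservice.mynamespace.svc.cluster.local:80
```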
Hello 👋 We are using services with many pod replicas, where the users are connected via websockets.
The problem is that the current load-balancing strategy is round robin. It's a problem because the oldest pod receives most of the traffic while the most recent one doesn't. I would prefer to use the "least connection" strategy.
=> I need to enable the IPVS mode on kube-proxy for that, but apparently the nodes are not ready for it. Is it something planned? Is there a better approach?
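Not a least-connection substitute, but if the underlying goal is to stop long-lived websocket clients from re-piling onto the same pods, one standard Kubernetes option to evaluate (no IPVS needed) is client-IP session affinity on the Service. Names below are placeholders:

```yaml
# Sketch: sticky client-IP routing on a plain ClusterIP Service
apiVersion: v1
kind: Service
metadata:
  name: websocket-svc       # placeholder name
spec:
  selector:
    app: websocket-app      # placeholder selector
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800 # stickiness window (the default)
  ports:
    - port: 80
      targetPort: 8080
```

Whether this helps depends on the traffic pattern; it pins each client IP to one pod rather than balancing by connection count.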
Hi,
I have a new ingress which does not grab any LB IP, so I can't access it.
In the events, I can see this error:
Error syncing load balancer: failed to ensure load balancer: error waiting for load balancer to be active: load balancer creation for "xxx-xxx-xxx" timed out
Any idea how to fix this ?
I have already restarted the K8s API, still the same problem...
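To see where the load-balancer creation is stuck, the events on the Service of type LoadBalancer (not the Ingress itself) are usually the most informative; service and namespace names below are placeholders:

```shell
# The LB is provisioned for the ingress controller's LoadBalancer Service
kubectl describe svc ingress-nginx-controller -n ingress-nginx

# Recent events in the namespace, oldest first
kubectl get events -n ingress-nginx --sort-by=.metadata.creationTimestamp
```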
Hi,
We have some serious issues on OVH managed clusters: we can't provision volumes, there seems to be a permissions issue.
AttachVolume.Attach failed for volume "ovh-managed-kubernetes-es9myr-pvc-24a8c410-c488-4380-a11c-827eab2d80fa" : rpc error: code = NotFound desc = [ControllerPublishVolume] Volume 646912cc-6a41-4a53-82b9-ceb037b543f3 not found
To be more precise
MountVolume.WaitForAttach failed for volume "ovh-managed-kubernetes-es9myr-pvc-24a8c410-c488-4380-a11c-827eab2d80fa" : volume 646912cc-6a41-4a53-82b9-ceb037b543f3 has GET error for volume attachment csi-a0086e2b5df62c20914961836b35f02b0bbc17911434ce0e5191b293c94249c3: volumeattachments.storage.k8s.io "csi-a0086e2b5df62c20914961836b35f02b0bbc17911434ce0e5191b293c94249c3" is forbidden: User "system:node:kube-sqoolsi-02-node-0dd741" cannot get resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope: no relationship found between node 'kube-sqoolsi-02-node-0dd741' and this object
Any clue ?
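The "no relationship found between node ... and this object" part of the error points at the VolumeAttachment objects, so a first check is to see which node each attachment is actually bound to:

```shell
# List volume attachments: one row per volume/node binding
kubectl get volumeattachments

# Inspect the attachment named in the error message
kubectl describe volumeattachment \
  csi-a0086e2b5df62c20914961836b35f02b0bbc17911434ce0e5191b293c94249c3
```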
exceeds maximum length of 60, see gitlab-org/cluster-integration/auto-deploy-image#203 & https://storyboard.openstack.org/#!/story/2010006 for details. Any chance this is a configuration on the OVHcloud OpenStack/Neutron/Octavia side?
Hello! I have a pod asking for a persistent volume (spec.resources.requests.storage) of 200Mi, but OVH seems to ignore whatever value I ask for and always mounts a 1Gi volume (status.capacity.storage differs from spec.resources.requests.storage). It works well for volumes larger than 1Gi.
Location and volume type are : Gravelines (GRA7) & high-speed
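For reference, a minimal PVC that reproduces this; the storage class name below is an assumption based on the "high-speed" type mentioned above, so substitute whatever kubectl get storageclass shows. Block-storage provisioners commonly round requests up to a 1Gi minimum, which would explain the observed floor:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: small-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-cinder-high-speed  # assumed name for the "high-speed" class
  resources:
    requests:
      storage: 200Mi   # provisioner appears to round this up to 1Gi
```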