We still have constant and unnecessary resize events in our cluster:
15588 Resizing: External resizer is resizing volume pvc-87468728-9df0-43aa-b5c4-aa6a0e2264e5
Events emitted by the external-resizer cinder.csi.openstack.org, last seen at 2021-01-15 14:56:31 +0000 UTC, recurring since 2020-12-10 11:09:24 +0000 UTC.
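For anyone hitting the same loop, a quick way to check whether a resize is genuinely pending is to compare each PVC's requested size with its reported capacity; if they already match, the resizer events are pure noise. A minimal sketch (plain kubectl, nothing OVH-specific):

```sh
# List every PVC with its requested vs. actual size; a mismatch means the
# external-resizer really does have work to do, a match means the events
# are spurious.
kubectl get pvc --all-namespaces -o custom-columns=\
NAMESPACE:.metadata.namespace,NAME:.metadata.name,\
REQUESTED:.spec.resources.requests.storage,ACTUAL:.status.capacity.storage
```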
Hello @all !
After a couple of months of private beta, the vRack is now available to everyone (in Public Beta).
More details and exhaustive documentation are available here: https://github.com/ovh/public-cloud-roadmap/issues/15#issuecomment-761001758
The feature will graduate to GA in late February / early March, as soon as the control panel integration and LBaaS public-IP-to-private-backend support are done (both are being developed as we speak).
Error updating Endpoint Slices for Service ingress-controllers/restricted-traefik: failed to update restricted-traefik-dsfcz EndpointSlice for Service ingress-controllers/restricted-traefik: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "restricted-traefik-dsfcz": the object has been modified; please apply your changes to the latest version and try again
Hi, I'm pretty new to this Kubernetes stuff. I followed the dashboard setup but I run into:
secrets is forbidden: User "system:serviceaccount:kubernetes-dashboard:admin-user" cannot list resource "secrets" in API group "" in the namespace "default"
cluster version 1.17
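That error usually means the admin-user ServiceAccount was created but never bound to a role. For reference, the binding from the upstream dashboard docs looks like this (a sketch assuming you created admin-user in the kubernetes-dashboard namespace, as your error message suggests; cluster-admin is broad, so scope it down if you can):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

Apply it with kubectl apply -f and retry the dashboard login.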
Events:
  Type    Reason                   Age                    From     Message
  ----    ------                   ---                    ----     -------
  Normal  NodeHasNoDiskPressure    16m (x3 over 7d18h)    kubelet  Node node-ce60a59e-cc49-478d-8824-b32435e40ca1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     16m (x3 over 7d18h)    kubelet  Node node-ce60a59e-cc49-478d-8824-b32435e40ca1 status is now: NodeHasSufficientPID
  Normal  NodeNotReady             16m (x2 over 17m)      kubelet  Node node-ce60a59e-cc49-478d-8824-b32435e40ca1 status is now: NodeNotReady
  Normal  NodeHasSufficientMemory  16m (x3 over 7d18h)    kubelet  Node node-ce60a59e-cc49-478d-8824-b32435e40ca1 status is now: NodeHasSufficientMemory
  Normal  NodeNotReady             10m                    kubelet  Node node-ce60a59e-cc49-478d-8824-b32435e40ca1 status is now: NodeNotReady
  Normal  Starting                 10m                    kubelet  Starting kubelet.
  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet  Node node-ce60a59e-cc49-478d-8824-b32435e40ca1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet  Node node-ce60a59e-cc49-478d-8824-b32435e40ca1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet  Node node-ce60a59e-cc49-478d-8824-b32435e40ca1 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  10m                    kubelet  Updated limits on kube reserved cgroup /system.slice
  Normal  NodeAllocatableEnforced  10m                    kubelet  Updated Node Allocatable limit across pods
  Normal  NodeReady                10m                    kubelet  Node node-ce60a59e-cc49-478d-8824-b32435e40ca1 status is now: NodeReady
  Normal  NodeHasSufficientMemory  3m40s (x8 over 3m53s)  kubelet  Node node-ce60a59e-cc49-478d-8824-b32435e40ca1 status is now: NodeHasSufficientMemory
Isn't there anyone from OVH who can help here? It can't be that difficult.
Here is what I was using for the Helm ingress config (based on a gist published by someone from OVH, if I remember correctly):
controller:
  service:
    externalTrafficPolicy: "Local"   # preserve the client source IP at the node level
    annotations:
      # ask the OVH load balancer to speak PROXY protocol v1 to the backends
      service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v1"
  config:
    use-proxy-protocol: "true"            # nginx reads the PROXY header to recover the real client IP
    proxy-real-ip-cidr: "10.108.0.0/14"   # only trust PROXY headers coming from this (LB) range
    use-forwarded-headers: "false"
    http-snippet: |
      # flag connections that arrived through the load balancer's range
      geo $realip_remote_addr $is_lb {
        default 0;
        10.108.0.0/14 1;
      }
    server-snippet: |
      # reject anything that bypassed the load balancer
      if ($is_lb != 1) {
        return 403;
      }
How do I need to change this for the production LB?
Please make an effort and answer this question.
Thanks in advance
@Chaya56 Unfortunately, what you are saying is not clear to me at all.
What should I change? proxy-real-ip-cidr? http-snippet? With what values?
If those values are specific to each LB, how do I get that information concretely?
I have to whitelist the internal and external IPs of your load balancer. Where? How?
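For the record, the external side is at least discoverable from the cluster itself. A sketch with hypothetical namespace and service names (substitute your own ingress controller's Service; whether OVH exposes the LB's internal range anywhere queryable, I don't know):

```sh
# The LB's public hostname appears on the ingress controller's Service once
# the load balancer is provisioned.
LB_HOST=$(kubectl -n <namespace> get svc <ingress-service> \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Resolve it to the current public IP(s) for whitelisting.
dig +short "$LB_HOST"
```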
Is there nobody from OVH on this channel anymore?
kubelet wanted to free 10369671987 bytes, but freed 0 bytes space with errors in image deletion: [rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "registry.kubernatine.ovh/public/csi-node-driver-registrar:v1.1.0" (must force) - container 0665581f7a59 is using its referenced image a93898755322, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "registry.kubernatine.ovh/public/csi-cinder-plugin:v0.1.0" (must force) - container f2975c65cb7b is using its referenced image 25beabf0a35a]
on 4 of my production nodes
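For context, the kubelet's image GC refuses to delete an image that still backs a container, which is exactly what the error says. A quick check on an affected node (a sketch assuming Docker is the runtime, which the "Error response from daemon" wording suggests):

```sh
# Show any container, running or exited, still referencing the image the
# kubelet could not delete.
docker ps -a --filter "ancestor=registry.kubernatine.ovh/public/csi-cinder-plugin:v0.1.0"

# Exited containers can be pruned, after which image GC can reclaim their
# images. Here the holders are live CSI pods, though, so these two images
# are legitimately pinned and the kubelet must free space elsewhere.
docker container prune
```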
Hi guys, do you know where I can get the official communication about the old LB being retired in March? I guess our hosting team didn't notice the email.
What I read on Gitter is that we will live with 2 LBs for a transition period, then after some time the previous LB will be deleted, so if we don't react and update DNS to the new IP we will experience downtime. Did I understand that correctly?
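Not official OVH guidance, just the mechanical part of such a cutover as I understand it: lowering the DNS TTL ahead of the switch shrinks the window where clients still resolve the old LB, and dig lets you verify both sides (hypothetical record name; the LB hostname is the redacted one seen in this channel):

```sh
# Check what the record currently points at and its TTL; lower the TTL well
# before the switch so clients stop caching the old LB IP quickly.
dig +noall +answer app.example.com

# Once the new LB exists, confirm what it resolves to before flipping DNS.
dig +short ip-51-xxx-xx-xx.bhs.lb.ovh.net
```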
Could someone from OVH clarify the following:
My questions:
Thanks in advance
ip-51-xxx-xx-xx.bhs.lb.ovh.net