    Pierre PÉRONNET
    @holyhope
    Smallest are currently B2-7 instances
    Sascha Ormanns
    @s-ormanns
    Hello @holyhope, so a sandbox instance is only available as a virtual machine, but not as a node in Kubernetes?
    Pierre PÉRONNET
    @holyhope

    Yes, that's it. But just so you know:

    We will also offer smaller and cheaper nodes this year, but with no overcommit and with an SLA. Stay tuned :)

    Sascha Ormanns
    @s-ormanns
    Alrighty, thanks @holyhope
    testpresta2
    @testpresta2
    Hello, I have a problem with Object Storage. I created a user with full rights in "Project Management" > "Users and Roles" in the "Public Cloud" section of the OVH manager. I downloaded and ran the openrc.sh script. When I work with Swift, I get this error: Authorization Failure. Authorization Failed: The resource could not be found. (HTTP 404). OVH support does not answer my tickets... Thanks for your help
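    (A minimal sketch of how to sanity-check the credentials loaded by openrc.sh before blaming Swift itself, assuming the standard OpenStack and Swift CLIs are installed; nothing here is OVH-specific.)
```bash
# Load the credentials downloaded from the OVH manager
source ./openrc.sh

# Check that the auth-related variables were actually exported
env | grep '^OS_'

# Ask Keystone for a token; a failure here points at credentials or tenant, not at Swift
openstack token issue

# If the token is issued, query the Swift account
swift stat
```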
    Pierrick Gicquelais
    @Kafei59
    Hello @testpresta2, we can't help you with that; we are not able to handle issues on Public Cloud products except Managed Kubernetes. I recommend you contact the support again about your ticket. I hope you get help soon, though.
    testpresta2
    @testpresta2
    There is something I do not understand about Managed Kubernetes. Maybe you can explain it to me: Public Cloud shows Kubernetes items in the left menu, but there is a Horizon entry too, and I do not understand the purpose of this OpenStack tool in that menu.
    Guillaume
    @dracorpg
    Managed Kubernetes is an additional service running on top of a Public Cloud project; you still have access to the underlying Public Cloud tooling (but you're not given SSH keys to the managed k8s cluster nodes and are kindly asked not to interfere too much with k8s-created resources at the OpenStack level).
    Joël LE CORRE
    @jlecorre_gitlab
    @testpresta2
    The Public Cloud page is there to show you all the information about your OpenStack products and "cloud native" products such as "Managed Kubernetes Service" and "Private Registry". Horizon is OpenStack's dashboard, which provides a web-based user interface to OpenStack services.
    And indeed, you are not able to SSH onto the fully managed instances used by your Kubernetes service.
    testpresta2
    @testpresta2
    It is very strange to put Horizon inside this menu. It is very confusing, because there are instances from the k8s project and pure OpenStack instances.
    Sorry, I did not answer your question: volumes are working fine. Thanks a lot
    Guillaume
    @dracorpg
    You can have from zero to N managed Kubernetes clusters in any Public Cloud project, independently of what else you do in the project
    (for instance, we currently use the OpenStack API to create snapshots/backups of k8s-managed resources)
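    (Since the managed cluster lives in a regular Public Cloud project, those resources are also visible from the OpenStack side; a minimal sketch, assuming openrc.sh has already been sourced.)
```bash
# Cinder volumes backing the PersistentVolumes created by the managed cluster
openstack volume list

# Snapshots taken of those volumes
openstack volume snapshot list
```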
    testpresta2
    @testpresta2
    When I launch Horizon, I do not see my k8s instances. Is that normal?
    Simon Guyennet
    @sguyennet
    @testpresta2 You should select the correct region next to the OVH logo.
    Guillaume
    @dracorpg
    yeah it's very unintuitive, it always somehow seems to default to a region where you have nothing running whatsoever
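    (The same region confusion applies to the OpenStack CLI; a minimal sketch of listing instances per region, assuming openrc.sh is sourced. The region names are only examples.)
```bash
# What you see depends on the selected region, in Horizon and in the CLI alike
OS_REGION_NAME=GRA5 openstack server list
OS_REGION_NAME=SBG5 openstack server list
```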
    tsn77130
    @tsn77130
    Hi guys. Got this error on a CT restart when trying to mount an attached volume:
    MountVolume.SetUp failed for volume "pvc-366c32dd-5920-4fc1-bde1-9e4107ea48ec" : kubernetes.io/csi: mounter.SetUpAt failed to check for STAGE_UNSTAGE_VOLUME capability: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/cinder.csi.openstack.org/csi.sock: connect: connection refused"
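    (A minimal sketch of how one might check whether the Cinder CSI node plugin is actually running on that node before escalating; the pod name below is a placeholder and may differ on OVH's managed clusters.)
```bash
# Look for the CSI driver pods in kube-system (names/labels may differ)
kubectl -n kube-system get pods -o wide | grep -i csi

# Inspect the node-plugin pod scheduled on the affected node (placeholder name)
kubectl -n kube-system describe pod <csi-cinder-nodeplugin-pod>
kubectl -n kube-system logs <csi-cinder-nodeplugin-pod> --all-containers
```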
    Joël LE CORRE
    @jlecorre_gitlab
    Hello @tsn77130
    Could you send me your private ID in private please?
    tsn77130
    @tsn77130
    @jlecorre_gitlab yes
    testpresta2
    @testpresta2
    @sguyennet : Thanks a lot !
    Guillaume
    @dracorpg
    hmm, it looks to me like the CSI implementation has more frequent failures than the old Cinder plugin used to have :/
    Martin Lévesque
    @martinlevesque
    Hi, I resized a PersistentVolumeClaim from 1 Gi to 2 Gi. When I describe the PVC, I get these events:
      Normal  ExternalExpanding        6m17s  volume_expand                               CSI migration enabled for kubernetes.io/cinder; waiting for external resizer to expand the pvc
      Normal  Resizing                 6m17s  external-resizer cinder.csi.openstack.org   External resizer is resizing volume ovh-managed-kubernetes-pyhfu9-pvc-9a032d31-7e3e-4749-b905-172b339f5c77
      Normal  FileSystemResizeRequired 6m16s  external-resizer cinder.csi.openstack.org   Require file system resize of volume on node
    However, when I run kubectl get pvc, the capacity remains 1 Gi:
      NAME       STATUS   VOLUME                                                                    CAPACITY   ACCESS MODES   STORAGECLASS     AGE
      main-pvc   Bound    ovh-managed-kubernetes-pyhfu9-pvc-9a032d31-7e3e-4749-b905-172b339f5c77    1Gi        RWX            cinder-classic   8m58s
    Martin Lévesque
    @martinlevesque
    Also, the kubectl command line reports 1 Gi of capacity, while the administration dashboard (https://ca.ovh.com/manager/public-cloud/) shows 2 Gi.
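    (The FileSystemResizeRequired event suggests the block device has grown but the filesystem is only resized once the volume is mounted again by a pod; a minimal sketch of what one might try, with placeholder names.)
```bash
# The PVC capacity is only updated after the node has resized the filesystem
kubectl describe pvc main-pvc

# Restarting the workload that mounts the PVC forces a remount, letting the
# kubelet finish the filesystem resize (deployment name is a placeholder)
kubectl rollout restart deployment/<deployment-using-the-pvc>

# Then check the reported capacity again
kubectl get pvc main-pvc
```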
    Gilles TOURREAU
    @GillesTourreau
    Hello! How can we subscribe to the beta program to test the Managed Private Registry feature of OVH Public Cloud?
    Guillaume
    @dracorpg
    @GillesTourreau there's a section for it directly in the OVH Manager for your Public Cloud project (also questions about this are better asked in the OVH registry channel)
    Frank Hoeben
    @ijzerbroot
    Hello! I also have a problem with resizing volumes and have logged a ticket for it. When I follow the guide at https://docs.ovh.com/gb/en/kubernetes/resizing-persistent-volumes/ I get an API-compatibility error like so:
      Warning VolumeResizeFailed 9m53s (x20 over 30m) external-resizer cinder.csi.openstack.org resize volume ovh-managed-kubernetes-1utz98-pvc-8b910527-ac3d-45bf-bcd7-5034ef5a98ee failed: rpc error: code = Internal desc = Could not resize volume "8a506e6b-e144-4ce9-88de-554597d9b2d2" to size 5: Expected HTTP response code [202] when accessing [POST https://volume.compute.gra7.cloud.ovh.net/v3/1390ed61f43c42c0b166eb39950d9f89/volumes/8a506e6b-e144-4ce9-88de-554597d9b2d2/action], but got 406 instead
      {"computeFault": {"message": "Version 3.42 is not supported by the API. Minimum is 3.0 and maximum is 3.15.", "code": 406}}
    The PVC then remains stuck in:
      Normal Resizing 3m49s (x22 over 34m) external-resizer cinder.csi.openstack.org External resizer is resizing volume ovh-managed-kubernetes-1utz98-pvc-8b910527-ac3d-45bf-bcd7-5034ef5a98ee
    but no resize is happening, and I don't dare put the volume back in use (it is currently not attached to any pod).
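    (This looks like a server-side API microversion mismatch rather than something fixable from the cluster, but a minimal sketch of how to cross-check the volume from the OpenStack side, assuming openrc.sh is sourced; the volume ID is the one from the error above, the PVC name is a placeholder.)
```bash
# Check the actual size and status of the Cinder volume behind the PV
openstack volume show 8a506e6b-e144-4ce9-88de-554597d9b2d2

# Keep an eye on the resize events on the PVC
kubectl describe pvc <pvc-name>
```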
    Cameron Redmore
    @CameronRedmore
    Hi there, yesterday we added a node selector to a Kubernetes service in our OVH Managed Kubernetes cluster. The service had a persistent volume claim on it, and since we added this node selector the volume refuses to mount (everything was running perfectly fine beforehand). We now get the following error instead:
      OpenStack cloud provider was not initialized properly : stat /etc/kubernetes/cloud-config: no such file or directory
    We created a support ticket yesterday which has yet to receive a response, so I was hoping we might be able to get a faster response here? Many thanks in advance.
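    (For context, a minimal sketch of the kind of nodeSelector involved, with placeholder names and labels, plus how to check which labels the nodes actually carry; this is not a suggested fix, just what the setup looks like.)
```bash
# Labels actually present on the nodes (the selector must match one of these)
kubectl get nodes --show-labels

# Adding a nodeSelector to the Deployment's pod template (name and label are placeholders)
kubectl patch deployment <deployment-name> -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"nodepool":"pool-1"}}}}}'
```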
    Simon Guyennet
    @sguyennet
    @CameronRedmore Could you send me your cluster ID and the node used in the node selector in private please? I will have a look.
    Stephen Moloney
    @stephenmoloney
    I'm just wondering what kind of isolation exists between k8s nodes inside the OVH network?
    Gilles TOURREAU
    @GillesTourreau
    @dracorpg can you give me the link to the Gitter OVH registry channel?
    Enrico
    @enrico1985
    Hello, a Velero question: I followed this guide https://docs.ovh.com/gb/en/kubernetes/backing-up-cluster-with-velero/ but I still cannot save the PVCs during backup creation. There are no errors, but I get these messages:
    msg="label \"failure-domain.beta.kubernetes.io/zone\" is not present on PersistentVolume"
    msg="No volume ID returned by volume snapshotter for persistent volume"
    msg="Persistent volume is not a supported volume type for snapshots, skipping."
    any advice please?
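    (A minimal sketch of what one might check when Velero skips the volumes, based on the messages above; the label and annotation names are the standard Kubernetes/Velero ones, but whether they apply to this particular setup is an assumption.)
```bash
# The snapshotter complains about a missing zone label on the PV: check its labels
kubectl get pv --show-labels

# With the restic integration, volumes must be opted in per pod via an annotation
kubectl -n <namespace> annotate pod <pod-name> backup.velero.io/backup-volumes=<volume-name>

# Inspect what a backup actually captured
velero backup describe <backup-name> --details
```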
    Louis GOUNOT
    @louis-gounot
    Hello,
    What is the difference between the "normal" and "FLEX" instance types (apart from SSD size)?
    Pierre PÉRONNET
    @holyhope
    Hello @louis-gounot,
    No difference, except that you can enlarge disks on Flex instances. But keep in mind that data on the local SSD is volatile (it can be erased at any time during an update), so you should use persistent volumes.
    So normal vs. flex should not be a major choice.
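    (A minimal sketch of a PersistentVolumeClaim using the cinder-classic storage class mentioned earlier in this channel; the name and size are placeholders.)
```bash
# Claim persistent block storage instead of relying on the node's local SSD
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc              # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cinder-classic
  resources:
    requests:
      storage: 10Gi           # placeholder size
EOF
```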
    Guillaume
    @dracorpg
    @stephenmoloney nothing more in terms of isolation than between other Public Cloud VMs, AFAIK
    so yeah, at the mercy of Meltdown/Spectre-like hypervisor-bypassing attacks ... just as with any other shared-hardware, VM-based "cloud" hosting
    (although "bare metal" nodes are coming to Public Cloud & managed Kubernetes this year, which will alleviate this concern)
    Ton Wittenberg
    @yctn
    That would be nice, but it would also be nice if OVH would release the k8s cloud provider.
    Guillaume
    @dracorpg
    and obviously you have to trust your hosting provider that their infra is not compromised (at all levels: hardware, low-level hardware management, OpenStack and k8s control planes)
    Louis GOUNOT
    @louis-gounot
    @holyhope Thanks, I know about PVC/PV and already use them.
    I think I will move to FLEX instances, unless I need node-local transient storage.
    Guillaume
    @dracorpg
    @holyhope isn't the point of FLEX instances to have one unique (smallest common denominator) disk size for an entire "flavor" family, to allow a seamless downgrade of a high-tier instance to a smaller (less CPUs/RAM) member of the same family? (which often has less disk than the "non-flex" higher tiers)
    Louis GOUNOT
    @louis-gounot
    Hello,
    I am getting loads of TCP requests coming from 5.135.168.46.
    According to RIPE info, this is an OVH address with the following comment:
    remarks:         *********************************************************************
    remarks:         * This block is used for internal scanning purposes. If you've been *
    remarks:         * scanned by us, sorry! This is a mistake and your IP ended on our *
    remarks:         * list by error. Please contact us so we can promptly remove it! *
    remarks:         *********************************************************************
    When I say loads, I mean thousands in a few minutes.
    Guillaume
    @dracorpg
    Hey @crazyman_twitter, so what about this CoreOS -> Ubuntu migration for worker nodes? ;)
    Guillaume
    @dracorpg
    (also my cluster upgrade from 1.15 to 1.16 does seem quite unwilling to actually update the nodes, 1 hour after initiating the upgrade process...)
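    (A minimal sketch of how one might check whether the node pool is actually rolling during the upgrade; purely standard kubectl, nothing OVH-specific.)
```bash
# Kubelet version per node shows whether any node has been replaced/upgraded yet
kubectl get nodes -o wide

# Recent events may show pods being evicted as nodes are cordoned and drained during the roll
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp | tail -n 30
```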