    Thomas Coudert
    @thcdrt
    @arduinio why did you delete metrics-server?
    arduinopepe
    @arduinopepe
    to reinitialize
    dial tcp: lookup serco-node1 on 10.3.0.10:53: no such host]
    why do I have this error?
    Guillaume
    @dracorpg
    @samuel-girard my need is exactly the same:
    Right now we manage our Docker Swarms on Public Cloud projects, persistent volumes are bind-mounted into the containers from OpenStack Cinder block volumes attached to the corresponding hosts. We have an independent periodic script that snapshots these volumes through the OpenStack API.
    We are going to migrate from these self-managed Swarms to OVH Managed Kubernetes clusters. Since the PersistentVolumes are backed 1:1 by Cinder volumes, I'd like to keep using the snapshotting script as long as a better/simpler/more robust/more elegant solution doesn't exist (k8s-level snapshotting?)
    Guillaume
    @dracorpg

    [continued] when the need comes for restoring data to a new PersistentVolume, my general idea is the following:

    1. you should be able to create a new OpenStack Cinder volume from an existing volume snapshot (neither from the OVH manager nor from Horizon, however, so through API only if possible at all)
    2. you should then be able to create a PersistentVolume that is backed by this Cinder volume <=== ????
    3. include the proper selector in your PersistentVolumeClaim so it ends up bound to said PersistentVolume instead of getting a blank one dynamically provisioned for it

    I'm not entirely sure 1 can be done, and really not sure 2 can be done now that I'm writing it down... your input, @jlecorre_gitlab?
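
    For what it's worth, a minimal sketch of what step 2 might look like with the in-tree Cinder plugin (volume ID, name and size are placeholders; whether the managed cluster accepts such a hand-made PersistentVolume is exactly the open question):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: restored-from-snapshot        # illustrative name
    spec:
      capacity:
        storage: 10Gi                     # must match the size of the Cinder volume
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      cinder:
        volumeID: "<uuid-of-the-cinder-volume-created-from-the-snapshot>"
        fsType: ext4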

    samuel-girard
    @samuel-girard
    @dracorpg From what I found and the information I got here, I would like to do something similar for our project: database data in Persistent Volumes and creation of snapshots using OpenStack API (create Volume Snapshots)
    The only thing I cannot get is how to restore a Volume Snapshot back into a Persistent Volume available in K8S (we also need to restore the database data in some way).
    I haven't tried OpenStack API yet (waiting for the project user creation to complete...), do you know if it is possible to restore the data from a volume snapshot back into a persistent volume?
    We seem to have the same questions regarding what can currently be done
    Thomas Coudert
    @thcdrt
    @arduinio because no service exists with this name, or it's in another namespace
    samuel-girard
    @samuel-girard
    Is it not possible from the OpenStack API to revert a volume's data back to the snapshot state?
    Guillaume
    @dracorpg
    yep, but it won't be a new openstack/k8s volume - you'll just rollback the original volume to the snapshot state :)
    (which, I do agree, is probably enough for most cases like accidental data deletion or corruption from the software side)
    my "1.", creating a fresh new openstack cinder volume from any snapshot (or volume, for that matter) is indeed possible https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=create-a-snapshot-detail,create-a-volume-detail#create-a-volume
    but I'm afraid the "2." is not
    Dennis van der Veeke
    @MrDienns

    could someone look into my RBAC query from earlier? just trying to understand how kubernetes works in a few specific scenarios (I'll copy-paste it again here for ease of reading);

    I am currently setting up RBAC permissions for all of my services, and I've just restarted my influxdb service without giving it permission to anything (see below)

    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: influxdb
    rules: []
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: influxdb
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: influxdb
    subjects:
    - kind: ServiceAccount
      name: influxdb
      namespace: my-namespace
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: influxdb
      namespace: my-namespace

    however, whenever I deploy this influxdb instance, it can still read the secrets that are used in the deployment. Is this because kubernetes automatically gives access to the secret that is referenced in the deployment/statefulset, or did I do something wrong?

    Frédéric Falquéro
    @fred_de_paris_gitlab
    How long does an upgrade take? One of my clusters has remained in the 'mise à jour' (updating) state for a while...
    Christian
    @zeeZ
    @MrDienns rbac rules are additive, do you have any rolebindings or clusterrolebindings giving that influxdb serviceaccount more permissions?
    also if the secret is used somewhere in the deployment it can read from there, like environment or as a file, but it shouldn't be able to query via the api
    unless you're running something like minikube with rbac disabled I guess
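    In other words, a secret wired into the pod like below (purely illustrative names) is injected by the kubelet as an env var and a mounted file, so no API call is made from the pod and no RBAC check is involved:

    apiVersion: v1
    kind: Pod
    metadata:
      name: influxdb-example              # illustrative only
      namespace: my-namespace
    spec:
      serviceAccountName: influxdb
      containers:
        - name: influxdb
          image: influxdb:1.7
          env:
            - name: INFLUXDB_ADMIN_PASSWORD     # injected by the kubelet, not fetched via the API
              valueFrom:
                secretKeyRef:
                  name: influxdb-credentials    # hypothetical secret name
                  key: admin-password
          volumeMounts:
            - name: credentials
              mountPath: /etc/influxdb/secrets
              readOnly: true
      volumes:
        - name: credentials
          secret:
            secretName: influxdb-credentials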
    Frédéric Falquéro
    @fred_de_paris_gitlab
    the upgrade is over but I can't delete a pod... Unable to connect to the server: net/http: TLS handshake timeout
    Chaya56
    @Chaya56
    Hello !
    Joël LE CORRE
    @jlecorre_gitlab
    Hello @fred_de_paris_gitlab, can we discuss this in private, please?
    Hi @Chaya56
    Chaya56
    @Chaya56
    I want to use a PVC with ReadWriteMany mode (to share a snapshot repo between Elasticsearch nodes) but it doesn't seem to be available in OVH Kubernetes:
    ReadWriteMany: the volume can be mounted read/write by many nodes. This access mode is not available for PersistentVolume resources backed by Compute Engine persistent disks.
    Joël LE CORRE
    @jlecorre_gitlab
    Indeed, this feature isn't available currently.
    Chaya56
    @Chaya56
    Does anyone know a way to accomplish that? (sharing a directory read/write between pods?)
    samuel-girard
    @samuel-girard
    @dracorpg Yes, restoring a snapshot to the original volume is just what we need for now.
    But the revert action is not yet available... (it needs API v3.40 while OVH is still at 3.15)
    Is there any plan to upgrade OpenStack soon?
    Dennis van der Veeke
    @MrDienns
    @zeeZ alright, that explains why. thank you for the info
    samuel-girard
    @samuel-girard
    @dracorpg Let me know if you find a way to create a K8S Persistent Volume from a Cinder Volume (your n°2), as it is now the only solution I see to reach our goal
    Ghost
    @ghost~5bc6039cd73408ce4faba551
    Hello,
    I wanted to know if it is possible to put a label on a node when it is added via the OVH API (at the time it is added)?
    Thomas Coudert
    @thcdrt
    Hello @bmagic, unfortunately it's not possible
    Dennis van der Veeke
    @MrDienns
    hello, is it possible to create a volume with ReadWriteMany access mode? I seem to only be able to create ReadWriteOnce volumes, as the ReadWriteMany ones stay in Pending and no volume is created... I'm trying to set up ACME for Traefik and would like a volume mount so that my Traefik DaemonSet can access the ACME certificate files from every node. Is this possible, or do I need to use some kind of key-value service instead?
    Guillaume
    @dracorpg
    nope, no ReadWriteMany available with the current OpenStack Cinder version AFAIK
    (yeah, sadly this means Traefik on one node only)
    if only traefik was able to use k8s Secrets for its SSL certificates!
    indeed Traefik 1.x can use a KV store instead of a plain file, though https://docs.traefik.io/user-guide/kv-config/#store-configuration-in-key-value-store
    Thomas Coudert
    @thcdrt
    @MrDienns indeed you can't; you can find more details here: https://docs.ovh.com/gb/en/kubernetes/setting-up-a-persistent-volume/#access-modes
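    So claims on these clusters have to stick to ReadWriteOnce, e.g. something along these lines (name and size are illustrative); asking for ReadWriteMany just leaves the claim Pending, as observed above:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: acme-data                     # illustrative name
    spec:
      accessModes:
        - ReadWriteOnce                   # ReadWriteMany is not supported here
      resources:
        requests:
          storage: 1Gi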
    Guillaume
    @dracorpg
    is making RWmany available at k8s-level on track, though, @thcdrt? AFAIK Cinder supports multi-attached volumes since the Queens release (when running on a backend capable of it, but I guess Ceph allows it? ohmygod so many stacked infrastructural middleware...)
    Guillaume
    @dracorpg
    @samuel-girard for curiosity's sake I tried changing a PersistentVolume's cinder volumeID manually to make it point to another Cinder volume, but this is not a config item that can be changed :) ... but you can absolutely manually create a PersistentVolume that points to a Cinder volume created previously (through OpenStack Horizon in my case)
    Ghost
    @ghost~5bc6039cd73408ce4faba551
    Hello, I have an issue with some socket connections between my pods on certain nodes.
    The Wormhole message is:
    Error closing connection: close tcp IP:40272->IP2:10250: use of closed network connection"
    Joël LE CORRE
    @jlecorre_gitlab
    Hello @bmagic
    How did you get this error?
    Guillaume
    @dracorpg

    @samuel-girard ... so once you have manually created a PersistentVolume that points to the Cinder volume of your choice, you can get a PersistentVolumeClaim to bind to this manually-created PersistentVolume through its spec.volumeName attribute (I tried to do it more elegantly with label matching using spec.selector.matchLabels, but selectors are apparently not supported in the k8s<->Cinder setup OVH uses)
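
    For the record, the claim side of that approach could look roughly like this (assuming a manually-created PersistentVolume named restored-from-snapshot; all names are illustrative):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: restored-data
      namespace: my-namespace             # illustrative
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi                   # must fit within the PV's capacity
      storageClassName: ""                # avoid dynamic provisioning of a blank volume
      volumeName: restored-from-snapshot  # bind explicitly to the manually-created PV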

    @jlecorre_gitlab is this whole approach something you'd advise against for some reason I'm missing?

    Dennis van der Veeke
    @MrDienns
    @dracorpg so, reading the traefik ACME docs, they seem to recommend Consul from what I can see. Is this something I need to deploy as a daemonset (just like traefik), or is a single deployment/stateful set enough?
    Guillaume
    @dracorpg
    I couldn't give very good advice as I've never worked with it, but I don't see why you'd need more than one instance
    Dennis van der Veeke
    @MrDienns
    alright thank you, ill try it out
    Guillaume
    @dracorpg
    note that you don't have to deploy traefik as a DaemonSet either... it's just a matter of how much "true HA" you need :)
    Nicolas Steinmetz
    @nsteinmetz
    and if you need volumes for certificates, you can delegate them to cert-manager
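    e.g. roughly like this with cert-manager's Certificate resource (the API group/version depends on the cert-manager release, and all names here are illustrative); the issued certificate ends up in a plain Secret instead of a volume:

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: example-tls
      namespace: my-namespace
    spec:
      secretName: example-tls             # cert-manager stores the issued cert/key in this Secret
      dnsNames:
        - example.mydomain.tld
      issuerRef:
        name: letsencrypt-prod            # a previously configured (Cluster)Issuer
        kind: ClusterIssuer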
    Dennis van der Veeke
    @MrDienns
    oh, really? okay. yeah here and there i'm still figuring out how kubernetes nodes connect to each other