    Guillaume
    @dracorpg
    Okay, great, that's what I wanted to know :) I don't mind sharing too much, as long as the underlying infra is built just as resilient!
    We currently use the "IP Load Balancer" pack in front of our Docker Swarm nodes for SSL termination + actual-node-IP obfuscation + general peace of mind, but when switching to managed Kubernetes I'd rather move to the much more flexible solution of an ingress controller exposed as a LoadBalancer and handling the SSL termination (it's become so easy now with traefik).
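
    For reference, a minimal sketch of what exposing an ingress controller as a LoadBalancer Service can look like (the traefik namespace, labels and ports below are illustrative, not an OVH-specific recipe):

    apiVersion: v1
    kind: Service
    metadata:
      name: traefik-ingress
      namespace: traefik              # illustrative namespace
    spec:
      type: LoadBalancer              # the cloud provider provisions the external LB / IP
      selector:
        app: traefik                  # must match the ingress controller pod labels
      ports:
      - name: http
        port: 80
        targetPort: 80
      - name: https
        port: 443
        targetPort: 443               # TLS is terminated by traefik itself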
    Dennis van der Veeke
    @MrDienns

    I am currently setting up RBAC permissions for all of my services, and I've just restarted my influxdb service without giving it permission to anything (see below)

    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: influxdb
    rules: []
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: influxdb
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: influxdb
    subjects:
    - kind: ServiceAccount
      name: influxdb
      namespace: my-namespace
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: influxdb
      namespace: my-namespace

    however, whenever I deploy this influxdb instance, it can still read the secrets that are used in the deployment. Is this because kubernetes automatically gives access to the secret that is referenced in the deployment/statefulset, or did I do something wrong?

    samuel-girard
    @samuel-girard
    Hello. Is there a way to create a Volume Snapshot from a Kubernetes Persistent Volume?
    arduinopepe
    @arduinopepe
    hi
    Guillaume
    @dracorpg
    @samuel-girard they are regular OpenStack Nova volumes on the back end
    arduinopepe
    @arduinopepe
    hi team
    could someone help me with metrics-server?
    in the metrics-server container I found this error
    1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:serco-node1: unable to get CPU for container "hpautoscale" in pod hpautoscale/hpautoscale-647478dcc-kmnzg on node "51.68.41.164", discarding data: missing cpu usage metric
    Joël LE CORRE
    @jlecorre_gitlab
    Hello @samuel-girard
    Indeed, as @dracorpg said, you can create snapshots of your volumes through your OpenStack tenant, but this isn't recommended.
    Snapshots can block volume removal actions and get the whole cluster stuck.
    To back up your volumes, you can check these links if you want:
    https://github.com/heptio/velero
    https://github.com/pieterlange/kube-backup
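    As a rough illustration of the Velero approach linked above, a hedged sketch of a periodic backup (assuming a Velero v1.x install in the velero namespace; the name, schedule, namespace and TTL below are placeholders):

    apiVersion: velero.io/v1
    kind: Schedule
    metadata:
      name: nightly-volumes          # illustrative name
      namespace: velero              # Velero's default install namespace
    spec:
      schedule: "0 2 * * *"          # cron syntax: every night at 02:00
      template:
        includedNamespaces:
        - my-namespace               # illustrative namespace to back up
        ttl: 720h0m0s                # keep backups for 30 days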
    Vincent DAVY
    @vincentdavy_gitlab
    do you plan to support this feature one day?
    Guillaume
    @dracorpg
    Oh damn! I didn't think acting at OpenStack Cinder level (for non-destructive actions) would be that dangerous... good thing this was asked then
    Joël LE CORRE
    @jlecorre_gitlab
    @dracorpg Oh, my bad, I meant that a snapshot only blocks the removal action of a Kubernetes volume.
    (I have edited my previous post)
    Guillaume
    @dracorpg
    @jlecorre_gitlab by removing, do you mean complete deletion of the volume (which is not a very common use case for persisted data) or even detaching it from a node?
    Joël LE CORRE
    @jlecorre_gitlab
    @dracorpg Yes, and it's not such a rare use case. If you use a cluster to test some deployments, you can, for instance, reset it to remove all traces of your previous work.
    arduinopepe
    @arduinopepe
    help me ??
    samuel-girard
    @samuel-girard
    Thanks @dracorpg @jlecorre_gitlab for your answers
    My need here is to have periodic backups of a database, so I would like to snapshot the database Persistent Volume (possible from the OpenStack API according to what you said). But is it possible to create a K8s Persistent Volume from a Volume Snapshot (when I need to restore my database)? If I understand correctly, the Volume Snapshot is not mounted in my Kubernetes cluster as a Persistent Volume.
    arduinopepe
    @arduinopepe
    any help ?
    Pierre Péronnet
    @Pierre_Peronnet_twitter
    Hello @arduinopepe, to be honest, this is the first time I've seen that error :s
    arduinopepe
    @arduinopepe
    my autoscaling does not work
    could you tell me what the default deployment is that I should find in a fresh setup?
    Pierre Péronnet
    @Pierre_Peronnet_twitter
    Metrics server deployment? 2 sec
    arduinopepe
    @arduinopepe
    I have deleted it :-8
    arduinopepe
    @arduinopepe
    must I deploy it again?
    E0912 14:05:36.662163 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:serco-node1: unable to fetch metrics from Kubelet serco-node1 (serco-node1): Get https://serco-node1:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup serco-node1 on 10.3.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:serco-node2: unable to fetch metrics from Kubelet serco-node2 (serco-node2): Get https://serco-node2:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup serco-node2 on 10.3.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:serco-node3: unable to fetch metrics from Kubelet serco-node3 (serco-node3): Get https://serco-node3:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup serco-node3 on 10.3.0.10:53: no such host]
    Thomas Coudert
    @thcdrt
    @arduinopepe why did you delete metrics-server?
    arduinopepe
    @arduinopepe
    to reinitialize
    dial tcp: lookup serco-node1 on 10.3.0.10:53: no such host]
    why do I have this error?
    Guillaume
    @dracorpg
    @samuel-girard my need is exactly the same:
    Right now we manage our Docker Swarms on Public Cloud projects; persistent volumes are bind-mounted into the containers from OpenStack Cinder block volumes attached to the corresponding hosts. We have an independent periodic script that snapshots these volumes through the OpenStack API.
    We are going to migrate from these self-managed Swarms to OVH Managed Kubernetes clusters. Since the PersistentVolumes are backed 1:1 by Cinder volumes, I'd like to keep using the snapshotting script as long as a better/simpler/more robust/more elegant solution doesn't exist (k8s-level snapshotting?)
    Guillaume
    @dracorpg

    [continued] when the need comes for restoring data to a new PersistentVolume, my general idea is the following :

    1. you should be able to create a new OpenStack Cinder volume from an existing volume snapshot (neither from the OVH manager nor from Horizon, however, so through API only if possible at all)
    2. you should then be able to create a PersistentVolume that is backed by this Cinder volume <=== ????
    3. include the proper selector in your PersistentVolumeClaim so it ends up bound to said PersistentVolume instead of getting a blank one dynamically provisioned for it

    I'm not entirely sure 1 can be done, and really not sure 2 can be done now that I'm writing it down (a rough sketch of 2 follows below)... your input @jlecorre_gitlab ?
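
    An unverified sketch of what "2." might look like with the legacy in-tree Cinder plugin, assuming the Cinder volume from "1." already exists (the volume UUID, size and labels are placeholders, and whether the managed offering lets you hand it a pre-existing volume like this is exactly the open question):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: restored-db-data                  # illustrative name
      labels:
        restored-from: db-snapshot            # illustrative label, used by the PVC selector below
    spec:
      capacity:
        storage: 10Gi                         # should match the size of the Cinder volume
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      cinder:                                 # legacy in-tree OpenStack Cinder volume source
        volumeID: <cinder-volume-uuid>        # placeholder: UUID of the volume created from the snapshot
        fsType: ext4
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: db-data
      namespace: my-namespace                 # illustrative namespace
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: ""                    # empty class to avoid dynamic provisioning
      selector:
        matchLabels:
          restored-from: db-snapshot
      resources:
        requests:
          storage: 10Gi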

    samuel-girard
    @samuel-girard
    @dracorpg From what I found and the information I got here, I would like to do something similar for our project: database data in Persistent Volumes and creation of snapshots using the OpenStack API (create Volume Snapshots).
    The only thing I cannot figure out is how to restore a Volume Snapshot back into a Persistent Volume available in K8s (we also need to restore the database data in some way).
    I haven't tried the OpenStack API yet (waiting for the project user creation to complete...), do you know if it is possible to restore the data from a volume snapshot back into a persistent volume?
    We seem to have the same questions regarding what can be done so far
    Thomas Coudert
    @thcdrt
    @arduinopepe because no Service with that name exists, in this namespace or in any other one
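    If the node hostnames really aren't resolvable from the cluster DNS, a common workaround is to make metrics-server contact the kubelets by IP instead of by hostname. A minimal sketch (the image tag is illustrative, and it assumes the usual upstream metrics-server ServiceAccount/RBAC is already applied):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: metrics-server
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
      template:
        metadata:
          labels:
            k8s-app: metrics-server
        spec:
          serviceAccountName: metrics-server   # assumes the upstream RBAC manifests are in place
          containers:
          - name: metrics-server
            image: k8s.gcr.io/metrics-server-amd64:v0.3.6   # illustrative version
            args:
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname   # try node IPs before DNS names
            - --kubelet-insecure-tls   # only if the kubelet serving certs do not cover the node IPs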
    samuel-girard
    @samuel-girard
    Is it not possible, from the OpenStack API, to revert a volume's data back to the snapshot state?
    Guillaume
    @dracorpg
    yep, but it won't be a new OpenStack/k8s volume - you'll just roll back the original volume to the snapshot state :)
    (which, I do agree, is probably enough for most cases like accidental data deletion or corruption from the software side)
    my "1.", creating a fresh new openstack cinder volume from any snapshot (or volume, for that matter) is indeed possible https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=create-a-snapshot-detail,create-a-volume-detail#create-a-volume
    but I'm afraid the "2." is not
    Dennis van der Veeke
    @MrDienns

    could someone look into my RBAC query from earlier? just trying to understand how Kubernetes works in a few specific scenarios (I'll copy-paste it again here for ease of reading):

    I am currently setting up RBAC permissions for all of my services, and I've just restarted my influxdb service without giving it permission to anything (see below)

    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: influxdb
    rules: []
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: influxdb
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: influxdb
    subjects:
    - kind: ServiceAccount
      name: influxdb
      namespace: my-namespace
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: influxdb
      namespace: my-namespace

    however, whenever I deploy this influxdb instance, it can still read the secrets that are used in the deployment. Is this because kubernetes automatically gives access to the secret that is referenced in the deployment/statefulset, or did I do something wrong?

    Frédéric Falquéro
    @fred_de_paris_gitlab
    How long does an upgrade take? One of my clusters has remained in the 'mise à jour' (updating) state for a while...
    Christian
    @zeeZ
    @MrDienns RBAC rules are additive, do you have any RoleBindings or ClusterRoleBindings giving that influxdb ServiceAccount more permissions?
    also, if the secret is used somewhere in the deployment it can read it from there, e.g. from the environment or as a file, but it shouldn't be able to query it via the API
    unless you're running something like minikube with RBAC disabled, I guess
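    To illustrate the distinction: a secret referenced in the pod spec is injected by the kubelet regardless of the pod's ServiceAccount permissions, whereas reading secrets through the API would need an explicit rule, something like this sketch (the secret name is illustrative):

    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1beta1   # same apiVersion as the manifests above
    metadata:
      name: influxdb
      namespace: my-namespace
    rules:
    - apiGroups: [""]                      # "" means the core API group
      resources: ["secrets"]
      verbs: ["get"]
      resourceNames: ["influxdb-secret"]   # illustrative secret name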
    Frédéric Falquéro
    @fred_de_paris_gitlab
    the upgrade is over but I can't delete a pod... Unable to connect to the server: net/http: TLS handshake timeout
    Chaya56
    @Chaya56
    Hello !
    Joël LE CORRE
    @jlecorre_gitlab
    Hello @fred_de_paris_gitlab, can we discuss in private, please?
    Hi @Chaya56