    yctn
    @yctn
    pods are not services
    Frédéric Falquéro
    @fred_de_paris_gitlab
    I usually deploy this way. Yesterday it was OK, today it's broken
    Guillaume
    @dracorpg
    We also observed Deployment updates not being honored by the controller (i.e. ReplicaSets & Pods remaining at the previous version) in the last few days
    Thomas Coudert
    @thcdrt
    We have been seeing scheduling problems in Kubernetes 1.11 for a few weeks now. If you are on this version and seem to have scheduling trouble, please upgrade
    Guillaume
    @dracorpg
    this is quite annoying to say the least :) since it happened to a dev cluster, we can afford to create a new one (now that multiple k8s clusters per Public Cloud project are supported) and redeploy to it - however I wouldn't fancy this happening in my production environment :|
    Oh yes indeed, said cluster is still on 1.11 (and the new one on 1.15 works fine)
    Thomas Coudert
    @thcdrt
    As Kubernetes only supports the last 3 versions, I advise you (and all Kubernetes users) to upgrade to at least 1.13
    to benefit from bugfixes and security updates
    Guillaume
    @dracorpg
    I believe the previous manager UI used to require wiping the cluster for such k8s version updates? I see that it now allows rolling upgrades :)
    Thomas Coudert
    @thcdrt
    Yes, before you were forced to reset your cluster using the UI, but for a few weeks now you can upgrade properly :)
    Guillaume
    @dracorpg
    This is great, thanks! Maybe you guys should communicate more about this kind of stuff? Had I known about this combination of "scheduling issues detected on 1.11" + "1.15 available therefore anything <1.13 is now unsupported" + "painless upgrades are now possible" my user experience would have been different ;)
    arduinopepe
    @arduinopepe
    Good Morning
    is there any way to use custom metrics for HorizontalPodAutoscaler? for example HTTP requests?
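    For reference, a minimal sketch of such an HPA, assuming a custom-metrics adapter (for example prometheus-adapter) already exposes an http_requests metric for the target pods; all names here are illustrative:

    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app            # hypothetical name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app          # the Deployment to scale
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Pods
        pods:
          metricName: http_requests    # assumes the adapter serves this custom metric
          targetAverageValue: 100      # scale out above ~100 requests per pod
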
    Thomas Coudert
    @thcdrt
    Yes @dracorpg, I agree with you, our communication was not sufficient. About unsupported versions and upgrades: our PM is working on it, and you should soon see a communication about it.
    I think he posted a message on Gitter when the upgrade became available, but we should have sent an email too.
    Guillaume
    @dracorpg
    Maybe you could have a communication channel through notifications in the manager UI? I didn't check whether such a thing was designed into the new manager
    Thomas Coudert
    @thcdrt
    @dracorpg yes, that could be a way too
    Guillaume
    @dracorpg
    email is fine too though! as long as important info gets through :)
    We are in the process of moving our prod stack from self-managed Docker Swarm on CoreOS on Public Cloud to a managed Kubernetes cluster, so I'll be sure to monitor this gitter channel in the meantime :P
    Any news about the transition to paid LoadBalancers, BTW?
    Thomas Coudert
    @thcdrt
    About the LoadBalancer, work is still in progress on the OVH side, so it will be some time before it becomes a paid feature.
    Guillaume
    @dracorpg
    Alright, thanks! I was wondering how much of the architecture is actually shared with the nextgen IP Load Balancer offering?
    Pierre Péronnet
    @Pierre_Peronnet_twitter
    Hello @dracorpg, are you talking about the LoadBalancer-type Service in Kubernetes, or the OVH IP Load Balancer product?
    Guillaume
    @dracorpg
    Precisely about both :)
    Pierre Péronnet
    @Pierre_Peronnet_twitter
    About the second one, everything is dedicated to a client: dedicated IP + dedicated infrastructure
    The first one is based on the OVH product, but in order to be free, we built a solution with shared components: the same IP Load Balancer infrastructure serves multiple Kubernetes clusters.
    Guillaume
    @dracorpg
    Formulated differently: do/will k8s LoadBalancer services benefit from the same level of HA + DoS protection offered by the non-k8s IPLB product?
    Pierre Péronnet
    @Pierre_Peronnet_twitter
    But with a dedicated IP + DNS per client
    @dracorpg these are based on the same product, so we have pretty much the same quality of service. We only have to rework a part of the solution to offer the same SLA.
    Guillaume
    @dracorpg
    Okay, great, that's what I wanted to know :) I don't mind the sharing too much, as long as the underlying infra is built just as resilient!
    We currently use the "IP Load Balancer" pack1 in front of our Docker Swarm nodes for SSL termination + actual-node-IP obfuscation + general peace of mind, but with the move to managed Kubernetes I'd rather switch to the much more flexible solution of an ingress controller exposed as a LoadBalancer Service and handling the SSL termination (it has become so easy now with Traefik).
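    A minimal sketch of that pattern, assuming a Traefik ingress controller Deployment already exists (names and labels here are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: traefik-ingress    # hypothetical name
      namespace: kube-system
    spec:
      type: LoadBalancer       # the cloud provider provisions the external LB
      selector:
        app: traefik           # must match the ingress controller pods
      ports:
      - name: http
        port: 80
        targetPort: 80
      - name: https
        port: 443
        targetPort: 443
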
    Dennis van der Veeke
    @MrDienns

    I am currently setting up RBAC permissions for all of my services, and I've just restarted my influxdb service without giving it permission to anything (see below)

    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: influxdb
      namespace: my-namespace  # Roles are namespaced; keep it with the ServiceAccount
    rules: []                  # empty rules: this Role grants no API permissions at all
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: influxdb
      namespace: my-namespace
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: influxdb
    subjects:
    - kind: ServiceAccount
      name: influxdb
      namespace: my-namespace
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: influxdb
      namespace: my-namespace

    However, whenever I deploy this InfluxDB instance, it can still read the secrets that are used in the deployment. Is this because Kubernetes automatically gives access to the secrets referenced in the Deployment/StatefulSet, or did I do something wrong?
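    As a side note: secrets referenced in a pod spec are mounted by the kubelet itself, so the pod's own ServiceAccount RBAC does not apply to them; it only restricts calls the pod makes to the API server. One way to check what the ServiceAccount can actually do through the API (standard kubectl, run with admin rights; names taken from the manifests above):

    kubectl auth can-i get secrets \
      --as=system:serviceaccount:my-namespace:influxdb \
      -n my-namespace

    With the empty Role above, this should answer "no".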

    samuel-girard
    @samuel-girard
    Hello. Is there a way to create a Volume Snapshot from a Kubernetes Persistent Volume?
    arduinopepe
    @arduinopepe
    hi
    Guillaume
    @dracorpg
    @samuel-girard they are regular OpenStack Cinder volumes on the backend
    arduinopepe
    @arduinopepe
    hi team
    could someone help me with metrics-server?
    in the metrics-server container I found this error:
    1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:serco-node1: unable to get CPU for container "hpautoscale" in pod hpautoscale/hpautoscale-647478dcc-kmnzg on node "51.68.41.164", discarding data: missing cpu usage metric
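    A quick way to check whether metrics-server is serving CPU data at all (standard kubectl commands; the namespace comes from the error above):

    kubectl top nodes
    kubectl top pods -n hpautoscale
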
    Joël LE CORRE
    @jlecorre_gitlab
    Hello @samuel-girard
    Indeed, as @dracorpg said, you can create snapshots of your volumes through your OpenStack tenant, but this isn't recommended.
    Snapshots can block volume removal and get the whole cluster stuck.
    To make backups of your volumes, you can check these links:
    https://github.com/heptio/velero
    https://github.com/pieterlange/kube-backup
    Vincent DAVY
    @vincentdavy_gitlab
    do you plan to support this feature some day?
    Guillaume
    @dracorpg
    Oh damn! I didn't think acting at OpenStack Cinder level (for non-destructive actions) would be that dangerous... good thing this was asked then
    Joël LE CORRE
    @jlecorre_gitlab
    @dracorpg Oh my bad, I meant that a snapshot is blocking only for the removal of a Kubernetes volume.
    (I have edited my previous post)
    Guillaume
    @dracorpg
    @jlecorre_gitlab by removing do you mean complete deletion of the volume (which is not a very common use case for persisted data) or even detaching it from a node?
    Joël LE CORRE
    @jlecorre_gitlab
    @dracorpg Yes, and it's not such a rare use case. If you used a cluster to test some deployments, you can, for instance, reset it to remove all traces of your previous work.
    arduinopepe
    @arduinopepe
    help me ??
    samuel-girard
    @samuel-girard
    Thanks @dracorpg @jlecorre_gitlab for your answer
    My need here is to have periodic backups of a database, so I would like to snapshot the database Persistent Volume (possible from the OpenStack API according to what you said), but is it possible to create a K8s Persistent Volume from a Volume Snapshot (when I need to restore my database)? If I understand correctly, the Volume Snapshot is not mounted in my Kubernetes cluster as a Persistent Volume
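    One possible approach, sketched under the assumption that you first restore the snapshot to a new Cinder volume via the OpenStack API: pre-provision a PersistentVolume that points at the restored volume using the in-tree cinder volume source (the name and volume ID below are placeholders):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: restored-db-pv               # hypothetical name
    spec:
      capacity:
        storage: 10Gi                    # must match the restored volume's size
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      cinder:
        volumeID: "<id-of-cinder-volume-restored-from-snapshot>"
        fsType: ext4

    A PersistentVolumeClaim bound to this PV can then be mounted by the database pod as usual.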
    arduinopepe
    @arduinopepe
    any help ?
    Pierre Péronnet
    @Pierre_Peronnet_twitter
    Hello @arduinopepe, to be honest, this is the first time I've seen that error :s
    arduinopepe
    @arduinopepe
    my autoscaler does not work