    Thomas Coudert
    @thcdrt
    Yep, that's planned, we have it only on clusters for now
    Guillaume
    @dracorpg
    @thcdrt regarding k8s node deployment delay, I believe the new Public Cloud manager UI is simply updated much more slowly than the actual node state
    I created a node yesterday; it was showing up in kubectl get node (and actually starting to schedule pods) about 3~4 min later, but the UI was still showing "Installation in progress" like... 10 min later?
    Michał Frąckiewicz
    @SystemZ
    I've seen it's like 1 min lag max during installation
    Guillaume
    @dracorpg
    anyway the "OpenStack Nova instance ready" to "Kubernetes node ready" delay overhead seems minimal - that's really nice
    Frédéric Falquéro
    @fred_de_paris_gitlab
    I have something strange. I deploy using kubectl apply -f xxxx .... I have: deployment.apps/xxxxx configured
    service/xxxxx unchanged
    yctn
    @yctn
    this means nothing changed in the service, therefore it did nothing
    Frédéric Falquéro
    @fred_de_paris_gitlab
    but if I look at my pods, nothing happened
    but the deployment did change (the last image version changed as expected)
    yctn
    @yctn
    then show the full yaml maybe
    pods are not services
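
    (As a side note, a quick sketch of how to check whether a Deployment rollout actually progressed - the xxxxx names are placeholders:)

    kubectl rollout status deployment/xxxxx   # waits for and reports rollout progress
    kubectl get replicaset -o wide            # a new ReplicaSet should appear carrying the new image
    kubectl describe deployment xxxxx         # shows the current image and recent events
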
    Frédéric Falquéro
    @fred_de_paris_gitlab
    I usually deploy this way. Yesterday it was OK, today it's KO
    Guillaume
    @dracorpg
    We also observed Deployment updates not being honored by the controller (i.e. ReplicaSets & Pods remaining at the previous version) in the last few days
    Thomas Coudert
    @thcdrt
    We have been seeing scheduling problems in Kubernetes 1.11 for a few weeks now. If you are on this version and seem to have scheduling trouble, please upgrade
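
    (For reference, a quick way to check which Kubernetes version a cluster and its nodes are running:)

    kubectl version --short   # client and server (control plane) versions
    kubectl get nodes         # the VERSION column shows each node's kubelet version
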
    Guillaume
    @dracorpg
    this is quite annoying to say the least :) since it happened to a dev cluster, we can afford to create a new one (now that multiple k8s clusters per Public Cloud project are supported) and redeploy to it - however I wouldn't fancy this happening in my production environment :|
    Oh yes indeed, said cluster is still on 1.11 (and the new one on 1.15 works fine)
    Thomas Coudert
    @thcdrt
    As Kubernetes only supports the last 3 versions, I advise you (and all Kubernetes users) to upgrade to at least 1.13
    to benefit from bugfixes and security updates
    Guillaume
    @dracorpg
    I believe the previous manager UI used to require wiping the cluster for such k8s version updates? I see that it now allows rolling upgrades :)
    Thomas Coudert
    @thcdrt
    Yes, before you were forced to reset your cluster using the UI, but for a few weeks now you can upgrade properly :)
    Guillaume
    @dracorpg
    This is great, thanks! Maybe you guys should communicate more about this kind of stuff? Had I known about this combination of "scheduling issues detected on 1.11" + "1.15 available therefore anything <1.13 is now unsupported" + "painless upgrades are now possible" my user experience would have been different ;)
    arduinopepe
    @arduinopepe
    Good Morning
    is there any way to use custom metrics for HorizontalPodAutoscaler? for example HTTP requests?
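
    (For illustration, a custom-metrics HPA looks roughly like the sketch below; it assumes a custom metrics adapter such as prometheus-adapter is installed and exposes the metric through the custom.metrics.k8s.io API - the names and the metric itself are placeholders:)

    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app                # placeholder
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app              # placeholder Deployment to scale
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Pods
        pods:
          metricName: http_requests_per_second   # must be served by the metrics adapter
          targetAverageValue: "100"
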
    Thomas Coudert
    @thcdrt
    Yes @dracorpg, I agree with you, our communication was not enough. About unsupported versions and upgrades, I know; our PM is working on it, and you should soon have a communication about it.
    I think he posted a message on Gitter when the upgrade became available, but we should have sent a mail too.
    Guillaume
    @dracorpg
    Maybe you could have a communication channel through notifications in the manager UI? I didn't pay attention to whether such a thing was designed into the new manager
    Thomas Coudert
    @thcdrt
    @dracorpg yes it could be a way too
    Guillaume
    @dracorpg
    email is fine too though! as long as important info gets through :)
    We are in the process of moving our prod stack from self-managed Docker Swarm on CoreOS on Public Cloud to a managed Kubernetes cluster, so I'll be sure to monitor this gitter channel in the meantime :P
    Any news about the transition to paid LoadBalancers, BTW?
    Thomas Coudert
    @thcdrt
    About the LoadBalancer, work is still in progress on the OVH side, so it will be some time before it becomes paid.
    Guillaume
    @dracorpg
    Alright, thanks! I was wondering how much of the architecture is actually shared with the nextgen IP Load Balancer offering?
    Pierre Péronnet
    @Pierre_Peronnet_twitter
    Hello @dracorpg, are you talking about the LoadBalancer-type Service in Kubernetes, or the IP Load Balancer OVH product?
    Guillaume
    @dracorpg
    Precisely about both :)
    Pierre Péronnet
    @Pierre_Peronnet_twitter
    About the second one, everything is dedicated to a client: dedicated IP + dedicated infrastructure
    The first one is based on the OVH product, but in order to be free, we built a solution with shared components: the same IP LoadBalancer infrastructure serves multiple Kubernetes clusters.
    Guillaume
    @dracorpg
    Formulated differently: do/will k8s LoadBalancer services benefit from the same level of HA + DoS protection offered by the non-k8s IPLB product?
    Pierre Péronnet
    @Pierre_Peronnet_twitter
    But a dedicated IP + DNS per client
    @dracorpg these are based on the same product, so we have pretty much the same Quality of Service. We only have to rework a part of the solution to have the same SLA.
    Guillaume
    @dracorpg
    Okay, great, that's what I wanted to know :) I don't mind the sharing too much, as long as the underlying infra is built to be just as resilient!
    We currently use the "IP Load Balancer" pack1 in front of our Docker Swarm nodes for SSL termination + actual-node-IP obfuscation + general peace of mind, but switching to managed Kubernetes I'd rather move to the much more flexible solution of an ingress controller exposed as a LoadBalancer Service that handles the SSL termination (it's become so easy now with traefik).
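
    (A sketch of what that setup looks like, assuming a traefik ingress controller whose pods carry an app: traefik label - name, namespace and labels are placeholders:)

    apiVersion: v1
    kind: Service
    metadata:
      name: traefik
      namespace: kube-system      # placeholder namespace
    spec:
      type: LoadBalancer          # asks the cloud provider for a load balancer in front of the nodes
      selector:
        app: traefik              # must match the ingress controller pods' labels
      ports:
      - name: http
        port: 80
        targetPort: 80
      - name: https
        port: 443
        targetPort: 443
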
    Dennis van der Veeke
    @MrDienns

    I am currently setting up RBAC permissions for all of my services, and I've just restarted my influxdb service without giving it permission to anything (see below)

    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: influxdb
    rules: []
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: influxdb
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: influxdb
    subjects:
    - kind: ServiceAccount
      name: influxdb
      namespace: my-namespace
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: influxdb
      namespace: my-namespace

    however, whenever I deploy this influxdb instance, it can still read the secrets that are used in the deployment. Is this because kubernetes automatically gives access to the secret that is referenced in the deployment/statefulset, or did I do something wrong?
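
    (A sketch of how to check what the ServiceAccount itself may do against the API - note this only covers API access, which is what the Role above governs; secrets referenced in a pod spec are mounted by the kubelet independently of the pod's ServiceAccount RBAC:)

    kubectl auth can-i get secrets \
      --as=system:serviceaccount:my-namespace:influxdb \
      -n my-namespace
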

    samuel-girard
    @samuel-girard
    Hello. Is there a way to create a Volume Snapshot from a Kubernetes Persistent Volume?
    arduinopepe
    @arduinopepe
    hi
    Guillaume
    @dracorpg
    @samuel-girard they are regular OpenStack Nova volumes on the back end
    arduinopepe
    @arduinopepe
    hi team
    could someone help me with metrics-server?
    in the metrics-server container I found this error
    1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:serco-node1: unable to get CPU for container "hpautoscale" in pod hpautoscale/hpautoscale-647478dcc-kmnzg on node "51.68.41.164", discarding data: missing cpu usage metric
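
    (A quick sketch of how to check whether the metrics pipeline returns any data at all:)

    kubectl top nodes                  # per-node CPU/memory as reported by metrics-server
    kubectl top pods -n hpautoscale    # per-pod usage in the namespace from the error above
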
    Joël LE CORRE
    @jlecorre_gitlab
    Hello @samuel-girard
    Indeed, as @dracorpg said, you can create snapshots of your volumes through your OpenStack tenant, but this isn't recommended.
    Snapshots can block volume removal actions and get the whole cluster stuck.
    To make backups of your volumes, you can check these links if you want:
    https://github.com/heptio/velero
    https://github.com/pieterlange/kube-backup
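
    (For example, with velero the basic flow is roughly the following, assuming the velero CLI and server components are installed and configured with an object-storage backend - names are placeholders:)

    velero backup create my-ns-backup --include-namespaces my-namespace
    velero backup get                                   # list backups and their status
    velero restore create --from-backup my-ns-backup    # restore when needed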