    Christian
    @zeeZ
    The official RBAC doc has it in an example. Note that it doesn't work for all verbs (or didn't last I checked), such as list
    Dennis van der Veeke
    @MrDienns
    as long as I can prevent a pod from reading any other secret than the one it needs, it's good :)
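For context, a minimal sketch of the pattern discussed above: a Role scoped to a single named secret via resourceNames, bound to the pod's ServiceAccount. The names (my-app, my-app-secret, the default namespace) are illustrative, and, as noted above, resourceNames works for get but not for list.

```sh
# Sketch only: restrict a ServiceAccount to reading one named secret.
# All names here are made up for illustration.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-app-secret-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["my-app-secret"]
  verbs: ["get"]            # resourceNames has no effect on list/watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-secret-reader
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-app              # the ServiceAccount the pod runs as
  namespace: default
roleRef:
  kind: Role
  name: my-app-secret-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```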
    Michał Frąckiewicz
    @SystemZ
    How long can adding the smallest B2-7 node to a k8s cluster take?
    I think it's now been more than 20 mins in the "Installing" state
    Thomas Coudert
    @thcdrt
    It should take several minutes
    More than 10 min starts to be abnormal
    Michał Frąckiewicz
    @SystemZ
    I'll give an update in another 10 mins, just to be sure
    Thomas Coudert
    @thcdrt
    Can you send me your cluster ID in private, please?
    Michał Frąckiewicz
    @SystemZ
    ok
    Michał Frąckiewicz
    @SystemZ
    @thcdrt thx, it's working now :)
    Thomas Coudert
    @thcdrt
    You're welcome. Don't hesitate to ping us like you did if it seems to be taking a bit too long.
    Michał Frąckiewicz
    @SystemZ
    I guess monitoring for creation > X min would be great, then no PM needed from customers ;)
    Thomas Coudert
    @thcdrt
    Yep, that's planned. We only have it on clusters for now
    Guillaume
    @dracorpg
    @thcdrt regarding k8s node deployment delay, I believe the new Public Cloud manager UI is simply updated much more slowly than the actual node state
    I created a node yesterday; it was showing in kubectl get node (and actually starting to schedule pods) about 3-4 min later, but the UI was still showing "Installation in progress" like... 10 min later?
    Michał Frąckiewicz
    @SystemZ
    I've seen it's like 1 min lag max during installation
    Guillaume
    @dracorpg
    anyway the "OpenStack Nova instance ready" to "Kubernetes node ready" delay overhead seems minimal - that's really nice
    Frédéric Falquéro
    @fred_de_paris_gitlab
    I have something strange. I deploy using apply -f xxxx ... and I get: deployment.apps/xxxxx configured
    service/xxxxx unchanged
    yctn
    @yctn
    this means nothing changed in the service, therefore it did nothing
    Frédéric Falquéro
    @fred_de_paris_gitlab
    but if I look at my pods, nothing happens
    yet the deployment did change (the image version changed as expected)
    yctn
    @yctn
    then show the full file maybe
    pods are not services
    Frédéric Falquéro
    @fred_de_paris_gitlab
    I usually deploy this way. Yesterday it was OK; today it's KO.
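A few generic kubectl commands can show why a Deployment reported "configured" but its pods did not roll; "xxxxx" stands in for the real resource name, as in the messages above:

```sh
# Follow the rollout; this blocks until it completes or fails
kubectl rollout status deployment/xxxxx
# A new ReplicaSet should exist for the new image version
kubectl get replicasets
# The Events section often explains a stuck rollout
kubectl describe deployment xxxxx
```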
    Guillaume
    @dracorpg
    We also observed Deployment updates not being honored by the controller (i.e. ReplicaSets & Pods remaining at the previous version) in the last few days
    Thomas Coudert
    @thcdrt
    We have been seeing scheduling problems in Kubernetes 1.11 for a few weeks now. If you are on this version and seem to have scheduling trouble, please upgrade
    Guillaume
    @dracorpg
    this is quite annoying to say the least :) since it happened to a dev cluster, we can afford creating a new one (now that multiple k8s clusters per Public Cloud project are supported) and redeploying to it - however I wouldn't fancy this happening in my production environment :|
    Oh yes indeed, said cluster is still on 1.11 (and new one on 1.15 works fine)
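For reference, checking which version a cluster and its nodes actually run (generic kubectl; --short was the usual flag at the time):

```sh
# Client and server (control plane) versions
kubectl version --short
# The VERSION column shows each node's kubelet version
kubectl get nodes
```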
    Thomas Coudert
    @thcdrt
    As Kubernetes only supports the last 3 versions, I advise you (and all Kubernetes users) to upgrade to at least 1.13
    to benefit from bugfixes and security updates
    Guillaume
    @dracorpg
    I believe the previous manager UI used to require wiping the cluster for such k8s version updates? I see that it now allows rolling upgrades :)
    Thomas Coudert
    @thcdrt
    Yes, before you were forced to reset your cluster using the UI, but for a few weeks now you can upgrade properly :)
    Guillaume
    @dracorpg
    This is great, thanks! Maybe you guys should communicate more about this kind of stuff? Had I known about this combination of "scheduling issues detected on 1.11" + "1.15 available therefore anything <1.13 is now unsupported" + "painless upgrades are now possible" my user experience would have been different ;)
    arduinopepe
    @arduinopepe
    Good morning
    is there any way to use custom metrics for HorizontalPodAutoscaler? for example HTTP requests?
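For reference, a hedged sketch of what a custom-metrics HPA can look like (autoscaling/v2beta2, available from Kubernetes 1.12). It assumes a custom-metrics adapter such as prometheus-adapter is installed and exposes a per-pod http_requests_per_second metric; all names and values are illustrative:

```sh
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods                           # per-pod custom metric
    pods:
      metric:
        name: http_requests_per_second   # must be served by a metrics adapter
      target:
        type: AverageValue
        averageValue: "100"              # target ~100 req/s per pod
EOF
```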
    Thomas Coudert
    @thcdrt
    Yes @dracorpg, I agree with you, our communication was not enough. About unsupported versions and upgrades, I know; our PM is working on it, and you should soon see a communication about it.
    I think he posted a message on Gitter when the upgrade became available, but we should have sent a mail too.
    Guillaume
    @dracorpg
    Maybe you could have a communication channel through notifications in the manager UI? I didn't check whether such a thing was designed into the new manager
    Thomas Coudert
    @thcdrt
    @dracorpg yes it could be a way too
    Guillaume
    @dracorpg
    email is fine too though! as long as important info gets through :)
    We are in the process of moving our prod stack from self-managed Docker Swarm on CoreOS on Public Cloud to a managed Kubernetes cluster, so I'll be sure to monitor this gitter channel in the meantime :P
    Any news about the paid LoadBalancer transition BTW?
    Thomas Coudert
    @thcdrt
    About the LoadBalancer, work is still in progress on the OVH side, so it will be some time before it becomes paid.
    Guillaume
    @dracorpg
    Alright, thanks! I was wondering how much of the architecture is actually shared with the nextgen IP Load Balancer offering?
    Pierre Péronnet
    @Pierre_Peronnet_twitter
    Hello @dracorpg, are you talking about the LoadBalancer-typed Service in Kubernetes, or the OVH IP Load Balancer product?
    Guillaume
    @dracorpg
    Precisely about both :)
    Pierre Péronnet
    @Pierre_Peronnet_twitter
    About the second one, each is dedicated to a client: dedicated IP + dedicated infrastructure
    The first one is based on the OVH product, but in order to be free, we built a solution with shared components: the same IP Load Balancer infrastructure for multiple Kubernetes clusters.
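For reference, a minimal Service of type LoadBalancer; this is plain Kubernetes, with the external IP provisioned by the cloud integration described above (names and ports are illustrative):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer        # the cloud controller allocates an external IP
  selector:
    app: my-app             # pods backing this service
  ports:
  - port: 80                # external port
    targetPort: 8080        # container port
EOF
```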
    Guillaume
    @dracorpg
    Formulated differently: do/will k8s LoadBalancer services benefit from the same level of HA + DoS protection offered by the non-k8s IPLB product?
    Pierre Péronnet
    @Pierre_Peronnet_twitter
    But a dedicated IP + DNS per client
    @dracorpg these are based on the same product, so we have pretty much the same quality of service. We only have to rework a part of the solution to have the same SLA.