This channel is closed. Please join us on Discord : https://discord.gg/27yHfTpv9z
Michał Frąckiewicz
@SystemZ
Hmmm, I started a new 1.11 cluster to replicate my setup and it's "yes"
so my cluster is misconfigured somehow
Dennis van der Veeke
@MrDienns
is RBAC enabled on our clusters? In the Kubernetes docs it says "To enable RBAC, start the apiserver with --authorization-mode=RBAC", which I don't think we end users can do?
if I specify in my k8s Role that it only has access to secrets in a specific namespace, does that mean the Role will have access to all secrets in that namespace? Is there some way (if needed) of specifying that my Role only has access to one particular secret, while still living in a shared namespace?
Christian
@zeeZ
Yes. You can specify a resourceNames list in your Role's rules
Dennis van der Veeke
@MrDienns
thank you very much, i shall try that out later
Christian
@zeeZ
The official RBAC doc has it in an example. Note that it doesn't work for all verbs (or didn't last I checked), such as list
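A minimal sketch of such a Role, assuming a hypothetical secret named `app-secret` in a hypothetical namespace `my-namespace` (both names are placeholders):

```yaml
# Role granting read access to one specific secret only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: read-app-secret
rules:
- apiGroups: [""]               # "" = core API group, where Secrets live
  resources: ["secrets"]
  resourceNames: ["app-secret"] # restricts access to this one secret
  verbs: ["get"]                # resourceNames is not honored for "list"/"watch"
```

Bind it to the pod's ServiceAccount with a RoleBinding in the same namespace.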
Dennis van der Veeke
@MrDienns
as long as I can prevent a pod from reading any other secret than the one it needs, it's good :)
Michał Frąckiewicz
@SystemZ
How long can adding the smallest B2-7 node to a k8s cluster take? I think it's been in the "Installing" state for more than 20 min now
Thomas Coudert
@thcdrt
It should take several minutes
More than 10 min starts to be abnormal
Michał Frąckiewicz
@SystemZ
I'll give info in another 10mins, just to be sure
Thomas Coudert
@thcdrt
Can you send me your cluster id in private please ?
Michał Frąckiewicz
@SystemZ
ok
Michał Frąckiewicz
@SystemZ
@thcdrt thx, it's working now :)
Thomas Coudert
@thcdrt
You're welcome, don't hesitate to ping us like you did if it seems to be taking a bit too long.
Michał Frąckiewicz
@SystemZ
I guess monitoring for creation > X min would be great, then no PM needed from customers ;)
Thomas Coudert
@thcdrt
Yep, that's planned; we only have it on cluster creation for now
Guillaume
@dracorpg
@thcdrt regarding k8s node deployment delay, I believe the new Public Cloud manager UI is simply updated much more slowly than the actual node state
I created a node yesterday, it was showing in kubectl get node (and actually starting to schedule pods) about 3~4 min later but the UI was still showing "Installation in progress" like... 10 min later?
Michał Frąckiewicz
@SystemZ
I've seen it's like 1 min lag max during installation
Guillaume
@dracorpg
anyway the "OpenStack Nova instance ready" to "Kubernetes node ready" delay overhead seems minimal - that's really nice
Frédéric Falquéro
@fred_de_paris_gitlab
I have something strange. I deploy using kubectl apply -f xxxx ... and I get: deployment.apps/xxxxx configured service/xxxxx unchanged
yctn
@yctn
this means nothing changed in the service, therefore it did nothing
Frédéric Falquéro
@fred_de_paris_gitlab
but if I look at my pods, nothing happened
even though the deployment does have the last image version change, as expected
yctn
@yctn
then show the full output maybe
pods are not services
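When `kubectl apply` reports the Deployment as configured but the pods don't change, a few commands can show whether the rollout actually progressed (a sketch; `xxxxx` stands for the deployment name from the messages above, and the `app=xxxxx` label selector is an assumption about how the pods are labeled):

```shell
kubectl rollout status deployment/xxxxx        # waits until the rollout completes or fails
kubectl get replicasets -l app=xxxxx           # a new ReplicaSet should exist for the new image
kubectl describe deployment/xxxxx              # check the Progressing/Available conditions
kubectl get events --sort-by=.lastTimestamp    # scheduling errors surface here
```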
Frédéric Falquéro
@fred_de_paris_gitlab
I usually deploy this way. Yesterday it was OK, today it's KO
Guillaume
@dracorpg
We also observed Deployment updates not being honored by the controller (i.e. ReplicaSets & Pods remaining at the previous version) in the last few days
Thomas Coudert
@thcdrt
We've been seeing some scheduling problems in Kubernetes 1.11 for a few weeks now. If you are on this version and seem to have scheduling trouble, please upgrade
Guillaume
@dracorpg
this is quite annoying to say the least :) since it happened to a dev cluster, we can afford to create a new one (now that multiple k8s clusters per Public Cloud project are supported) and redeploy to it - however I wouldn't fancy this happening in my production environment :|
Oh yes indeed, said cluster is still on 1.11 (and new one on 1.15 works fine)
Thomas Coudert
@thcdrt
As Kubernetes only supports the last 3 versions, I advise you (and all Kubernetes users) to upgrade to at least 1.13
to benefit from bugfixes and security updates
Guillaume
@dracorpg
I believe the previous manager UI used to require wiping the cluster for such k8s version updates? I see that it now allows rolling upgrades :)
Thomas Coudert
@thcdrt
Yes, before, you were forced to reset your cluster using the UI, but for a few weeks now you can upgrade properly :)
Guillaume
@dracorpg
This is great, thanks! Maybe you guys should communicate more about this kind of stuff? Had I known about this combination of "scheduling issues detected on 1.11" + "1.15 available therefore anything <1.13 is now unsupported" + "painless upgrades are now possible" my user experience would have been different ;)
arduinopepe
@arduinopepe
Good Morning
is there any way to use custom metrics for the HorizontalPodAutoscaler? for example HTTP requests?
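This generally requires a custom-metrics adapter (e.g. prometheus-adapter) serving the `custom.metrics.k8s.io` API. A sketch of an HPA scaling on a per-pod request rate, where the `http_requests_per_second` metric name and the `my-app` Deployment are hypothetical placeholders:

```yaml
# HPA scaling on a custom per-pod metric instead of CPU.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second  # must be exposed via the custom metrics API
      target:
        type: AverageValue
        averageValue: "100"             # target average per pod
```

Without an adapter installed, the custom metrics API is unavailable and the HPA will report the metric as missing.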
Thomas Coudert
@thcdrt
Yes @dracorpg, I agree with you, our communication was not sufficient. About unsupported versions and the upgrade path, our PM is working on it, and you should soon see a communication about it.
I think he posted a message on Gitter when the upgrade became available, but we should have sent an email too.
Guillaume
@dracorpg
Maybe you could have a communication channel through notifications in the manager UI? I didn't check whether such a thing was designed into the new manager
Thomas Coudert
@thcdrt
@dracorpg yes it could be a way too
Guillaume
@dracorpg
email is fine too though! as long as important info gets through :)
We are in the process of moving our prod stack from self-managed Docker Swarm on CoreOS on Public Cloud to a managed Kubernetes cluster, so I'll be sure to monitor this gitter channel in the meantime :P
Any news about the paying LoadBalancer transition BTW?