    Christian
    @zeeZ
    kubectl describe clusterrole argocd-application-controller
    kubectl describe clusterrolebinding argocd-application-controller
    Those should work and match as a first sanity check, unless I mistyped on mobile
    Also check if that cluster-admin role really exists
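    For reference, a direct way to run that check (cluster-admin is the default superuser role shipped with Kubernetes RBAC, so it should exist on any cluster with RBAC enabled):

    kubectl get clusterrole cluster-admin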
    Michał Frąckiewicz
    @SystemZ
    systemz@pc:~$ kubectl -n argocd get serviceaccount argocd-application-controller
    NAME                            SECRETS   AGE
    argocd-application-controller   1         11h
    
    
    systemz@pc:~$ kubectl describe clusterrole argocd-application-controller
    Name:         argocd-application-controller
    Labels:       app.kubernetes.io/component=application-controller
                  app.kubernetes.io/name=argocd-application-controller
                  app.kubernetes.io/part-of=argocd
    Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                    {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"ap...
    PolicyRule:
      Resources  Non-Resource URLs  Resource Names  Verbs
      ---------  -----------------  --------------  -----
      *.*        []                 []              [*]
                 [*]                []              [*]
    
    
    
    systemz@pc:~$ kubectl describe clusterrolebinding argocd-application-controller
    Name:         argocd-application-controller
    Labels:       <none>
    Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                    {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"argocd-application-controlle...
    Role:
      Kind:  ClusterRole
      Name:  cluster-admin
    Subjects:
      Kind            Name                           Namespace
      ----            ----                           ---------
      ServiceAccount  argocd-application-controller  argocd
    yep, it exists
    systemz@pc:~$ kubectl get clusterrole
    NAME                                                                   AGE
    admin                                                                  264d
    argocd-application-controller                                          11h
    argocd-server                                                          11h
    calico                                                                 264d
    calico-node-3.6.0                                                      173d
    cloud-controller-manager                                               264d
    cluster-admin                                                          264d
    ...
    Christian
    @zeeZ
    kubectl auth can-i list clusterroles.rbac.authorization.k8s.io
    As the Argo account. Still doesn't make any sense to me why it wouldn't work
    Michał Frąckiewicz
    @SystemZ
    systemz@pc:~$ kubectl auth can-i list clusterroles.rbac.authorization.k8s.io
    Warning: resource 'clusterroles' is not namespace scoped in group 'rbac.authorization.k8s.io'
    yes
    
    argocd@argocd-application-controller-5d5866cf56-ct94d:~$ kubectl auth can-i list clusterroles.rbac.authorization.k8s.io
    no
    Christian
    @zeeZ
    Which means there is either an absolutely stupid facepalm thing I'm missing, or your cluster is weird
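    One way to dig further, assuming a reasonably recent kubectl: dump everything each account is allowed to do and compare the two. The --as value below is a placeholder to substitute with the real namespace and service account name.

    # from inside the pod, as the Argo service account:
    kubectl auth can-i --list
    # from the admin account, impersonating the service account:
    kubectl auth can-i --list --as=system:serviceaccount:<namespace>:<name>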
    Michał Frąckiewicz
    @SystemZ
    I'm curious if I can replicate that on a fresh OVH k8s cluster
    Christian
    @zeeZ
    Try kubectl auth can-i .... --as=system:serviceaccount:argocd... on your admin account, substitute accordingly
    Michał Frąckiewicz
    @SystemZ
    something like this?
    systemz@pc:~$ kubectl auth can-i list --as=system:serviceaccount:argocd-application-controller clusterroles.rbac.authorization.k8s.io
    Warning: resource 'clusterroles' is not namespace scoped in group 'rbac.authorization.k8s.io'
    no
    Christian
    @zeeZ
    Yeah. You're missing the namespace after serviceaccount: though
    sys:sa:ns:acc, i.e. system:serviceaccount:<namespace>:<account>
    Michał Frąckiewicz
    @SystemZ

    oh man, these are long strings in a cmd :)

    systemz@pc:~$ kubectl auth can-i list --as=system:serviceaccount:argocd:argocd-application-controller clusterroles.rbac.authorization.k8s.io
    Warning: resource 'clusterroles' is not namespace scoped in group 'rbac.authorization.k8s.io'
    no

    still "no", though

    Michał Frąckiewicz
    @SystemZ
    Hmmm, I started a new 1.11 cluster to replicate my setup and there it's "yes"
    so my cluster is misconfigured somehow
    Dennis van der Veeke
    @MrDienns
    is rbac enabled on our clusters? in the kubernetes docs, it says "To enable RBAC, start the apiserver with --authorization-mode=RBAC", which I don't think we end users can do?
    Thomas Coudert
    @thcdrt
    Hello @MrDienns , you can see all features enabled here: https://docs.ovh.com/gb/en/kubernetes/exposed-apis-software-versions-reserved-resources/
    Indeed, RBAC is enabled
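    For what it's worth, there is also a client-side way to confirm this: if the apiserver serves the RBAC API group, the RBAC authorizer is almost certainly enabled.

    kubectl api-versions | grep rbac.authorization.k8s.io
    # rbac.authorization.k8s.io/v1 in the output means the RBAC API group is served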
    Dennis van der Veeke
    @MrDienns
    fantastic, thank you
    if I specify in my k8s role that it only has access to the secrets in a specific namespace, does that mean the role will have access to all secrets in that namespace? is there some way (if needed) of specifying that my role only has access to one particular secret, while still living in a shared namespace?
    Christian
    @zeeZ
    Yes. You can specify a resourceNames list in your role's rules
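    A minimal sketch of such a role; the role name, namespace and secret name below are illustrative:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: app-secret-reader   # illustrative name
      namespace: shared-ns      # the shared namespace
    rules:
    - apiGroups: [""]
      resources: ["secrets"]
      resourceNames: ["my-app-secret"]   # only this one secret
      verbs: ["get"]   # list/watch cannot be restricted by resourceNames

    Bind it to the pod's service account with a RoleBinding in the same namespace.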
    Dennis van der Veeke
    @MrDienns
    thank you very much, i shall try that out later
    Christian
    @zeeZ
    The official RBAC doc has it in an example. Note that it doesn't work for all verbs (or didn't last I checked), such as list
    Dennis van der Veeke
    @MrDienns
    as long as I can prevent a pod from reading any other secret than the one it needs, it's good :)
    Michał Frąckiewicz
    @SystemZ
    How long can adding the smallest B2-7 node to a k8s cluster take?
    I think it's now more than 20 mins in "Installing" state
    Thomas Coudert
    @thcdrt
    It should take several minutes
    More than 10 min and it begins to not be normal
    Michał Frąckiewicz
    @SystemZ
    I'll give info in another 10mins, just to be sure
    Thomas Coudert
    @thcdrt
    Can you send me your cluster id in private, please?
    Michał Frąckiewicz
    @SystemZ
    ok
    Michał Frąckiewicz
    @SystemZ
    @thcdrt thx, it's working now :)
    Thomas Coudert
    @thcdrt
    You're welcome, don't hesitate to ping us as you did if it seems to take a bit too long.
    Michał Frąckiewicz
    @SystemZ
    I guess monitoring for creation > X min would be great, then no PM needed from customers ;)
    Thomas Coudert
    @thcdrt
    Yep, that's planned, we have it only on clusters for now
    Guillaume
    @dracorpg
    @thcdrt regarding k8s node deployment delay, I believe the new Public Cloud manager UI is simply updated much more slowly than the actual node state
    I created a node yesterday, it was showing in kubectl get node (and actually starting to schedule pods) about 3~4 min later but the UI was still showing "Installation in progress" like... 10 min later?
    Michał Frąckiewicz
    @SystemZ
    I've seen it's like 1 min lag max during installation
    Guillaume
    @dracorpg
    anyway the "OpenStack Nova instance ready" to "Kubernetes node ready" delay overhead seems minimal - that's really nice
    Frédéric Falquéro
    @fred_de_paris_gitlab
    I have something strange. I deploy using apply -f xxxx ... and I get: deployment.apps/xxxxx configured
    service/xxxxx unchanged
    yctn
    @yctn
    this means nothing changed in the service. therefore it did nothing
    Frédéric Falquéro
    @fred_de_paris_gitlab
    but if I look at my pods, nothing happened
    but the deployment has the new image version, as expected
    yctn
    @yctn
    then show the full config maybe
    pods are not services
    Frédéric Falquéro
    @fred_de_paris_gitlab
    I usually deploy this way. Yesterday it was OK, today it's KO
    Guillaume
    @dracorpg
    We also observed Deployment updates not being honored by the controller (i.e. ReplicaSets & Pods remaining at the previous version) in the last few days
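    A quick way to check whether the Deployment controller actually acted on an update (deployment name as in the messages above):

    kubectl rollout status deployment/xxxxx    # waits for the rollout to finish, or reports it stalled
    kubectl get rs                             # a new ReplicaSet should exist for the updated pod template
    kubectl describe deployment xxxxx          # events at the bottom often say why nothing is progressing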
    Thomas Coudert
    @thcdrt
    We have detected some scheduling problems in Kubernetes 1.11 for a few weeks now. If you are on this version and seem to have scheduling trouble, please upgrade
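    To check which version a cluster and its nodes are running before deciding to upgrade:

    kubectl version      # client and server versions
    kubectl get nodes    # the VERSION column shows each node's kubelet version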