    Christian
    @zeeZ
    move what's before the dot into resources and everything after it into apiGroups and hope for the best, or, if you want to do it "properly", you'd do one rules entry per API group
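    As an illustration of the split (a sketch only, assuming a rule was originally written with the group baked into the resource name, e.g. statefulsets.apps under resources):
    # hypothetical "before": resources: ["statefulsets.apps"]
    # "after": the part before the dot becomes the resource, the part after becomes the apiGroup
    - apiGroups:
        - apps
      resources:
        - statefulsets
      verbs:
        - list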
    Christian
    @zeeZ
    why those wildcards are not working I don't know, maybe someone more knowledgeable would
    Michał Frąckiewicz
    @SystemZ
    still no dice, listing it like this doesn't help either :/
    rules:
      - apiGroups:
          - ""
        resources:
          - namespaces
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - persistentvolumeclaims
        verbs:
          - get
      - apiGroups:
          - apps
        resources:
          - statefulsets
        verbs:
          - list
      - apiGroups:
          - argoproj.io
        resources:
          - appprojects
        verbs:
          - list
    I hoped that k8s would save me time, not something like this
    Christian
    @zeeZ
    If that's not working at all and the bindings are correct then I dunno, did you edit and forget to apply again?
    Michał Frąckiewicz
    @SystemZ
    I'm applying; dirs and filenames double-checked
    maybe there is some other project that would help me apply changes made in git?
    or should I just use standard CI (GitLab CI/Jenkins) and drop this idea entirely?

    one last thing I see in the docs
    https://argoproj.github.io/argo-cd/getting_started/

    On GKE, you will need to grant your account the ability to create new cluster roles:
    kubectl create clusterrolebinding YOURNAME-cluster-admin-binding --clusterrole=cluster-admin --user=YOUREMAIL@gmail.com

    maybe something similar is needed for OVH too?
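    A quick way to check whether the same prerequisite applies on OVH (a sketch; the check itself is provider-agnostic, and YOUR_OVH_USER below is a placeholder):
    # can your own account create cluster-scoped RBAC objects?
    kubectl auth can-i create clusterroles
    kubectl auth can-i create clusterrolebindings
    # if not, the OVH equivalent of the GKE snippet would be something like
    kubectl create clusterrolebinding YOURNAME-cluster-admin-binding \
      --clusterrole=cluster-admin --user=YOUR_OVH_USER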

    Amine
    @A-Hilaly
    If a GitOps operator is what you're looking for, you can take a look at "flux"
    Weaveworks has some great tools around GitOps as well
    Michał Frąckiewicz
    @SystemZ
    I'll look into it but I'm wondering if that same issue applies to this project too :/
    @zeeZ had some long yamls for that
    Christian
    @zeeZ
    To an extent, yes. When I created that, they had full-access RBAC rules as well
    It is weird that your rules don't work though
    Michał Frąckiewicz
    @SystemZ
    yea, it's strange, I'm already discussing this on argo Slack
    Pavel Tatarskiy
    @vintikzzz
    Hi, I've tried to upgrade to the next minor version, 1.14,
    and got stuck again
    Pavel Tatarskiy
    @vintikzzz
    the dashboard shows that everything is OK
    [screenshot attached]
    but I can't connect to it
    Pavels-iMac:frontend vintikzzzz$ kubectl top no
    Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
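    That ServiceUnavailable error usually points at the aggregated metrics API rather than the nodes themselves; a first check could be (assuming metrics are served through the v1beta1.metrics.k8s.io APIService, typically backed by metrics-server):
    # is the aggregated metrics API registered and Available?
    kubectl get apiservice v1beta1.metrics.k8s.io
    # is the backing deployment healthy? (namespace/label assumed, adjust to your setup)
    kubectl -n kube-system get pods -l k8s-app=metrics-server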
    Pavel Tatarskiy
    @vintikzzz
    is it time to reset the cluster?
    arduinopepe
    @arduinopepe
    Hi guys
    I'm trying to delete a namespace but it's still stuck in Terminating
    jenkins Terminating 5d22h
    is there any issue with k8s services?
    Thomas Coudert
    @thcdrt
    Hello @vintikzzz, checking with you in private
    Joël LE CORRE
    @jlecorre_gitlab
    Hello @arduinopepe, there is no outage in progress at the moment.
    Maybe there are some non-terminated finalizers on your namespace?
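    To check whether finalizers are what is holding the namespace (a sketch, using the namespace name from the message above):
    # show any finalizers still set on the namespace
    kubectl get namespace jenkins -o jsonpath='{.spec.finalizers}'
    # full object; status conditions may name resources that have not been cleaned up yet
    kubectl get namespace jenkins -o yaml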
    arduinopepe
    @arduinopepe
    ok
    it's resolved now
    thanks a lot
    Pavel Tatarskiy
    @vintikzzz
    @thcdrt all work again, thank you!
    Michał Frąckiewicz
    @SystemZ

    Is RBAC any different on an OVH k8s cluster than, let's say, GCP?
    One of the devs from the argo Slack told me this:

    I’m afraid I don’t know what could be the problem. You may need to check with OVH why this wouldn't work

    yctn
    @yctn
    @SystemZ no, RBAC is RBAC. But how RBAC itself is set up could be very different, yes
    Michał Frąckiewicz
    @SystemZ

    OK, I'll write more details.
    There is a YAML which, together with the other YAMLs, should give one pod god-like permissions on the cluster:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: argocd-application-controller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: argocd-application-controller
    subjects:
    - kind: ServiceAccount
      name: argocd-application-controller
      namespace: argocd

    yet, it doesn't have any:

    argocd@argocd-application-controller-5d5866cf56-8lbkd:~$ kubectl get clusterroles
    Error from server (Forbidden): clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:argocd:argocd-application-controller" cannot list clusterroles.rbac.authorization.k8s.io at the cluster scope

    Any idea how to debug it?
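    One way to see the API server's view of that service account without exec-ing into the pod (a sketch using kubectl impersonation):
    # ask directly whether the service account may list cluster roles
    kubectl auth can-i list clusterroles \
      --as=system:serviceaccount:argocd:argocd-application-controller
    # or dump everything it is allowed to do
    kubectl auth can-i --list \
      --as=system:serviceaccount:argocd:argocd-application-controller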

    Philippe Vienne
    @PhilippeVienne_gitlab
    @SystemZ You are specifying the argocd-application-controller ClusterRole (line 8); isn't cluster-admin what you want to reference?
    Michał Frąckiewicz
    @SystemZ
    If I recall correctly, I tried that too and it doesn't work either
    Let me try again...
    Philippe Vienne
    @PhilippeVienne_gitlab
    Edit your cluster role binding, then recreate your pod (otherwise the JWT in the service account secret is not refreshed)
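    Something like this (a sketch; the pod name is the one from the log above and changes on every restart):
    # point the existing binding at cluster-admin instead of the shipped role
    kubectl edit clusterrolebinding argocd-application-controller
    # then delete the pod so its controller recreates it with a freshly mounted token
    kubectl -n argocd delete pod argocd-application-controller-5d5866cf56-8lbkd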
    Michał Frąckiewicz
    @SystemZ
    oh, it needs a restart? OK, let's try it
    Christian
    @zeeZ
    Does it really? That'd be an important detail I also didn't know
    Michał Frąckiewicz
    @SystemZ
    I removed the pod, it recreated itself, still not enough permissions with cluster-admin
    argocd@argocd-application-controller-5d5866cf56-ct94d:~$ kubectl get clusterroles
    Error from server (Forbidden): clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:argocd:argocd-application-controller" cannot list clusterroles.rbac.authorization.k8s.io at the cluster scope
    Christian
    @zeeZ
    kubectl -n argocd get serviceaccount argocd-application-controller
    kubectl describe clusterrole argocd-application-controller
    kubectl describe clusterrolebinding argocd-application-controller
    Those should work and match as a first sanity check, unless I mistyped on mobile
    Also check if that cluster-admin role really exists
    Michał Frąckiewicz
    @SystemZ
    systemz@pc:~$ kubectl -n argocd get serviceaccount argocd-application-controller
    NAME                            SECRETS   AGE
    argocd-application-controller   1         11h
    
    
    systemz@pc:~$ kubectl describe clusterrole argocd-application-controller
    Name:         argocd-application-controller
    Labels:       app.kubernetes.io/component=application-controller
                  app.kubernetes.io/name=argocd-application-controller
                  app.kubernetes.io/part-of=argocd
    Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                    {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"ap...
    PolicyRule:
      Resources  Non-Resource URLs  Resource Names  Verbs
      ---------  -----------------  --------------  -----
      *.*        []                 []              [*]
                 [*]                []              [*]
    
    
    
    systemz@pc:~$ kubectl describe clusterrolebinding argocd-application-controller
    Name:         argocd-application-controller
    Labels:       <none>
    Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                    {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"argocd-application-controlle...
    Role:
      Kind:  ClusterRole
      Name:  cluster-admin
    Subjects:
      Kind            Name                           Namespace
      ----            ----                           ---------
      ServiceAccount  argocd-application-controller  argocd
    yep, it exists
    systemz@pc:~$ kubectl get clusterrole
    NAME                                                                   AGE
    admin                                                                  264d
    argocd-application-controller                                          11h
    argocd-server                                                          11h
    calico                                                                 264d
    calico-node-3.6.0                                                      173d
    cloud-controller-manager                                               264d
    cluster-admin                                                          264d
    ...