Ok, I'll write more details.
There is a YAML which, given the other YAMLs involved, should give one pod some godlike permissions on the cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-application-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argocd-application-controller
subjects:
- kind: ServiceAccount
  name: argocd-application-controller
  namespace: argocd
Yet the pod doesn't actually have any of those permissions:
argocd@argocd-application-controller-5d5866cf56-8lbkd:~$ kubectl get clusterroles
Error from server (Forbidden): clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:argocd:argocd-application-controller" cannot list clusterroles.rbac.authorization.k8s.io at the cluster scope
Any idea how to debug it?
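If it helps with debugging, the pod can ask the API server directly what its ServiceAccount is currently allowed to do:

kubectl auth can-i --list

With the binding above in place that should list essentially everything; right now I'd expect it only shows the default selfsubjectaccessreviews / selfsubjectrulesreviews entries.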
Even bound to cluster-admin, same result:
argocd@argocd-application-controller-5d5866cf56-ct94d:~$ kubectl get clusterroles
Error from server (Forbidden): clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:argocd:argocd-application-controller" cannot list clusterroles.rbac.authorization.k8s.io at the cluster scope
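One gotcha worth ruling out: roleRef on an existing ClusterRoleBinding is immutable, so re-applying the old binding with cluster-admin in roleRef gets rejected by the API server (easy to miss in a big kubectl apply). Deleting and recreating the binding avoids that; for example, using the names from above:

kubectl delete clusterrolebinding argocd-application-controller
kubectl create clusterrolebinding argocd-application-controller --clusterrole=cluster-admin --serviceaccount=argocd:argocd-application-controller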
kubectl describe clusterrole argocd-application-controller
kubectl describe clusterrolebinding argocd-application-controller
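Or dump the live object exactly as stored, in case something in roleRef/subjects is subtly off (apiGroup, subject namespace, or a stale duplicate binding under another name):

kubectl get clusterrolebinding argocd-application-controller -o yaml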
systemz@pc:~$ kubectl -n argocd get serviceaccount argocd-application-controller
NAME                            SECRETS   AGE
argocd-application-controller   1         11h
systemz@pc:~$ kubectl describe clusterrole argocd-application-controller
Name:         argocd-application-controller
Labels:       app.kubernetes.io/component=application-controller
              app.kubernetes.io/name=argocd-application-controller
              app.kubernetes.io/part-of=argocd
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"ap...
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *.*        []                 []              [*]
             [*]                []              [*]
systemz@pc:~$ kubectl describe clusterrolebinding argocd-application-controller
Name:         argocd-application-controller
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"argocd-application-controlle...
Role:
  Kind:  ClusterRole
  Name:  cluster-admin
Subjects:
  Kind            Name                           Namespace
  ----            ----                           ---------
  ServiceAccount  argocd-application-controller  argocd
systemz@pc:~$ kubectl get clusterrole
NAME                            AGE
admin                           264d
argocd-application-controller   11h
argocd-server                   11h
calico                          264d
calico-node-3.6.0               173d
cloud-controller-manager        264d
cluster-admin                   264d
...
systemz@pc:~$ kubectl auth can-i list clusterroles.rbac.authorization.k8s.io
Warning: resource 'clusterroles' is not namespace scoped in group 'rbac.authorization.k8s.io'
yes
argocd@argocd-application-controller-5d5866cf56-ct94d:~$ kubectl auth can-i list clusterroles.rbac.authorization.k8s.io
no
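A trick that sometimes helps at this point: rerun the failing command inside the pod with client verbosity turned up, e.g.

kubectl get clusterroles -v=8

which prints the request URL and the API server's raw 403 response body, so you can at least confirm which endpoint is being hit and which user the server thinks is asking.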
oh man, these are long strings for a cmd :)
kubectl auth can-i list --as=system:serviceaccount:argocd:argocd-application-controller clusterroles.rbac.authorization.k8s.io
Warning: resource 'clusterroles' is not namespace scoped in group 'rbac.authorization.k8s.io'
no
still "no", though
kubectl get node was already working (and the cluster was actually starting to schedule pods) about 3~4 min later, but the UI was still showing "Installation in progress" like... 10 min later?