Warning FailedMount 12m (x219 over 20h) kubelet Unable to attach or mount volumes: unmounted volumes=[sandbox], unattached volumes=[service-device-registry-conf sandbox registry kube-api-access-wt6hp]: timed out waiting for the condition
Warning FailedMount 7m34s (x110 over 20h) kubelet Unable to attach or mount volumes: unmounted volumes=[sandbox], unattached volumes=[sandbox registry kube-api-access-wt6hp service-device-registry-conf]: timed out waiting for the condition
Warning FailedMount 64s (x611 over 20h) kubelet MountVolume.SetUp failed for volume "sandbox" : secret "sandbox-tls" not found
72s Warning FailedMount pod/eclipse-hono-adapter-mqtt-vertx-784986ff44-r7prl MountVolume.SetUp failed for volume "adapter-mqtt-vertx-conf" : failed to sync secret cache: timed out waiting for the condition
72s Warning FailedMount pod/eclipse-hono-service-auth-7fd5db64bd-d8mm2 MountVolume.SetUp failed for volume "service-auth-conf" : failed to sync secret cache: timed out waiting for the condition
72s Warning FailedMount pod/eclipse-hono-dispatch-router-744f4556c7-jxrxx MountVolume.SetUp failed for volume "dispatch-router-conf" : failed to sync secret cache: timed out waiting for the condition
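For the secret "sandbox-tls" not found event above, the first check is whether that secret actually exists in the pod's namespace. The kubelet event names the missing object directly, so it can be pulled out of the message (a sketch; the hono namespace is assumed from the pv/pvc output further down):

```shell
# Extract the missing secret's name from the kubelet event message,
# then look for it in the namespace (hono is an assumption here).
msg='MountVolume.SetUp failed for volume "sandbox" : secret "sandbox-tls" not found'
secret=$(printf '%s\n' "$msg" | sed -n 's/.*secret "\([^"]*\)" not found.*/\1/p')
echo "missing secret: $secret"
# kubectl get secret "$secret" -n hono
```

The "failed to sync secret cache" events, by contrast, are usually transient at pod startup; the "not found" event is the one that persists.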
Hello, here are the pv/pvc outputs:
pvc:
data-eclipse-hono-zookeeper-0 Bound pvc-b069e0aa-f358-4bea-9cd5-1063ca235cba 200Mi RWO local-path 21h
eclipse-hono-service-device-registry Bound pvc-daaf6896-051d-4b82-938a-88950f51bcbe 1Mi RWO local-path 21h
data-eclipse-hono-kafka-0 Bound pvc-67109af6-8fc7-4f9c-baad-2212677b6d50 200Mi RWO local-path 21h
pv:
root@iot-honoditto:/home/iotadm# k3s kubectl get pv -n hono
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-b069e0aa-f358-4bea-9cd5-1063ca235cba 200Mi RWO Delete Bound hono/data-eclipse-hono-zookeeper-0 local-path 21h
pvc-daaf6896-051d-4b82-938a-88950f51bcbe 1Mi RWO Delete Bound hono/eclipse-hono-service-device-registry local-path 21h
pvc-67109af6-8fc7-4f9c-baad-2212677b6d50 200Mi RWO Delete Bound hono/data-eclipse-hono-kafka-0 local-path 21h
Could you also share the output of kubectl get namespaces, kubectl get Certificate -A, and kubectl get pods -A?
$ helm install --set hono.useLoadBalancer=true --set ditto.nginx.service.type=LoadBalancer c2e eclipse-iot/cloud2edge --debug --wait
install.go:178: [debug] Original chart version: ""
install.go:195: [debug] CHART PATH: /home/jopri/.cache/helm/repository/cloud2edge-0.2.3.tgz
client.go:128: [debug] creating 67 resource(s)
wait.go:48: [debug] beginning wait for 67 resources with timeout of 5m0s
ready.go:258: [debug] Service does not have load balancer ingress IP address: default/c2e-ditto-nginx
ready.go:277: [debug] Deployment is not ready: default/c2e-ditto-concierge. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/c2e-ditto-concierge. 0 out of 1 expected pods are ready
... it repeats that line many times ...
ready.go:277: [debug] Deployment is not ready: default/c2e-ditto-concierge. 0 out of 1 expected pods are ready
Error: INSTALLATION FAILED: timed out waiting for the condition
helm.go:84: [debug] timed out waiting for the condition
INSTALLATION FAILED
main.newInstallCmd.func2
helm.sh/helm/v3/cmd/helm/install.go:127
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.3.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.3.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.3.0/command.go:902
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:255
runtime.goexit
runtime/asm_amd64.s:1581
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
c2e-adapter-amqp-vertx-844569b6cf-wps8n 1/1 Running 0 31m
c2e-adapter-http-vertx-798cc9c868-lzm6r 1/1 Running 0 31m
c2e-adapter-mqtt-vertx-56d466b64f-5x8rz 1/1 Running 0 31m
c2e-artemis-7dc8876c77-crxcg 1/1 Running 0 31m
c2e-dispatch-router-5c47d54984-5vx9h 1/1 Running 0 31m
c2e-ditto-concierge-5f5b64cf8c-vp6k2 0/1 CrashLoopBackOff 10 (39s ago) 31m
c2e-ditto-connectivity-b8bb5b9b8-fzgj5 0/1 CrashLoopBackOff 10 (43s ago) 31m
c2e-ditto-gateway-55568c749d-hfg6n 0/1 CrashLoopBackOff 9 (4m45s ago) 31m
c2e-ditto-nginx-57c75dbd7b-b9g44 0/1 Init:0/1 0 31m
c2e-ditto-policies-5fd487989f-lwznc 0/1 CrashLoopBackOff 9 (5m6s ago) 31m
c2e-ditto-swaggerui-7896f78cf9-kzmmn 0/1 CreateContainerError 0 31m
c2e-ditto-things-6d46bcc987-665gx 0/1 CrashLoopBackOff 9 (4m54s ago) 31m
c2e-ditto-thingssearch-77f8448f45-8fwvc 0/1 CrashLoopBackOff 10 (60s ago) 31m
c2e-service-auth-c4fbfb767-jfr26 1/1 Running 0 31m
c2e-service-command-router-f6b8c84df-wstd8 1/1 Running 0 31m
c2e-service-device-registry-7858454b58-mrjgs 1/1 Running 0 31m
ditto-mongodb-7f9fb64588-sxvr8 1/1 Running 0 31m
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50m default-scheduler Successfully assigned default/c2e-ditto-policies-5fd487989f-lwznc to minikube
Normal Pulling 50m kubelet Pulling image "docker.io/eclipse/ditto-policies:2.3.2"
Normal Pulled 45m kubelet Successfully pulled image "docker.io/eclipse/ditto-policies:2.3.2" in 5m8.951331575s
Warning BackOff 45m (x2 over 45m) kubelet Back-off restarting failed container
Normal Created 44m (x3 over 45m) kubelet Created container ditto-policies
Normal Started 44m (x3 over 45m) kubelet Started container ditto-policies
Normal Pulled 44m (x2 over 45m) kubelet Container image "docker.io/eclipse/ditto-policies:2.3.2" already present on machine
Warning DNSConfigForming 17s (x257 over 50m) kubelet Search Line limits were exceeded, some search paths have been omitted, the applied search line is: default.svc.cluster.local svc.cluster.local cluster.local nantes.intranet dhcp.nantes.intranet intranet
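The DNSConfigForming warning is likely noise rather than the cause of the crash loops: it means the node's resolv.conf carries more search domains than the kubelet will pass into a pod (historically 6), so the extras are dropped. Counting the domains in the applied search line from the event shows it is already at that cap:

```shell
# The search line the kubelet actually applied, copied from the event above.
search='default.svc.cluster.local svc.cluster.local cluster.local nantes.intranet dhcp.nantes.intranet intranet'
echo "$search" | wc -w    # 6 domains, i.e. the historical kubelet limit
# To see the node's full (untruncated) list:
#   grep '^search' /etc/resolv.conf
```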
$ kubectl logs c2e-ditto-policies-5fd487989f-lwznc
11:55:06,247 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
11:55:06,248 |-INFO in ch.qos.logback.core.joran.action.StatusListenerAction - Added status listener of type [ch.qos.logback.core.status.OnConsoleStatusListener]
11:55:06,276 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
11:55:06,282 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
11:55:06,286 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
11:55:06,306 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
11:55:06,306 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDERR]
11:55:06,307 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
11:55:06,425 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
11:55:06,448 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [file]
11:55:06,452 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy@-268823809 - setting totalSizeCap to 1 GB
11:55:06,454 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy@-268823809 - Will use gz compression
11:55:06,455 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy@-268823809 - Will use the pattern /var/log/ditto/policies.log.%d{yyyy-MM-dd} for the active file
11:55:06,457 |-INFO in c.q.l.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy - The date pattern is 'yyyy-MM-dd' from file name pattern '/var/log/ditto/policies.log.%d{yyyy-MM-dd}.gz'.
11:55:06,457 |-INFO in c.q.l.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy - Roll-over at midnight.
11:55:06,462 |-INFO in c.q.l.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy - Setting initial period to Mon Apr 11 11:55:06 CEST 2022
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
c2e-adapter-amqp-vertx LoadBalancer 10.106.154.222 10.106.154.222 5672:32672/TCP,5671:32671/TCP h18m
c2e-adapter-http-vertx LoadBalancer 10.104.0.105 10.104.0.105 8080:30080/TCP,8443:30443/TCP h18m
c2e-adapter-mqtt-vertx LoadBalancer 10.103.24.73 10.103.24.73 1883:31883/TCP,8883:30883/TCP h18m
c2e-artemis ClusterIP 10.102.9.17 <none> 5671/TCP h18m
c2e-dispatch-router ClusterIP 10.101.78.95 <none> 5673/TCP h18m
c2e-dispatch-router-ext LoadBalancer 10.101.111.82 10.101.111.82 15671:30671/TCP,15672:30672/TCP h18m
c2e-ditto-gateway ClusterIP 10.111.216.195 <none> 8080/TCP h18m
c2e-ditto-nginx LoadBalancer 10.102.64.159 10.102.64.159 8080:31040/TCP h18m
c2e-ditto-swaggerui ClusterIP 10.106.183.17 <none> 8080/TCP h18m
c2e-service-auth ClusterIP 10.106.54.125 <none> 5671/TCP h18m
c2e-service-command-router ClusterIP 10.97.147.126 <none> 5671/TCP h18m
c2e-service-device-registry ClusterIP 10.99.252.228 <none> 5671/TCP,8080/TCP,8443/TCP h18m
c2e-service-device-registry-ext LoadBalancer 10.106.5.69 10.106.5.69 28080:31080/TCP,28443:31443/TCP h18m
ditto-mongodb ClusterIP 10.102.138.215 <none> 27017/TCP h18m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP h20m
$ helm install --dependency-update cloud2edge $PATHFILE --debug --wait --timeout 40m --set hono.useLoadBalancer=true --set ditto.nginx.service.type=LoadBalancer
// the install proceeds up to this point
ready.go:277: [debug] Deployment is not ready: default/cloud2edge-ditto-swaggerui. 0 out of 1 expected pods are ready
// then it blocks here again, repeating:
ready.go:277: [debug] Deployment is not ready: default/cloud2edge-ditto-swaggerui. 0 out of 1 expected pods are ready
Error: INSTALLATION FAILED: timed out waiting for the condition
helm.go:84: [debug] timed out waiting for the condition
INSTALLATION FAILED
main.newInstallCmd.func2
helm.sh/helm/v3/cmd/helm/install.go:127
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.3.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.3.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.3.0/command.go:902
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:255
runtime.goexit
runtime/asm_amd64.s:1581
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cloud2edge-adapter-amqp-vertx-799cc7c8f5-p8pzr 1/1 Running 0 44m
cloud2edge-adapter-http-vertx-56b688f974-klcmh 1/1 Running 0 44m
cloud2edge-adapter-mqtt-vertx-5c556bd45-tw8pr 1/1 Running 0 44m
cloud2edge-artemis-7b86d8f758-qg7nw 1/1 Running 0 44m
cloud2edge-dispatch-router-586bcc64c9-5tl8h 1/1 Running 0 44m
cloud2edge-ditto-concierge-7bdfcf4c6f-pgwr7 1/1 Running 0 44m
cloud2edge-ditto-connectivity-69d7fc9599-r242t 1/1 Running 0 44m
cloud2edge-ditto-gateway-5955667cbb-r5pwp 1/1 Running 0 44m
cloud2edge-ditto-nginx-5fb7f9bc57-fm985 1/1 Running 0 44m
cloud2edge-ditto-policies-55d67966ff-jhpz2 1/1 Running 0 44m
cloud2edge-ditto-swaggerui-78b8d64b9-jfc9f 0/1 CreateContainerError 0 44m
cloud2edge-ditto-things-6bd67d9d55-4bpb2 1/1 Running 0 44m
cloud2edge-ditto-thingssearch-98db7bcc9-bbcp9 1/1 Running 0 44m
cloud2edge-service-auth-7c7f7d6d55-b5tvp 1/1 Running 0 44m
cloud2edge-service-command-router-7f4ccc848f-n82w9 1/1 Running 0 44m
cloud2edge-service-device-registry-5c9846dfc9-8kmjz 1/1 Running 0 44m
ditto-mongodb-7db65ff8c-59k9z 1/1 Running 0 44m
$ kubectl describe pod cloud2edge-ditto-swaggerui-78b8d64b9-jfc9f
Name: cloud2edge-ditto-swaggerui-78b8d64b9-jfc9f
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Wed, 27 Apr 2022 10:19:46 +0200
Labels: app.kubernetes.io/instance=cloud2edge
app.kubernetes.io/name=ditto-swaggerui
pod-template-hash=78b8d64b9
Annotations: <none>
Status: Pending
IP: 172.17.0.18
IPs:
IP: 172.17.0.18
Controlled By: ReplicaSet/cloud2edge-ditto-swaggerui-78b8d64b9
Init Containers:
ditto-init:
Container ID: docker://2ae2b39cd99db2d56b85d1858245413b8433d1785923eb2800508b9698cf34bf
Image: docker.io/swaggerapi/swagger-ui:v4.6.1
Image ID: docker-pullable://swaggerapi/swagger-ui@sha256:a4e032a1dc6f4522ca2bfc51869867257ae7396082bdcc7548348523e2a210fb
Port: <none>
Host Port: <none>
Command:
sh
-ec
mkdir -p /usr/share/nginx/html/openapi
curl -sL https://raw.githubusercontent.com/eclipse/ditto/2.4.0/documentation/src/main/resources/openapi/ditto-api-2.yml -o /usr/share/nginx/html/openapi/ditto-openapi-2.yaml
cp -rv /etc/nginx/. /init-config/
cp -rv /usr/share/nginx/html/. /init-content/
mkdir /var/lib/nginx/logs
mkdir /var/lib/nginx/tmp
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 27 Apr 2022 10:21:29 +0200
Finished: Wed, 27 Apr 2022 10:21:30 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/init-config from swagger-ui-config (rw)
/init-content from swagger-ui-content (rw)
/var/lib/nginx from swagger-ui-work (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5n6b8 (ro)
Containers:
ditto-swaggerui:
Container ID:
Image: docker.io/swaggerapi/swagger-ui:v4.6.1
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CreateContainerError
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 32Mi
Requests:
cpu: 50m
memory: 16Mi
Environment:
QUERY_CONFIG_ENABLED: true
Mounts:
/etc/nginx from swagger-ui-config (rw)
/run/nginx from swagger-ui-run (rw)
/usr/share/nginx/html from swagger-ui-content (rw)
/usr/share/nginx/html/ from swagger-ui-api (rw,path="openapi")
/var/cache/nginx from swagger-ui-cache (rw)
/var/lib/nginx from swagger-ui-work (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5n6b8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
swagger-ui-api:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: cloud2edge-ditto-swaggerui
Optional: false
swagger-ui-cache:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
swagger-ui-work:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
swagger-ui-config:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
swagger-ui-content:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
swagger-ui-run:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-5n6b8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 45m default-scheduler Successfully assigned default/cloud2edge-ditto-swaggerui-78b8d64b9-jfc9f to minikube
Warning FailedMount 45m (x2 over 45m) kubelet MountVolume.SetUp failed for volume "swagger-ui-api" : failed to sync configmap cache: timed out waiting for the condition
Normal Pulling 44m kubelet Pulling image "docker.io/swaggerapi/swagger-ui:v4.6.1"
Normal Pulled 43m kubelet Successfully pulled image "docker.io/swaggerapi/swagger-ui:v4.6.1" in 1m38.091229094s
Normal Created 43m kubelet Created container ditto-init
Normal Started 43m kubelet Started container ditto-init
Warning Failed 41m (x9 over 43m) kubelet Error: Error response from daemon: Duplicate mount point: /usr/share/nginx/html
Normal Pulled 4m59s (x180 over 43m) kubelet Container image "docker.io/swaggerapi/swagger-ui:v4.6.1" already present on machine
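The Duplicate mount point error in the events looks like the actual blocker: in the Mounts list above, swagger-ui-content is mounted at /usr/share/nginx/html while swagger-ui-api is mounted at /usr/share/nginx/html/ (trailing slash), and the Docker daemon appears to normalize both to the same mount point and reject the container. Stripping trailing slashes from the container's mountPaths and looking for repeats makes the collision visible (a sketch with the paths inlined from the describe output; on a live cluster the list could be read with kubectl's -o jsonpath output instead):

```shell
# mountPaths of the ditto-swaggerui container, as listed in the describe
# output above (kube-api-access omitted for brevity).
paths='/etc/nginx
/run/nginx
/usr/share/nginx/html
/usr/share/nginx/html/
/var/cache/nginx
/var/lib/nginx'
# Strip trailing slashes so "html" and "html/" compare equal, then print
# any path that occurs more than once.
printf '%s\n' "$paths" | sed 's:/*$::' | sort | uniq -d
# -> /usr/share/nginx/html
```

If that is the cause, the fix would belong in the chart's volumeMounts (giving the swagger-ui-api mount a distinct, slash-normalized path) rather than in the cluster itself.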
So this is the troublesome pod, but I can't see where it's going wrong.