    Susana Dias
    @Sudms_gitlab
    hello, is there any problem with c2e? I'm trying to install it and it gets stuck on mongo
    Kai Hudalla
    @sophokles73

    Hi @Sudms_gitlab, not that I am aware of. It would be helpful if you could provide some more information, like the command line you are using to install the chart and any relevant log output. What do you mean by

    it gets stuck on mongo
    ?
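    For example, a minimal set of details might look like this (the mongodb pod name is a placeholder, take the real one from the pod listing):
    # the exact helm command used, e.g.
    $ helm install c2e eclipse-iot/cloud2edge
    # plus the pod states and the logs of the mongodb pod
    $ kubectl get pods
    $ kubectl logs <the-mongodb-pod>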

    Sascha Seegebarth
    @SSeegebarth_twitter
    Hi, I couldn't find any hint on whether a specific Linux distribution is recommended for cloud2edge. We currently installed Debian on our server VM, and my colleague is facing some problems that he did not face on his own VM running Ubuntu. Is there a recommended Linux distribution? Thanks
    Kai Hudalla
    @sophokles73
    No, there aren't any specific requirements regarding the distro. Maybe you can describe the particular issue(s) that you were facing?
    Steven-Posterick
    @Steven-Posterick
    Hello, we've run into installation problems on a Debian distribution: trying to install hono or cloud2edge each results in mounting errors.
    When following https://github.com/eclipse/hono/tree/master/deploy/src/main/sandbox
    MountVolume.SetUp failed for volume "sandbox" : secret "sandbox-tls" not found on most of the pods.
    Steven-Posterick
    @Steven-Posterick
    Here is also the output from describing a pod that is stuck in the ContainerCreating state.
    Warning  FailedMount  12m (x219 over 20h)    kubelet  Unable to attach or mount volumes: unmounted volumes=[sandbox], unattached volumes=[service-device-registry-conf sandbox registry kube-api-access-wt6hp]: timed out waiting for the condition
      Warning  FailedMount  7m34s (x110 over 20h)  kubelet  Unable to attach or mount volumes: unmounted volumes=[sandbox], unattached volumes=[sandbox registry kube-api-access-wt6hp service-device-registry-conf]: timed out waiting for the condition
      Warning  FailedMount  64s (x611 over 20h)    kubelet  MountVolume.SetUp failed for volume "sandbox" : secret "sandbox-tls" not found
    Steven-Posterick
    @Steven-Posterick
    Other mount volume errors from when we first started the pods.
    72s         Warning   FailedMount              pod/eclipse-hono-adapter-mqtt-vertx-784986ff44-r7prl            MountVolume.SetUp failed for volume "adapter-mqtt-vertx-conf" : failed to sync secret cache: timed out waiting for the condition
    72s         Warning   FailedMount              pod/eclipse-hono-service-auth-7fd5db64bd-d8mm2                  MountVolume.SetUp failed for volume "service-auth-conf" : failed to sync secret cache: timed out waiting for the condition
    72s         Warning   FailedMount              pod/eclipse-hono-dispatch-router-744f4556c7-jxrxx               MountVolume.SetUp failed for volume "dispatch-router-conf" : failed to sync secret cache: timed out waiting for the condition
    Sascha Seegebarth
    @SSeegebarth_twitter
    @Sudms_gitlab do you have an idea regarding @Steven-Posterick's problem? Could it be related to PV/PVC?
    Steven-Posterick
    @Steven-Posterick

    Hello, here are the pv/pvc outputs
    pvc:

    data-eclipse-hono-zookeeper-0          Bound    pvc-b069e0aa-f358-4bea-9cd5-1063ca235cba   200Mi      RWO            local-path     21h
    eclipse-hono-service-device-registry   Bound    pvc-daaf6896-051d-4b82-938a-88950f51bcbe   1Mi        RWO            local-path     21h
    data-eclipse-hono-kafka-0              Bound    pvc-67109af6-8fc7-4f9c-baad-2212677b6d50   200Mi      RWO            local-path     21h

    pv:

    root@iot-honoditto:/home/iotadm# k3s kubectl get pv -n hono
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
    pvc-b069e0aa-f358-4bea-9cd5-1063ca235cba   200Mi      RWO            Delete           Bound    hono/data-eclipse-hono-zookeeper-0          local-path              21h
    pvc-daaf6896-051d-4b82-938a-88950f51bcbe   1Mi        RWO            Delete           Bound    hono/eclipse-hono-service-device-registry   local-path              21h
    pvc-67109af6-8fc7-4f9c-baad-2212677b6d50   200Mi      RWO            Delete           Bound    hono/data-eclipse-hono-kafka-0              local-path              21h
    Kai Hudalla
    @sophokles73
    @Steven-Posterick Can you please describe what you are trying to achieve? In particular, the instructions from https://github.com/eclipse/hono/tree/master/deploy/src/main/sandbox are supposed to be used to set up the Hono Sandbox. If you want to use the same setup for your own environment then you will need to adapt the settings for the CertManager accordingly. As a start, can you post the output of kubectl get namespaces, kubectl get Certificate -A and kubectl get pods -A?
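    For reference, those checks plus one for the secret named in the error messages above (the sandbox-tls name and the hono namespace come from Steven's output, adjust if your setup differs):
    $ kubectl get namespaces
    $ kubectl get Certificate -A
    $ kubectl get pods -A
    # does the secret the pods are waiting for exist at all?
    $ kubectl get secret sandbox-tls -n hono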
    Johan
    @JohanBlue
    Hello, I'm trying to install cloud2edge locally and I'm having some trouble with it.
    I'm trying to install it into the "default" namespace, but even with that the installation blocks:
    $ helm install --set hono.useLoadBalancer=true --set ditto.nginx.service.type=LoadBalancer c2e eclipse-iot/cloud2edge --debug --wait
    install.go:178: [debug] Original chart version: ""
    install.go:195: [debug] CHART PATH: /home/jopri/.cache/helm/repository/cloud2edge-0.2.3.tgz
    
    client.go:128: [debug] creating 67 resource(s)
    wait.go:48: [debug] beginning wait for 67 resources with timeout of 5m0s
    ready.go:258: [debug] Service does not have load balancer ingress IP address: default/c2e-ditto-nginx
    ready.go:277: [debug] Deployment is not ready: default/c2e-ditto-concierge. 0 out of 1 expected pods are ready
    ready.go:277: [debug] Deployment is not ready: default/c2e-ditto-concierge. 0 out of 1 expected pods are ready
    ... it repeats that line a lot ...
    ready.go:277: [debug] Deployment is not ready: default/c2e-ditto-concierge. 0 out of 1 expected pods are ready
    Error: INSTALLATION FAILED: timed out waiting for the condition
    helm.go:84: [debug] timed out waiting for the condition
    INSTALLATION FAILED
    main.newInstallCmd.func2
        helm.sh/helm/v3/cmd/helm/install.go:127
    github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.3.0/command.go:856
    github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.3.0/command.go:974
    github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.3.0/command.go:902
    main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
    runtime.main
        runtime/proc.go:255
    runtime.goexit
        runtime/asm_amd64.s:1581
    
    $ kubectl get pods
    NAME                                           READY   STATUS                 RESTARTS        AGE
    c2e-adapter-amqp-vertx-844569b6cf-wps8n        1/1     Running                0               31m
    c2e-adapter-http-vertx-798cc9c868-lzm6r        1/1     Running                0               31m
    c2e-adapter-mqtt-vertx-56d466b64f-5x8rz        1/1     Running                0               31m
    c2e-artemis-7dc8876c77-crxcg                   1/1     Running                0               31m
    c2e-dispatch-router-5c47d54984-5vx9h           1/1     Running                0               31m
    c2e-ditto-concierge-5f5b64cf8c-vp6k2           0/1     CrashLoopBackOff       10 (39s ago)    31m
    c2e-ditto-connectivity-b8bb5b9b8-fzgj5         0/1     CrashLoopBackOff       10 (43s ago)    31m
    c2e-ditto-gateway-55568c749d-hfg6n             0/1     CrashLoopBackOff       9 (4m45s ago)   31m
    c2e-ditto-nginx-57c75dbd7b-b9g44               0/1     Init:0/1               0               31m
    c2e-ditto-policies-5fd487989f-lwznc            0/1     CrashLoopBackOff       9 (5m6s ago)    31m
    c2e-ditto-swaggerui-7896f78cf9-kzmmn           0/1     CreateContainerError   0               31m
    c2e-ditto-things-6d46bcc987-665gx              0/1     CrashLoopBackOff       9 (4m54s ago)   31m
    c2e-ditto-thingssearch-77f8448f45-8fwvc        0/1     CrashLoopBackOff       10 (60s ago)    31m
    c2e-service-auth-c4fbfb767-jfr26               1/1     Running                0               31m
    c2e-service-command-router-f6b8c84df-wstd8     1/1     Running                0               31m
    c2e-service-device-registry-7858454b58-mrjgs   1/1     Running                0               31m
    ditto-mongodb-7f9fb64588-sxvr8                 1/1     Running                0               31m
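    A hedged side note: on minikube, LoadBalancer services do not get an ingress IP unless a tunnel is running, which is what the "Service does not have load balancer ingress IP address" debug line above points at; in a separate terminal:
    # keeps running and assigns external IPs to LoadBalancer services
    $ minikube tunnel
    The CrashLoopBackOff pods are a separate problem, though.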
    Thomas Jaeckle
    @thjaeckle
    @JohanBlue what do the pod logs of the ditto containers say? e.g. kubectl describe pod c2e-ditto-policies-5fd487989f-lwznc
    Johan
    @JohanBlue
    these are the events written by this command:
    Events:
      Type     Reason            Age                  From               Message
      ----     ------            ----                 ----               -------
      Normal   Scheduled         50m                  default-scheduler  Successfully assigned default/c2e-ditto-policies-5fd487989f-lwznc to minikube
      Normal   Pulling           50m                  kubelet            Pulling image "docker.io/eclipse/ditto-policies:2.3.2"
      Normal   Pulled            45m                  kubelet            Successfully pulled image "docker.io/eclipse/ditto-policies:2.3.2" in 5m8.951331575s
      Warning  BackOff           45m (x2 over 45m)    kubelet            Back-off restarting failed container
      Normal   Created           44m (x3 over 45m)    kubelet            Created container ditto-policies
      Normal   Started           44m (x3 over 45m)    kubelet            Started container ditto-policies
      Normal   Pulled            44m (x2 over 45m)    kubelet            Container image "docker.io/eclipse/ditto-policies:2.3.2" already present on machine
      Warning  DNSConfigForming  17s (x257 over 50m)  kubelet            Search Line limits were exceeded, some search paths have been omitted, the applied search line is: default.svc.cluster.local svc.cluster.local cluster.local nantes.intranet dhcp.nantes.intranet intranet
    does my DHCP setup interfere with the configuration?
    Thomas Jaeckle
    @thjaeckle
    maybe .. I've never seen that last line in any other deployment ..
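    If the DNS warning does turn out to matter, a minimal sketch of a workaround (standard pod spec fields; the nameserver IP and search domains are placeholders for your cluster's values) is to stop the pods from inheriting the node's long search list:
    # in the deployment's pod template spec
    dnsPolicy: "None"
    dnsConfig:
      nameservers:
        - 10.96.0.10              # your cluster DNS service IP
      searches:
        - default.svc.cluster.local
        - svc.cluster.local
        - cluster.local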
    Johan
    @JohanBlue
    or should I look for a way to remove the container image "docker.io/eclipse/ditto-policies:2.3.2"?
    Thomas Jaeckle
    @thjaeckle
    no, the chart must download this image and it successfully did so
    Johan
    @JohanBlue
    that might be weird but... is there a way to install cloud2edge without being connected to the internet?
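    A hedged partial workaround (not an official offline mode, and note that the swagger-ui init container also downloads an OpenAPI file from GitHub at startup, see the pod description later in this thread) is to pre-load the chart's images into minikube while still online:
    # repeat for each image the chart pulls
    $ minikube image load docker.io/eclipse/ditto-policies:2.3.2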
    Thomas Jaeckle
    @thjaeckle
    could you check the logs of such a failed container as well? kubectl logs c2e-ditto-policies-5fd487989f-lwznc
    Johan
    @JohanBlue
    $ kubectl logs c2e-ditto-policies-5fd487989f-lwznc
    11:55:06,247 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
    11:55:06,248 |-INFO in ch.qos.logback.core.joran.action.StatusListenerAction - Added status listener of type [ch.qos.logback.core.status.OnConsoleStatusListener]
    11:55:06,276 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
    11:55:06,282 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
    11:55:06,286 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
    11:55:06,306 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
    11:55:06,306 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDERR]
    11:55:06,307 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
    11:55:06,425 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
    11:55:06,448 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [file]
    11:55:06,452 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy@-268823809 - setting totalSizeCap to 1 GB
    11:55:06,454 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy@-268823809 - Will use gz compression
    11:55:06,455 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy@-268823809 - Will use the pattern /var/log/ditto/policies.log.%d{yyyy-MM-dd} for the active file
    11:55:06,457 |-INFO in c.q.l.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy - The date pattern is 'yyyy-MM-dd' from file name pattern '/var/log/ditto/policies.log.%d{yyyy-MM-dd}.gz'.
    11:55:06,457 |-INFO in c.q.l.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy - Roll-over at midnight.
    11:55:06,462 |-INFO in c.q.l.core.rolling.DefaultTimeBasedFileNamingAndTriggeringPolicy - Setting initial period to Mon Apr 11 11:55:06 CEST 2022
    Thomas Jaeckle
    @thjaeckle
    does it stop there? maybe follow the logs via kubectl logs -f c2e-ditto-policies-5fd487989f-lwznc
    Johan
    @JohanBlue
    yes, it does stop there. It's the same messages.
    Thomas Jaeckle
    @thjaeckle
    @JohanBlue did you change any logging configuration for ditto before installing the chart?
    Johan
    @JohanBlue
    I did not
    Johan
    @JohanBlue
    should I?
    Thomas Jaeckle
    @thjaeckle
    no, just wanted to rule that out
    the chart should not log to a file by default - which apparently happens in your case - so we have to check why this is happening..
    Johan
    @JohanBlue
    maybe because I'm using the "default" namespace?
    Thomas Jaeckle
    @thjaeckle
    I don't think that this has an influence
    Johan
    @JohanBlue
    when I do kubectl get services I get:
    $ kubectl get services
    NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                           AGE
    c2e-adapter-amqp-vertx            LoadBalancer   10.106.154.222   10.106.154.222   5672:32672/TCP,5671:32671/TCP     h18m
    c2e-adapter-http-vertx            LoadBalancer   10.104.0.105     10.104.0.105     8080:30080/TCP,8443:30443/TCP     h18m
    c2e-adapter-mqtt-vertx            LoadBalancer   10.103.24.73     10.103.24.73     1883:31883/TCP,8883:30883/TCP     h18m
    c2e-artemis                       ClusterIP      10.102.9.17      <none>           5671/TCP                          h18m
    c2e-dispatch-router               ClusterIP      10.101.78.95     <none>           5673/TCP                          h18m
    c2e-dispatch-router-ext           LoadBalancer   10.101.111.82    10.101.111.82    15671:30671/TCP,15672:30672/TCP   h18m
    c2e-ditto-gateway                 ClusterIP      10.111.216.195   <none>           8080/TCP                          h18m
    c2e-ditto-nginx                   LoadBalancer   10.102.64.159    10.102.64.159    8080:31040/TCP                    h18m
    c2e-ditto-swaggerui               ClusterIP      10.106.183.17    <none>           8080/TCP                          h18m
    c2e-service-auth                  ClusterIP      10.106.54.125    <none>           5671/TCP                          h18m
    c2e-service-command-router        ClusterIP      10.97.147.126    <none>           5671/TCP                          h18m
    c2e-service-device-registry       ClusterIP      10.99.252.228    <none>           5671/TCP,8080/TCP,8443/TCP        h18m
    c2e-service-device-registry-ext   LoadBalancer   10.106.5.69      10.106.5.69      28080:31080/TCP,28443:31443/TCP   h18m
    ditto-mongodb                     ClusterIP      10.102.138.215   <none>           27017/TCP                         h18m
    kubernetes                        ClusterIP      10.96.0.1        <none>           443/TCP                           h20m
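    A hedged smoke test against those endpoints (the IP is the c2e-ditto-nginx address from the listing; ditto:ditto are the Ditto chart's default credentials, adjust if you changed them):
    $ curl -u ditto:ditto http://10.102.64.159:8080/api/2/search/things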
    Katkuri Ramesh Netha
    @rameshkatkuri:matrix.org
    I have installed the helm chart and the pods are up and running. I have to test the scenarios below:
    1. I'm able to send a control command through hono to ditto.
    2. I'm trying to send a control/command message to the device through hono, but I'm getting a 408 error.
    @thjaeckle Can you please help with this?
    Johan
    @JohanBlue
    Hello,
    excuse me, I'm back again after some time.
    I have changed machines and I still have problems installing cloud2edge.
    It gives me a timeout error on the installation BUT there are more pods active than last time.
    $helm install --dependency-update cloud2edge $PATHFILE --debug --wait --timeout 40m --set hono.useLoadBalancer=true --set ditto.nginx.service.type=LoadBalancer
    ///it does the install
    ready.go:277: [debug] Deployment is not ready: default/cloud2edge-ditto-swaggerui. 0 out of 1 expected pods are ready
    ///it blocks on it again
    ready.go:277: [debug] Deployment is not ready: default/cloud2edge-ditto-swaggerui. 0 out of 1 expected pods are ready
    Error: INSTALLATION FAILED: timed out waiting for the condition
    helm.go:84: [debug] timed out waiting for the condition
    INSTALLATION FAILED
    main.newInstallCmd.func2
        helm.sh/helm/v3/cmd/helm/install.go:127
    github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.3.0/command.go:856
    github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.3.0/command.go:974
    github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.3.0/command.go:902
    main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
    runtime.main
        runtime/proc.go:255
    runtime.goexit
        runtime/asm_amd64.s:1581
    
    
    $ kubectl get pods
    NAME                                                  READY   STATUS                 RESTARTS   AGE
    cloud2edge-adapter-amqp-vertx-799cc7c8f5-p8pzr        1/1     Running                0          44m
    cloud2edge-adapter-http-vertx-56b688f974-klcmh        1/1     Running                0          44m
    cloud2edge-adapter-mqtt-vertx-5c556bd45-tw8pr         1/1     Running                0          44m
    cloud2edge-artemis-7b86d8f758-qg7nw                   1/1     Running                0          44m
    cloud2edge-dispatch-router-586bcc64c9-5tl8h           1/1     Running                0          44m
    cloud2edge-ditto-concierge-7bdfcf4c6f-pgwr7           1/1     Running                0          44m
    cloud2edge-ditto-connectivity-69d7fc9599-r242t        1/1     Running                0          44m
    cloud2edge-ditto-gateway-5955667cbb-r5pwp             1/1     Running                0          44m
    cloud2edge-ditto-nginx-5fb7f9bc57-fm985               1/1     Running                0          44m
    cloud2edge-ditto-policies-55d67966ff-jhpz2            1/1     Running                0          44m
    cloud2edge-ditto-swaggerui-78b8d64b9-jfc9f            0/1     CreateContainerError   0          44m
    cloud2edge-ditto-things-6bd67d9d55-4bpb2              1/1     Running                0          44m
    cloud2edge-ditto-thingssearch-98db7bcc9-bbcp9         1/1     Running                0          44m
    cloud2edge-service-auth-7c7f7d6d55-b5tvp              1/1     Running                0          44m
    cloud2edge-service-command-router-7f4ccc848f-n82w9    1/1     Running                0          44m
    cloud2edge-service-device-registry-5c9846dfc9-8kmjz   1/1     Running                0          44m
    ditto-mongodb-7db65ff8c-59k9z                         1/1     Running                0          44m
    
    $ kubectl describe pod cloud2edge-ditto-swaggerui-78b8d64b9-jfc9f
    Name:         cloud2edge-ditto-swaggerui-78b8d64b9-jfc9f
    Namespace:    default
    Priority:     0
    Node:         minikube/192.168.49.2
    Start Time:   Wed, 27 Apr 2022 10:19:46 +0200
    Labels:       app.kubernetes.io/instance=cloud2edge
                  app.kubernetes.io/name=ditto-swaggerui
                  pod-template-hash=78b8d64b9
    Annotations:  <none>
    Status:       Pending
    IP:           172.17.0.18
    IPs:
      IP:           172.17.0.18
    Controlled By:  ReplicaSet/cloud2edge-ditto-swaggerui-78b8d64b9
    Init Containers:
      ditto-init:
        Container ID:  docker://2ae2b39cd99db2d56b85d1858245413b8433d1785923eb2800508b9698cf34bf
        Image:         docker.io/swaggerapi/swagger-ui:v4.6.1
        Image ID:      docker-pullable://swaggerapi/swagger-ui@sha256:a4e032a1dc6f4522ca2bfc51869867257ae7396082bdcc7548348523e2a210fb
        Port:          <none>
        Host Port:     <none>
        Command:
          sh
          -ec
          mkdir -p /usr/share/nginx/html/openapi 
          curl -sL https://raw.githubusercontent.com/eclipse/ditto/2.4.0/documentation/src/main/resources/openapi/ditto-api-2.yml -o /usr/share/nginx/html/openapi/ditto-openapi-2.yaml               
          cp -rv /etc/nginx/. /init-config/
          cp -rv /usr/share/nginx/html/. /init-content/
          mkdir /var/lib/nginx/logs
          mkdir /var/lib/nginx/tmp
    
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Wed, 27 Apr 2022 10:21:29 +0200
          Finished:     Wed, 27 Apr 2022 10:21:30 +0200
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /init-config from swagger-ui-config (rw)
          /init-content from swagger-ui-content (rw)
          /var/lib/nginx from swagger-ui-work (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5n6b8 (ro)
    Containers:
      ditto-swaggerui:
        Container ID:   
        Image:          docker.io/swaggerapi/swagger-ui:v4.6.1
        Image ID:       
        Port:           8080/TCP
        Host Port:      0/TCP
        State:          Waiting
          Reason:       CreateContainerError
        Ready:          False
        Restart Count:  0
        Limits:
          cpu:     100m
          memory:  32Mi
        Requests:
          cpu:     50m
          memory:  16Mi
        Environment:
          QUERY_CONFIG_ENABLED:  true
        Mounts:
          /etc/nginx from swagger-ui-config (rw)
          /run/nginx from swagger-ui-run (rw)
          /usr/share/nginx/html from swagger-ui-content (rw)
          /usr/share/nginx/html/ from swagger-ui-api (rw,path="openapi")
          /var/cache/nginx from swagger-ui-cache (rw)
          /var/lib/nginx from swagger-ui-work (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5n6b8 (ro)
    Conditions:
      Type              Status
      Initialized       True 
      Ready             False 
      ContainersReady   False 
      PodScheduled      True 
    Volumes:
      swagger-ui-api:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      cloud2edge-ditto-swaggerui
        Optional:  false
      swagger-ui-cache:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:     
        SizeLimit:  <unset>
      swagger-ui-work:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:     
        SizeLimit:  <unset>
      swagger-ui-config:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:     
        SizeLimit:  <unset>
      swagger-ui-content:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:     
        SizeLimit:  <unset>
      swagger-ui-run:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:     
        SizeLimit:  <unset>
      kube-api-access-5n6b8:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       <nil>
        DownwardAPI:             true
    QoS Class:                   Burstable
    Node-Selectors:              <none>
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason       Age                    From               Message
      ----     ------       ----                   ----               -------
      Normal   Scheduled    45m                    default-scheduler  Successfully assigned default/cloud2edge-ditto-swaggerui-78b8d64b9-jfc9f to minikube
      Warning  FailedMount  45m (x2 over 45m)      kubelet            MountVolume.SetUp failed for volume "swagger-ui-api" : failed to sync configmap cache: timed out waiting for the condition
      Normal   Pulling      44m                    kubelet            Pulling image "docker.io/swaggerapi/swagger-ui:v4.6.1"
      Normal   Pulled       43m                    kubelet            Successfully pulled image "docker.io/swaggerapi/swagger-ui:v4.6.1" in 1m38.091229094s
      Normal   Created      43m                    kubelet            Created container ditto-init
      Normal   Started      43m                    kubelet            Started container ditto-init
      Warning  Failed       41m (x9 over 43m)      kubelet            Error: Error response from daemon: Duplicate mount point: /usr/share/nginx/html
      Normal   Pulled       4m59s (x180 over 43m)  kubelet            Container image "docker.io/swaggerapi/swagger-ui:v4.6.1" already present on machine
    so this is the troublesome pod, but I don't get where it doesn't want to work
    Thomas Jaeckle
    @thjaeckle
    hi @JohanBlue
    we'll look into it (we recently had some changes in the swagger-ui configuration in the helm charts) ..
    in the meantime: the swagger-ui is not mandatory and can also be disabled via configuration
    Johan
    @JohanBlue
    so I should add "--set ditto.swaggerui=disabled" to my helm command to resolve the problem?
    Thomas Jaeckle
    @thjaeckle
    no .. ditto.swaggerui.enabled=false
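    For example, the earlier install command with that value added (same release name and chart reference as before):
    $ helm install --set hono.useLoadBalancer=true --set ditto.nginx.service.type=LoadBalancer --set ditto.swaggerui.enabled=false c2e eclipse-iot/cloud2edge --debug --wait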
    Johan
    @JohanBlue
    ok thanks
    Johan
    @JohanBlue
    It works great, thanks a lot.
    Mehdi Kherbache
    @MehdiKherb
    hello, I'm trying to connect a simulated Contiki-NG based MQTT client to the c2e architecture. To do so, I need the IPv6 address of Hono's MQTT adapter, which isn't configured by default if I'm not wrong! Is it possible to configure it manually after deployment of the c2e architecture?
    Kai Hudalla
    @sophokles73
    If you are deploying to Minikube, then it looks like this is not (yet) possible: https://minikube.sigs.k8s.io/docs/faq/#does-minikube-support-ipv6
    Otherwise, my understanding is that it is mainly a question of whether the Kubernetes runtime, and in particular the Container Network Interface, supports IPv6. You may find some insight in this k8s blog post
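    As a quick check on a cluster that is supposed to support dual-stack, pod addresses (including any IPv6 ones) show up in status.podIPs, so a sketch like this shows whether pods got an IPv6 address at all:
    $ kubectl get pods -o custom-columns=NAME:.metadata.name,IPS:.status.podIPs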
    Mehdi Kherbache
    @MehdiKherb
    thank you for the insights! Indeed I am deploying to minikube, so in order to enable IPv6 support I need to deploy in MicroK8s instead, right?
    Kai Hudalla
    @sophokles73
    I do not know because I haven't yet had the requirement to use IPv6.
    Mehdi Kherbache
    @MehdiKherb
    alright, thanks
    Johan
    @JohanBlue
    Hello, have you ever tried to put cloud2edge on AWS EKS?
    Thomas Jaeckle
    @thjaeckle
    @JohanBlue we run our commercial service on AWS EKS - so not exactly the cloud2edge package, but similar
    do you have problems with that?
    Johan
    @JohanBlue
    I'm trying to do the same, yes.
    Mehdi Kherbache
    @MehdiKherb
    Hello, how can I disable the authentication mechanism in Hono?