    Julian
    @JulianFeinauer
    But @writetoatul_twitter I would advise you to have a look at the Bosch IoT Things managed service. I think this could fit you better than running Ditto yourself
    Atul Kumar
    @writetoatul_twitter
    Sorry, but the business requirement is a local solution @JulianFeinauer that could run without internet access
    ottlukas
    @ottlukas
    ok default is indeed devops:foobar and works for me in my test installation ;)
    Julian
    @JulianFeinauer
    :+1:
    Thomas Jaeckle
    @thjaeckle

    I'll quote what @ottlukas said once again; if you read carefully you should find the credentials in his answer:

    ok default is indeed devops:foobar and works for me in my test installation ;)
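
    Those default devops credentials protect the devops HTTP endpoints via basic auth. A minimal sketch of using them, assuming the gateway is exposed on localhost:8080 and using the /devops/logging endpoint from the operating documentation:

    # sketch: read the devops logging configuration with the default credentials
    # (assumes the Ditto gateway is reachable on localhost:8080)
    import requests

    resp = requests.get("http://localhost:8080/devops/logging",
                        auth=("devops", "foobar"))
    print(resp.status_code, resp.json())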

    Atul Kumar
    @writetoatul_twitter
    what is the minimum system requirement for Ditto?
    Julian
    @JulianFeinauer
    I guess a system to run it on?
    Thomas Jaeckle
    @thjaeckle
    I would recommend 4 CPUs and min. 4GB of RAM
    Thomas Jaeckle
    @thjaeckle
    ah, that is for single instances of course - if you need a highly available cluster (meaning 3 instances for each of the 6 Ditto services = 18 containers) I would recommend at least 10-12 CPUs and 12GB of RAM
    best in a Kubernetes cluster with several masters and e.g. using at least 3 worker nodes
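
    As a rough illustration only (these are not official Ditto sizing numbers, just the guessed totals above split across the 6 services), per-container Kubernetes resource requests could look like this:

    # illustrative sketch - numbers are a guess derived from the totals above
    resources:
      requests:
        cpu: "500m"
        memory: "512Mi"
      limits:
        cpu: "1"
        memory: "1Gi"
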
    Florian Fendt
    @ffendt
    should we add that information to the official docs?
    Thomas Jaeckle
    @thjaeckle
    well, I just "guessed" - best would of course be to try it out and get as low as possible :D
    Julian
    @JulianFeinauer
    Do you see any possibility to have an alternative backend other than MongoDB in the near future? Ideally even pluggable?
    Thomas Jaeckle
    @thjaeckle

    @JulianFeinauer yes and no ;)
    for the EventSourcing persistence Cassandra should be quite easy to add by additionally making use of the Akka persistence cassandra plugin: https://github.com/akka/akka-persistence-cassandra
    some time ago we did a PoC and got that running
    so for the persistence we could easily have a pluggable option of MongoDB vs Cassandra

    however - the "no" part - for the search-index replacing MongoDB with Cassandra will not work (at least I'm quite sure that Cassandra is not built for doing JSON document based search queries)
    so for that part of Ditto another option to MongoDB would be Elasticsearch (which we also already positively evaluated)
    however, supporting Elasticsearch would be quite an effort - nothing we have on our mid-term agenda

    Julian
    @JulianFeinauer
    @thjaeckle thanks for the response… would you think that Postgres could be an option with its JSON / JSONB support (https://www.postgresql.org/docs/9.5/functions-json.html)? My point is that Mongo and Cassandra scale well and are nice as a service, but self-hosted they are mostly a pain in the ass
    and good old Postgres is just simple, easy to administer, and rock solid : )
    Thomas Jaeckle
    @thjaeckle
    that could work when using persistence plugins:
    https://github.com/akka/akka-persistence-jdbc
    or
    https://github.com/WegenenVerkeer/akka-persistence-postgresql
    we as maintainers however have no interest in adding support for those, so that would have to be led by the community and only in a way which does not break the existing persistence ;)
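
    To make "pluggable" concrete, a hedged sketch of the kind of HOCON configuration akka-persistence-jdbc expects for Postgres (the exact layout varies between plugin versions; database URL and credentials are placeholders):

    # sketch only - exact structure depends on the akka-persistence-jdbc version in use
    akka.persistence.journal.plugin = "jdbc-journal"
    akka.persistence.snapshot-store.plugin = "jdbc-snapshot-store"

    jdbc-journal.slick = ${slick}
    jdbc-snapshot-store.slick = ${slick}

    slick {
      profile = "slick.jdbc.PostgresProfile$"
      db {
        url = "jdbc:postgresql://localhost:5432/ditto"   # placeholder database
        user = "ditto"
        password = "changeme"
        driver = "org.postgresql.Driver"
      }
    }
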
    Julian
    @JulianFeinauer
    So what you say is, a PR should be a rather „mediocre“ amount of work and you would generally accept it, is that right @thjaeckle? :P
    Thomas Jaeckle
    @thjaeckle
    "mediocore" amount of work: yes and no :D
    for the Akka persistence it should be easy, yes (although we also have some "special queries" against MongoDB in our own codebase which would also have to get an alternative for Postgres)
    for the Search index it should be quite complex
    if you manage to get that all working w/o breaking existing stuff, we would accept a PR - but to be honest that is a big "IF" ;)
    Kai Hudalla
    @sophokles73
    I do not know how the Ditto team handles big contributions but in Hono we would also expect a contributor to indicate his willingness to maintain the code in a case where the existing maintainers do not have a primary interest in the functionality. FMPOV this might also result in becoming a committer on the project ...
    Julian
    @JulianFeinauer
    @sophokles73 thanks for the hint. I am not very familiar with how this is handled at Eclipse or in these projects specifically. But if we implemented that, then we would also use it and thus, of course, maintain it. That's way too big of a feature to do „just for fun“. So indeed I would also welcome being invited as a contributor in that case. And I generally agree that it is important to keep an eye on these things!
    Thomas Jaeckle
    @thjaeckle
    yes, thanks for the input on that
    I guess I implicitly assumed that when going through the work of such a major contribution, the contributing side already brings in some commitment to maintain "its baby" ;)
    Julian
    @JulianFeinauer
    Perhaps MongoDB is not that baaad … :D
    Yannic Klem
    @Yannic92
    :D
    Thomas Jaeckle
    @thjaeckle
    btw: Ditto can totally run with MongoDB 3.6 - if it is the new MongoDB license you are concerned about
    Julian
    @JulianFeinauer
    No, it's more the management. Without their Atlas service it's hard to administer (backups are not easy to make) and those things
    Patrick Sernetz
    @patrickse
    I am looking for a way to list all of the configured connections from Ditto. I've tried to look through the code and documentation but haven't found a suitable devops command. Is there a way to list connections?
    Thomas Jaeckle
    @thjaeckle

    hi @patrickse
    unfortunately there is no way to do this via an API yet - it is currently only possible by doing a query against MongoDB
    We got the same question earlier in the chat and this was my answer:

    hi @BobClaerhout - that should be possible by doing a Mongo query against the connectivity database, e.g.: db.connection_journal.distinct('pid')

    you could create a GitHub issue for that, I think it's a totally valid use case
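
    Until such an API exists, the same lookup can be scripted; a minimal pymongo sketch, assuming MongoDB runs on localhost:27017 and the connectivity service uses a database named "connectivity":

    # sketch: list connection ids by reading the connectivity journal directly
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    connection_ids = client["connectivity"]["connection_journal"].distinct("pid")
    print(connection_ids)
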
    Patrick Sernetz
    @patrickse
    @thjaeckle great.. thanks for the feedback. I will create an issue for that request
    Patrick Sernetz
    @patrickse

    I got another question on JavaScript payload mapping. I've got a thing with 3 features and I receive an MQTT message which can contain 2..3 fields. I've tried to set up a single JavaScript payload mapping to just update 2 of the 3 thing features. But I have no clue. I've already tried a few things:

    • Return an array of Ditto Protocol messages from the mapper to do iterative updates on the thing, where every feature is treated as a single Ditto Protocol message. But it looks like it's not possible to return an array (it would be quite handy).
    • Retrieve the current state of the thing, update the fields and push the changed thing back to Ditto, but I think that is not possible due to the sandboxing inside the JavaScript mapper.

    To sum up... I just want to do a partial update of a thing inside a JavaScript mapper.

    Thomas Jaeckle
    @thjaeckle
    hm, returning a JavaScript array with several Ditto Protocol messages should work; we added that prior to the Ditto 1.0.0 release
    that is also documented in the JavaScript docs of the function to implement: https://www.eclipse.org/ditto/connectivity-mapping.html#mapping-incoming-messages
    does that not work for you?
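
    For reference, a minimal sketch of an incoming mapper that returns an array with one Ditto Protocol message per feature (the thing namespace/name, feature names and payload fields are made up for illustration; the function and builder follow the linked documentation):

    function mapToDittoProtocolMsg(headers, textPayload, bytePayload, contentType) {
        // hypothetical incoming JSON payload, e.g. {"temperature": 21.3, "humidity": 58}
        var payload = JSON.parse(textPayload);
        var messages = [];
        if (payload.temperature !== undefined) {
            messages.push(Ditto.buildDittoProtocolMsg(
                'my.namespace', 'my-thing',                  // assumed thing namespace and name
                'things', 'twin', 'commands', 'modify',
                '/features/temperature/properties/value',
                headers, payload.temperature));
        }
        if (payload.humidity !== undefined) {
            messages.push(Ditto.buildDittoProtocolMsg(
                'my.namespace', 'my-thing',
                'things', 'twin', 'commands', 'modify',
                '/features/humidity/properties/value',
                headers, payload.humidity));
        }
        // returning an array of Ditto Protocol messages updates the thing part by part
        return messages;
    }
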
    Patrick Sernetz
    @patrickse
    I will have another look... I had something in mind... that this should work... but I had a failure in the mapping... I will test that out...
    Thanks for the reply @thjaeckle
    Patrick Sernetz
    @patrickse
    @thjaeckle stupid me... it's working. I had another problem with initializing the headers as an array and not as an object while creating the response message... Thanks a lot ;)
    now everything is fixed and working as intended
    Thomas Jaeckle
    @thjaeckle
    cool :+1:
    LeptonByte
    @LeptonByte

    hi everyone :)
    I've got an issue while connecting to Ditto via WebSocket with the Python websocket-client module:

    --- response header ---
    HTTP/1.1 400 Bad Request
    Server: nginx/1.16.0
    Date: Tue, 04 Feb 2020 12:00:03 GMT
    Content-Type: text/plain; charset=UTF-8
    Content-Length: 34
    Connection: keep-alive
    X-Frame-Options: SAMEORIGIN
    X-Content-Type-Options: nosniff
    X-XSS-Protection: 1; mode=block
    Access-Control-Allow-Origin: http://MY_SITE_URL
    Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
    Access-Control-Allow-Credentials: true
    Access-Control-Allow-Headers: Accept,Authorization,Cache-Control,Content-Type,Content-Length,DNT,If-Match,If-Modified-Since,If-None-Match,Keep-Alive,Origin,User-Agent,X-Requested-With

    any ideas what could cause the "400 bad request" ?

    python snippet:

     import websocket  # from the "websocket-client" package

     # id_token and the on_message/on_error/on_close callbacks are defined elsewhere
     header = {"Authorization": "Bearer " + id_token}
     ws = websocket.WebSocketApp("wss://MY_SITE_URL/ws/2",
                                 header=header,
                                 on_message=on_message,
                                 on_error=on_error,
                                 on_close=on_close)
     ws.run_forever()

    I use an nginx to verify the bearer token and map the token to the username for Ditto. This method works for the REST API...

    Thomas Jaeckle
    @thjaeckle
    Hi @LeptonByte, can you provide the nginx logs? Ditto does not produce a 400 with content type text/plain, so I guess this is a problem in the nginx
    LeptonByte
    @LeptonByte

    the docker-compose log shows:

    connectivity_1     | 2020-02-04 16:11:21,617 INFO  [] o.e.d.s.c.m.ReconnectActor akka://ditto-cluster/user/connectivityRoot/reconnect/singleton/supervised-child - Sending reconnects for Connections. Will be sent again after the configured interval of <PT10M>.
    connectivity_1     | 2020-02-04 16:11:21,620 INFO  [] o.e.d.s.c.m.ReconnectActor akka://ditto-cluster/user/connectivityRoot/reconnect/singleton/supervised-child - Sending reconnects completed.
    connectivity_1     | 2020-02-04 16:11:21,621 INFO  [] o.e.d.s.c.m.ReconnectActor akka://ditto-cluster/user/connectivityRoot/reconnect/singleton/supervised-child - Got reconnects completed.
    gateway_1          | 2020-02-04 16:11:22,843 WARN  [2fdae31e-4acf-459f-ba90-0681fbe45b8d] o.e.d.s.g.s.a.d.DummyAuthenticationProvider  - Dummy authentication has been applied for the following subjects: nginx:my_thing
    gateway_1          | 2020-02-04 16:11:22,849 INFO  [2fdae31e-4acf-459f-ba90-0681fbe45b8d] o.e.d.s.g.e.d.RequestResultLoggingDirective  - StatusCode of request GET '/ws/2' was: 400
    openresty_nginx_1  | 192.168.64.4 - - [04/Feb/2020:15:11:22 +0000] "GET /ws/2 HTTP/1.0" 400 34 "-" "-" "MY.IP"

    so in my opinion the request goes through the nginx to the gateway...

    Thomas Jaeckle
    @thjaeckle
    Hm, the HTTP/1.0 in the nginx log is what makes me wonder whether this is ok or not... You use OpenResty? I've heard of it but don't have a clue how to configure it...
    Is there any chance you can get the response body of the 400? The Content-Length is 34 bytes, so there should be a message in there.
    Thomas Jaeckle
    @thjaeckle
    Did you set proxy_http_version 1.1; in nginx for the proxied ws route?
    LeptonByte
    @LeptonByte

    sorry for the delay...
    There was no proxy_http_version in my nginx config file, so I added it.
    But that doesn't change anything:

    gateway_1          | 2020-02-05 09:29:35,489 WARN  [f5bb8b65-8473-45ad-a7f8-cba055f14c81] o.e.d.s.g.s.a.d.DummyAuthenticationProvider  - Dummy authentication has been applied for the following subjects: nginx:my_thing
    gateway_1          | 2020-02-05 09:29:35,493 INFO  [f5bb8b65-8473-45ad-a7f8-cba055f14c81] o.e.d.s.g.e.d.RequestResultLoggingDirective  - StatusCode of request GET '/ws/2' was: 400
    openresty_nginx_1  | 192.168.64.4 - - [05/Feb/2020:08:29:35 +0000] "GET /ws/2 HTTP/1.1" 400 34 "-" "-" "MY.IP"

    I thought your example (https://github.com/eclipse/ditto-examples/blob/master/grove-ctrl/python/ditto_grove_demo.py) could help, but it uses API Version 1...

    Thomas Jaeckle
    @thjaeckle
    API v1 and v2 should not make a difference here...
    it would be really helpful to see the response body of the 400
    alternatively, you could try to set the environment variable LOG_LEVEL_APPLICATION to DEBUG in the gateway and see if this adds more information
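
    In case it helps, a minimal docker-compose sketch of setting that variable for the gateway (assuming the compose file from the Ditto deployment folder is used):

    # excerpt of docker-compose.yml - only the added environment entry is relevant
    gateway:
      environment:
        - LOG_LEVEL_APPLICATION=DEBUG
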
    LeptonByte
    @LeptonByte
    I set the log-level like you said:
    gateway_1          | 2020-02-05 10:07:51,448 DEBUG [] o.e.d.s.g.e.d.CorrelationIdEnsuringDirective  - Created new CorrelationId: 354d1c43-38a2-429f-8673-7d5b7f6fd05a
    gateway_1          | 2020-02-05 10:07:51,448 DEBUG [354d1c43-38a2-429f-8673-7d5b7f6fd05a] o.e.d.s.u.t.TraceUriGenerator  - Returning fallback traceUri for '/ws/2': 'TraceInformation{traceUri='/other', tags={ditto.request.path=/other}}'
    gateway_1          | 2020-02-05 10:07:51,448 DEBUG [354d1c43-38a2-429f-8673-7d5b7f6fd05a] o.e.d.s.g.e.d.RequestTimeoutHandlingDirective  - Started mutable timer <StartedKamonTimer [name=roundtrip_http, tags={ditto.request.path=/other, ditto.request.method=GET, segment=overall}, onStopHandlers=[org.eclipse.ditto.services.utils.metrics.instruments.timer.OnStopHandler@cf94726a], segments={}, startTimestamp=8022541761770181, stopped=false]>.
    gateway_1          | 2020-02-05 10:07:51,451 DEBUG [354d1c43-38a2-429f-8673-7d5b7f6fd05a] o.e.d.s.g.s.a.AuthenticationChain  - Applying authentication provider <DummyAuthenticationProvider> to URI <http://MY_SITE_URL.dev/ws/2>.
    gateway_1          | 2020-02-05 10:07:51,451 WARN  [354d1c43-38a2-429f-8673-7d5b7f6fd05a] o.e.d.s.g.s.a.d.DummyAuthenticationProvider  - Dummy authentication has been applied for the following subjects: nginx:my_thing
    gateway_1          | 2020-02-05 10:07:51,451 DEBUG [354d1c43-38a2-429f-8673-7d5b7f6fd05a] o.e.d.s.g.s.a.AuthenticationChain  - Authentication using authentication provider <DummyAuthenticationProvider> to URI <http://MY_SITE_URL.dev/ws/2> was successful.
    gateway_1          | 2020-02-05 10:07:51,452 DEBUG [354d1c43-38a2-429f-8673-7d5b7f6fd05a] o.e.d.s.g.e.d.a.AuthorizationContextVersioningDirective  - Original authorization context: ImmutableAuthorizationContext [authorizationSubjects=[nginx:my_thing]]
    gateway_1          | 2020-02-05 10:07:51,452 DEBUG [354d1c43-38a2-429f-8673-7d5b7f6fd05a] o.e.d.s.g.e.d.a.AuthorizationContextVersioningDirective  - Mapped authorization context: ImmutableAuthorizationContext [authorizationSubjects=[nginx:my_thing, my_thing]]
    gateway_1          | 2020-02-05 10:07:51,457 INFO  [354d1c43-38a2-429f-8673-7d5b7f6fd05a] o.e.d.s.g.e.d.RequestResultLoggingDirective  - StatusCode of request GET '/ws/2' was: 400
    gateway_1          | 2020-02-05 10:07:51,457 DEBUG [354d1c43-38a2-429f-8673-7d5b7f6fd05a] o.e.d.s.g.e.d.RequestResultLoggingDirective  - Raw request URI was: Raw-Request-URI: /ws/2
    gateway_1          | 2020-02-05 10:07:51,458 DEBUG [354d1c43-38a2-429f-8673-7d5b7f6fd05a] o.e.d.s.g.e.d.RequestTimeoutHandlingDirective  - Finished timer <StartedKamonTimer [name=roundtrip_http, tags={ditto.request.path=/other, ditto.statusCode=400, ditto.request.method=GET, segment=overall}, onStopHandlers=[org.eclipse.ditto.services.utils.metrics.instruments.timer.OnStopHandler@cf94726a], segments={}, startTimestamp=8022541761770181, stopped=true]> with status <400>.
    openresty_nginx_1  | 192.168.64.4 - - [05/Feb/2020:09:07:51 +0000] "GET /ws/2 HTTP/1.1" 400 34 "-" "-" "MY.IP"
    Thomas Jaeckle
    @thjaeckle
    I am quite sure that your WS client does something wrong or your nginx is misconfigured
    I can reproduce the behavior when using my browser directly to access Ditto via http://localhost:8080/ws/2
    As a result I get a status code 400 and the following text is returned:
    Expected WebSocket Upgrade request - which is 34 bytes long (explaining your "Content-Length: 34")
    So somehow no WebSocket upgrade is performed - did you follow the nginx config we have in place for Ditto's /ws route? -> https://github.com/eclipse/ditto/blob/master/deployment/docker/nginx.conf#L50
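
    For comparison, a typical location block for the proxied /ws route, along the lines of the linked nginx.conf (the upstream name "gateway" and its port are assumptions):

    location /ws/2 {
      proxy_pass            http://gateway:8080/ws/2;   # assumed upstream service name/port
      proxy_http_version    1.1;
      proxy_set_header      Host $http_host;
      proxy_set_header      X-Real-IP $remote_addr;
      # these two headers are what actually perform the WebSocket upgrade
      proxy_set_header      Upgrade $http_upgrade;
      proxy_set_header      Connection "upgrade";
      proxy_read_timeout    1d;                          # keep long-lived WS connections open
    }
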
    LeptonByte
    @LeptonByte
    Thanks for the idea! My second nginx changed the X-Real-IP header entry to the IP of the first nginx. My bad.
    Now all works like it should. Sorry for wasting your time.