Dave
@rising_dark_gitlab
I think I've finally put 2 and 2 together to make 5 or 6: the "redis" chart enables persistence by default, but the Mainflux charts define redis-auth.master.persistence.enabled=false.
Am I meant to customize that for a deployment, or is the data in Redis truly transient?
Thanks, and my apologies for the noise.
Referring back to the Mainflux manual, Redis only exists to provide an event source via Redis Streams. There is no data stored in Redis, so yes, it is completely transient.
Manuel Imperiale
@manuio
@rising_dark_gitlab check here: https://github.com/mainflux/devops/blob/master/charts/mainflux/templates/things-deployment.yaml#L28 and here: https://github.com/mainflux/mainflux/blob/master/cmd/things/main.go#L138
It's used as the Things service auth cache, so Redis is not only used for event sourcing.
Dave
@rising_dark_gitlab
@manuio That is correct, but
https://github.com/mainflux/devops/blob/master/charts/mainflux/values.yaml#L199
clearly shows that persistence is disabled for redis-auth. Should I enable persistence for redis-auth, or is it safe to delete/recreate (will it just force re-authentication)?
Dave
@rising_dark_gitlab
Interestingly, redis-streams-master DOES have persistence enabled. I'm clearly confused about this. Is it something I need to worry about? I've forced a migration via kubectl drain --delete-emptydir and things seem to keep working.
Manuel Imperiale
@manuio
@rising_dark_gitlab redis-auth keeps a mapping of ID/Key in RAM to accelerate future identity requests. If you restart the service, the mapping will be regenerated without losing any information, since all of it is persisted in the Things DB. But you can also use a persistent volume. It's up to you.
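A minimal Go sketch of the cache-aside behaviour described above, assuming a go-redis client and a hypothetical lookupInThingsDB helper standing in for the real Things DB query; the point is that a cold cache only costs one extra DB round trip, which is why losing the Redis data is harmless:

package authcache

import (
	"context"
	"time"

	"github.com/go-redis/redis/v8"
)

// identify resolves a thing key to a thing ID, using Redis purely as a cache.
func identify(ctx context.Context, cache *redis.Client, key string) (string, error) {
	// Fast path: the mapping is already cached in Redis.
	if id, err := cache.Get(ctx, "thing:"+key).Result(); err == nil {
		return id, nil
	}

	// Cache miss (e.g. after a pod restart): fall back to the persistent Things DB.
	id, err := lookupInThingsDB(ctx, key)
	if err != nil {
		return "", err
	}

	// Repopulate the cache so subsequent identify calls stay cheap.
	_ = cache.Set(ctx, "thing:"+key, id, 10*time.Minute).Err()
	return id, nil
}

// lookupInThingsDB is a placeholder; in Mainflux this would query the Things database.
func lookupInThingsDB(ctx context.Context, key string) (string, error) {
	return "thing-id-for-" + key, nil
}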
Dave
@rising_dark_gitlab
@manuio Thanks for clarifying. I'm happy not to use persistent volumes, I was just a little taken aback by the need for kubernetes to purge the data when migrating the pod to another node. All is clear now.
Manuel Imperiale
@manuio
@rising_dark_gitlab great! :+1:
magixmin
@magixmin
Hey, why does my Grafana always show Network Error: Bad Gateway (502)?
It worked before.
Alexander Teplov
@teploff

Hello everyone!
Dear @drasko @nmarcetic, please give me some advice.
My question concerns dynamic configuration.
Is there a way to get the data flow from new things that were added after the app's connection with Mainflux was established?
In more detail:
1) Create a Mainflux user.
2) Under that user, create things "a", "b", "c", an application thing "app", and channels.
3) Things "a", "b", "c" start to publish data, thing "app" starts to subscribe, and it works well.
4) After that, the user realizes they forgot to add thing "d", and creates it.
So how do I get the data flow from thing "d" without restarting the subscribing thing "app"?

Thanks

Manuel Imperiale
@manuio
@molodoj88 simply connect the new thing ("d") to the same channel and you will be able to get the data without restarting anything
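A minimal sketch of that step over the Things HTTP API, assuming a PUT /channels/{channel_id}/things/{thing_id} connect endpoint and the service listening on port 8182 (verify both against your Mainflux version); the IDs and token are placeholders:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	const (
		thingsURL = "http://localhost:8182" // Things service (adjust to your deployment)
		channelID = "<EXISTING_CHANNEL_ID>"
		thingID   = "<NEW_THING_D_ID>"
		userToken = "<USER_TOKEN>"
	)

	// Connect the new thing to the already-used channel; subscribers on that
	// channel start receiving its messages without any restart.
	url := fmt.Sprintf("%s/channels/%s/things/%s", thingsURL, channelID, thingID)
	req, err := http.NewRequest(http.MethodPut, url, nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", userToken)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("connect status:", resp.Status)
}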
Drasko DRASKOVIC
@drasko
@teploff I think you are referring to the Writers, which have a hard-coded config file to connect to NATS.
They are stateless in order to be clusterable, so we do not change their internal state.
What you should do with your app is not subscribe directly to NATS, but rather go via the frontend protocol adapters (MQTT, WS, CoAP, HTTP, ...).
This way you can subscribe/unsubscribe dynamically.
NATS is on the internal network and is good for internal applications / DB writers.
However, when building outside applications on top of Mainflux, it is better to treat them like ordinary things and use their API keys for auth.
Being on the internal network, NATS is not protected by the auth.
That being said, you can just use the config file to subscribe to all NATS channels (*).
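To illustrate the adapter-based approach, here is a minimal sketch with the Eclipse Paho Go client subscribing through the MQTT adapter; the broker address, and the convention of thing ID as MQTT username and thing key as password, are assumptions to check against your deployment:

package main

import (
	"fmt"
	"log"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	const (
		broker    = "tcp://localhost:1883" // Mainflux MQTT adapter (assumed address)
		thingID   = "<APP_THING_ID>"       // the "app" is treated as an ordinary thing
		thingKey  = "<APP_THING_KEY>"
		channelID = "<CHANNEL_ID>"
	)

	opts := mqtt.NewClientOptions().
		AddBroker(broker).
		SetClientID("app-subscriber").
		SetUsername(thingID).
		SetPassword(thingKey)

	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	// Subscribing on the channel topic means any thing later connected to this
	// channel (e.g. thing "d") is picked up without restarting the subscriber.
	topic := fmt.Sprintf("channels/%s/messages", channelID)
	token := client.Subscribe(topic, 0, func(_ mqtt.Client, msg mqtt.Message) {
		fmt.Printf("received on %s: %s\n", msg.Topic(), msg.Payload())
	})
	if token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	time.Sleep(time.Minute) // keep the subscriber alive for the demo
}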
Alexander Teplov
@teploff
@manuio @drasko thank you so much!
Drasko DRASKOVIC
@drasko
:+1:
Jonathan Dreyer
@jonathandreyer
Hi everybody, I have developed a micro-service to forward messages over HTTP. Currently it targets the Mainflux release v0.11.0, but I will update the code for the next release (v0.12.0). The source code is available here.
As discussed with @drasko in PR #1158, I have migrated the PR into an extension of the Mainflux platform. Don't hesitate to open issues if you have comments, ideas, etc.
Drasko DRASKOVIC
@drasko
Thanks @jonathandreyer !
I'll take a look
Jonathan Dreyer
@jonathandreyer
@drasko With pleasure.
I don't know if anybody has already tried to use the upstream components in Go, because I'm having some trouble doing that. For the next release of http-forwarder (v0.12.0), I'm trying to use components coming from "master" to anticipate the next release (and also to create a CI that tests the integration of those components into this extension).
The returned error is:
go: github.com/jonathandreyer/mainflux-httpforwarder/cmd/http-forwarder imports github.com/mainflux/mainflux/messaging/nats: package provided by github.com/mainflux/mainflux at latest version v0.11.0 but not at required version v0.11.1-0.20210209214404-f0f60e2d2a2c
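For reference, the error means the build wants a newer commit of github.com/mainflux/mainflux than the tagged v0.11.0. A hedged sketch of one way around it, using the pseudo-version quoted in the error (adjust to whatever commit of master you actually target):

# pull a specific master commit into the module graph
go get github.com/mainflux/mainflux@f0f60e2d2a2c

# which ends up in go.mod as something like:
require github.com/mainflux/mainflux v0.11.1-0.20210209214404-f0f60e2d2a2c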
Jonathan Dreyer
@jonathandreyer
Hi everybody, I have started testing the provision service with the release v0.11.0, but it returns the error:
mainflux-provision | {"level":"warn","message":"Method provision for token: <TOKEN> and things: [] took 54.290893ms to complete with error: failed to create bootstrap config : failed to create entity : 405 Method Not Allowed.","ts":"2021-02-21T21:38:20.369064276Z"}
mainflux-bootstrap | {"level":"warn","message":"Method bootstrap for thing with external id <EXTERNAL_ID> took 4.531134ms to complete with error: non-existent entity.","ts":"2021-02-21T21:38:20.380319273Z"}
Has somebody already had this error? For information, I have also tested with the master version and there is no error.
While reading the documentation, I found something strange on the page for the provision service (here). The ports of the first two requests (8888 and 8091) seem wrong, because the .env file indicates 8190. If you would like, I can create a PR to fix that in the docs repository.
When I started to use the provision service (as an add-on) by following the documentation, the example did not work because an error related to certificate generation was returned. I discovered that it is also necessary to start the certs service, or to disable its usage in .env via the environment variable MF_PROVISION_X509_PROVISIONING. This issue was introduced by #1221. To fix that, I propose to disable the usage of certs provisioning, as indicated in the README.md of the provision service. @MF-Teams, what is your point of view?
I have also discovered two mistakes in the provision service: one in the docker-compose (the environment variable MF_PROVISION_LOG_LEVEL is duplicated) and one in the README.md (the environment variables MF_PROVISION_HTTP_PORT, MF_PROVISION_SERVER_KEY, MF_PROVISION_PASS & MF_PROVISION_USER are duplicated). As with the documentation repository, I can create a PR to fix that.
Drasko DRASKOVIC
@drasko
@mteodor is the maintainer of the Provision service; I would like to hear his opinion on this.
Mirko Teodorovic
@mteodor
@jonathandreyer I've created a PR to sort out the different ports and default values for provision: https://github.com/mainflux/mainflux/pull/1367/files. For the docs you can make a PR, thanks.
Mirko Teodorovic
@mteodor
For the error you are having, set
MF_PROVISION_BS_SVC_URL to http://localhost:8202/things
Jonathan Dreyer
@jonathandreyer
@mteodor Thanks for the feedback and your proposal regarding the error. With your message, I have fixed the bug. I will create a PR once yours is merged.
Jonathan Dreyer
@jonathandreyer
@mteodor I have another question regarding the provision service. When I provision a new device (with the provision service), multiple entries are added to the thing's metadata field (like 'cfg_id': '<THING_ID>', 'ctrl_channel_id': '', 'data_channel_id': '', 'export_channel_id': '', 'type': 'gateway'), which are not useful for a simple device. I understand that in some of your use cases (e.g. a gateway) they are useful. Is it possible to disable that information? A temporary idea could be to rewrite the content of the thing's metadata after provisioning (without these fields), but it seems a bit odd to have to clean up the thing's configuration afterwards.
Mirko Teodorovic
@mteodor
@jonathandreyer, no, currently it is not possible to configure the provision service to skip the mentioned fields; they are the minimum. You can only have additional metadata fields configured in config.toml.
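A minimal Go sketch of the "temporary idea" mentioned above (rewriting the thing's metadata after provisioning), assuming a PUT /things/{thing_id} update endpoint on the Things service; the port, IDs, token, and payload shape are assumptions to check against your Things API version:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	const (
		thingsURL = "http://localhost:8182" // Things service (adjust to your deployment)
		thingID   = "<THING_ID>"
		userToken = "<USER_TOKEN>"
	)

	// Replace the provision-generated metadata with only the fields you care about.
	payload := map[string]interface{}{
		"metadata": map[string]interface{}{
			"type": "device",
		},
	}
	body, _ := json.Marshal(payload)

	req, err := http.NewRequest(http.MethodPut, fmt.Sprintf("%s/things/%s", thingsURL, thingID), bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", userToken)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("update status:", resp.Status)
}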
Jonathan Dreyer
@jonathandreyer
@mteodor Thanks for your reply and your support. I have tested the temporary idea and it is working fine, so I will use it.
I have another question/remark (sorry for so many questions). Yesterday I tested the GET request to see which content is used, and it doesn't work. Today I tested it again with the full default config (docker & config.toml) and there is really a problem. The returned code is 500 and the internal error is:
mainflux-provision | {"level":"warn","message":"Method mapping for token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MTQyMzYxNTIsImlhdCI6MTYxNDIwMDE1MiwiaXNzIjoibWFpbmZsdXguYXV0aCIsInN1YiI6ImFkbWluQGxvY2FsaG9zdC5sb2NhbCIsImlzc3Vlcl9pZCI6ImVmODNiZDk2LTY3NDctNGVjZS04ZmFmLTY3YzllMDExMmM2NyIsInR5cGUiOjB9.G8J2yyy47Kd9UaB0FVN7VfJWkeRl_lUo3Sl3rGJtCII took 6.51463ms to complete with error: unauthorized access : failed to fetch entity : 404 Not Found","ts":"2021-02-24T20:55:52.502075499Z"}
Is it another problem in my configuration, or an error in the service?
krishna
@krishna70425541_twitter
Hi, can anyone help me with this? I am using the Mainflux CoAP adapter to post and get messages.
My CoAP URL: coap://54.162.61.126:31739/channels/cabf1231-7aef-4e70-a780-4a6c69fa212d/messages?authorization=54919bc7-3c6d-4d03-948c-75104ea1eaa5
Error on the console:
{"level":"warn","message":"Failed to authorize: bad request","ts":"2021-02-28T14:18:42.837932711Z"}
{"level":"warn","message":"Failed to authorize: bad request","ts":"2021-02-28T14:18:47.868028109Z"}
What is wrong with this URL?
In the CoAP URL, the authorization should be the thing key, right?
krishna
@krishna70425541_twitter
client.observe(
        new CoapHandler() {
            @Override
            public void onLoad(CoapResponse response) {
                String content = response.getResponseText();
                log.info("value ::::: {} ", content);
            }

            @Override
            public void onError() {
                log.error("OBSERVING FAILED (press enter to exit)");
            }
        }, 0);
log.info("End with response ::::::::::::: {}");
};
krishna
@krishna70425541_twitter
@teploff @jonathandreyer @mteodor can you help me with this?
Drasko DRASKOVIC
@drasko
@dusanb94 can help with this ^
@krishna70425541_twitter is the payload OK? I mean SenML.
krishna
@krishna70425541_twitter
@drasko Yes, the payload is OK. I am sending a TEXT_PLAIN type message.
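For context on the payload question: the stock Mainflux writers normally expect SenML (RFC 8428) records rather than arbitrary plain text. A minimal example payload, with made-up names and values:

[
  {"bn": "sensor-1:", "bt": 1614500000, "n": "temperature", "u": "Cel", "v": 21.5},
  {"n": "humidity", "u": "%RH", "v": 48.2}
]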