Configure the env for the docker image like: -e FQDN=10.100.0.129. After that the website will be accessible via https://10.100.0.129
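For example, roughly like this (the image name/tag and host networking here are assumptions on my side; substitute the bundle image and networking you already run):
# image name/tag is an assumption; use the bundle image you already have
# host networking is just one option; map ports explicitly if you prefer
docker run -d --name plgd-hub --network=host \
  -e FQDN=10.100.0.129 \
  ghcr.io/plgd-dev/hub/bundle:latest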
Thanks that fixed my redirects :)
Hello @harzival . I need to publish the newest app to the appstore. Will try to do it today and I will come back to you.
Good to know, I patiently await.
By the way, is there another way to onboard besides the iOS/Android app? Not a requirement, but CLI-based onboarding for cloud would be useful for automation.
By the way, don’t you want to use k8s instead of a single docker container @harzival ? There is a helm chart and it’s pretty easy to deploy. https://plgd.dev/deployment/k8s/
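Roughly something like this (the chart repo URL, release name, and value key are assumptions from memory; the linked docs have the exact commands):
# repo URL, release name, and the domain value below are assumptions; check the docs
helm repo add plgd https://charts.plgd.dev
helm repo update
helm install hub plgd/plgd-hub --set global.domain=hub.example.com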
I haven't used Kubernetes before, so I'll have to read up, but I'll give it a go.
@harzival You can use the ob tool to onboard a device to the bundle.
git clone https://github.com/plgd-dev/hub.git
cd hub/bundle/client/ob
go build .
# onboard any device to the bundle with address 10.100.0.129
./ob --addr 10.100.0.129
# show parameters
./ob --help
But you need to have Go installed to build the ob tool.
Both actually, for an automotive/warehousing company in the UK, trying to put OCF at the core of making a "Digital Twin" out of factories and such.
One part of that is low-cost modular sensors that are easily mass deployed (M5Stack's ESP32 units, fully encased with Grove ports, and a large variety of also-encased sensor types that pretty much plug and play; worth checking out). The plan is to have identical firmware on all of them, plus OTA 'drivers' for each type of sensor appendage, so I can give each operation a bucket of them; when plugged in they get onboarded and exposed. Currently any kind of sensor deployment takes us months, with bespoke design, network setup, security, and dashboards/monitoring. Once this works we can just make a single dashboard for the lot.
The other part is augmented reality, and that one is more personal to me: I want to make an open source, platform-agnostic augmented reality "browser" that lets all the glasses and AR-enabled devices have shared experiences via OCF. A server in any place, like your home or a supermarket, exposes standardised types of anchor points/tags (images on the wall, AprilTags, QR codes, etc). The first anchor added is the 0,0,0 point of the room and all others are relative to it; when any anchor is seen, the AR device knows its position relative to that 0,0,0 point and uses its native AR framework to keep track of its transform from there. The other thing exposed via OCF on that server is virtual "furniture"/experiences, which are just blobs of WebGL code with a position relative to that 0,0,0. The AR device loads those WebGL/HTML5 blobs through a headless browser and uses the renderable output to display them over the real world where they've been pre-placed (relative to the 0,0,0 point of the room the phone is tracking).