hertzli
@hertzli
When I successfully installed ShellHub on 4 other (Ubuntu) devices, I enabled remote access. Not sure how to do this on Synology, though.
Luis Gustavo S. Barreto
@gustavosbarreto

@gustavosbarreto I've tested v0.3.7 again: server on a Debian 10 VPS, agent on two Raspberry Pis, one running the latest Docker (19.03.12) and the other running the older Docker version installed by apt (18.09.1). v0.3.7 works perfectly on both! And setting SHELLHUB_HTTP_PORT in .env also works now. Excellent!

Thank you for testing it out and providing feedback

Luis Gustavo S. Barreto
@gustavosbarreto

Could this be caused by Docker running on a local port?

Maybe some security layer sits between the Docker daemon and the host system

hertzli
@hertzli
@gustavosbarreto Seems more like a system variable hasn't been set correctly, i.e. the /host/ should be replaced by an IP address or a host name?
Luis Gustavo S. Barreto
@gustavosbarreto

@gustavosbarreto Seems more like a system variable hasn't been set correctly, i.e. the /host/ should be replaced by an IP address or a host name?

@hertzli The -v /:/host option should mount the host filesystem inside the container at the /host directory

docker run --rm --privileged -it -v /:/host ubuntu
ls -la /host/var/
ls -la /host/var/services/
ls -la /host/var/services/homes
@hertzli can you run these commands and paste the output?
hertzli
@hertzli

docker run --rm --privileged -it -v /:/host ubuntu

Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
3ff22d22a855: Pull complete
e7cb79d19722: Pull complete
323d0d660b6a: Pull complete
b7f616834fd0: Pull complete
Digest: sha256:5d1d5407f353843ecf8b16524bc5565aa332e9e6a1297c73a92d3e754b8a636d
Status: Downloaded newer image for ubuntu:latest

ls -la /host/var/

total 72
drwxr-xr-x 18 root root 4096 Jul 24 06:46 .
drwxr-xr-x 23 root root 4096 Jul 24 06:45 ..
drwxr-xr-x 6 root root 4096 Jul 24 06:47 cache
drwxrwx--- 2 root root 4096 Nov 10 2017 crash
drwx--x--x 3 root root 4096 Sep 22 2019 db
drwxr-xr-x 3 root root 4096 May 29 07:03 dynlib
drwxr-xr-x 2 root root 4096 Nov 10 2017 empty
drwxr-xr-x 30 root root 4096 Aug 11 18:54 lib
lrwxrwxrwx 1 root root 11 Dec 29 2017 lock -> ../run/lock
drwxr-xr-x 20 root root 4096 Aug 3 10:52 log
drwxr-xr-x 31 root root 4096 Feb 13 11:00 packages
drwxr-xr-x 3 root root 4096 Dec 30 2017 quarantine
lrwxrwxrwx 1 root root 6 May 29 07:03 run -> ../run
drwxr-xr-x 2 root root 4096 Aug 2 06:52 services
drwxr-xr-x 4 root root 4096 Aug 10 11:14 spool
drwxr-xr-x 3 root root 4096 May 29 07:02 state
drwxr-xr-x 2 root root 4096 Nov 10 2017 synobackup
drwx------ 4 root root 4096 Dec 29 2017 target
drwxrwxrwx 4 root root 4096 Aug 11 19:48 tmp
drwxr-xr-x 3 root root 4096 Jul 24 06:46 update

ls -la /host/var/services/

total 8
drwxr-xr-x 2 root root 4096 Aug 2 06:52 .
drwxr-xr-x 18 root root 4096 Jul 24 06:46 ..
lrwxrwxrwx 1 root root 18 Apr 25 2018 download -> /volume1/@download
lrwxrwxrwx 1 root root 14 Jul 24 06:47 homes -> /volume1/homes
lrwxrwxrwx 1 root root 14 Feb 10 2018 music -> /volume1/music
lrwxrwxrwx 1 root root 24 Dec 29 2017 pgsql -> /volume1/@database/pgsql
lrwxrwxrwx 1 root root 14 Feb 10 2018 photo -> /volume1/photo
lrwxrwxrwx 1 root root 21 Aug 2 06:52 surveillance -> /volume1/surveillance
lrwxrwxrwx 1 root root 13 Jul 24 06:46 tmp -> /volume1/@tmp
lrwxrwxrwx 1 root root 14 Feb 10 2018 video -> /volume1/video
lrwxrwxrwx 1 root root 12 Dec 29 2017 web -> /volume1/web

ls -la /host/var/services/homes

lrwxrwxrwx 1 root root 14 Jul 24 06:47 /host/var/services/homes -> /volume1/homes
root@93fdbcd347b3:/#
The NAS runs on a RAID 6 setup
However, there's only one volume group, /volume1
hertzli
@hertzli
So, a container, mystifying_lamport, was created and started... I suppose that was for a purpose?
@gustavosbarreto ... or was it just an example to be deleted afterwards?
Luis Gustavo S. Barreto
@gustavosbarreto

@gustavosbarreto ... or was it just an example to be deleted afterwards?

With --rm, Docker automatically cleans up and removes the container when it exits

Here is the problem: lrwxrwxrwx 1 root root 14 Jul 24 06:47 homes -> /volume1/homes
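The broken link can be reproduced without Docker or a Synology box: homes is an absolute symlink, so inside the agent container it resolves against the container's own root, where /volume1 is not mounted. A minimal sketch (the /tmp/demo prefix is made up for the demo, not part of ShellHub):

```shell
# Recreate the Synology layout under a throwaway prefix (/tmp/demo is hypothetical).
mkdir -p /tmp/demo/host/var/services
ln -sfn /volume1/homes /tmp/demo/host/var/services/homes

# The link target is absolute, so it ignores the /tmp/demo/host prefix:
readlink /tmp/demo/host/var/services/homes

# Unless /volume1 exists on this root, the link dangles:
ls /tmp/demo/host/var/services/homes/ 2>/dev/null || echo "dangling symlink"
```

This is also why the -v /volume1:/volume1 workaround suggested later in the thread helps: it makes the absolute symlink target resolvable inside the container.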
hertzli
@hertzli
DSM has a "terminal" feature, catching return codes & errors. When trying to connect via the hub, this is the output:
time="2020-08-11T20:32:42Z" level=warning msg="exit status 1"
time="2020-08-11T20:32:42Z" level=warning msg="read /dev/ptmx: input/output error"
Luis Gustavo S. Barreto
@gustavosbarreto

Since you need to run the install as root, I used this to install ShellHub:
docker run -d \
--name=shellhub \
--restart=on-failure \
--privileged \
--net=host \
--pid=host \
-v /:/host \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/passwd:/etc/passwd \
-v /etc/group:/etc/group \
-e SERVER_ADDRESS=https://cloud.shellhub.io \
-e PRIVATE_KEY=/host/etc/shellhub.key \
-e TENANT_ID=3b1....myID.....efbf \
shellhubio/agent:v0.3.5

When I then start the container via the Synology package web interface, it flips on and off right away with this message: "Docker-containeren shellhub standsede uventet." ["The Docker container shellhub stopped unexpectedly"].

I'm too new to ShellHub to know where to look for a more precise log.

Run this command again without the "-d" for detailed output from docker

@hertzli Append "-v /volume1:/volume1 \" after the last "-v" argument and replace v0.3.5 with v0.3.7

It should work for now until a proper fix is found
hertzli
@hertzli
--rm conflicts with --restart=on-failure. If I run the docker command from the command line, the command keeps running in the foreground. If interrupted, the container disappears if using the --rm option.
Oki - will test
Luis Gustavo S. Barreto
@gustavosbarreto
docker run -d \
--name=shellhub \
--restart=on-failure \
--privileged \
--net=host \
--pid=host \
-v /:/host \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/passwd:/etc/passwd \
-v /etc/group:/etc/group \
-v /volume1:/volume1 \
-e SERVER_ADDRESS=https://cloud.shellhub.io \
-e PRIVATE_KEY=/host/etc/shellhub.key \
-e TENANT_ID=3b1....myID.....efbf \
shellhubio/agent:v0.3.7
Ensure that you have removed the older container first: docker stop shellhub and docker rm shellhub
Mike
@sixhills
I'm not sure if this is a minor bug or an undocumented restriction, but device names containing upper-case letters don't work. For example, if I rename a device in the web UI from its default MAC address to something more readable, like "Pi41", and attempt to ssh to it using "ssh pi@SSHID", the connection fails with "Invalid session target". Renaming the device to "pi41" works.
Otavio Salvador
@otavio
@sixhills do you mind opening an issue about this?
hertzli
@hertzli
time="2020-08-11T20:50:19Z" level=warning msg="exit status 1"
time="2020-08-11T20:50:19Z" level=warning msg=EOF
@sixhills My device name was changed to HzliSyn01, i.e., with cap letters in it. Will test after renaming
Hmm - "nsenter: stat of /proc/15774/ns/user failed: No such file or directory"
Could the same be the case with usernames with cap letters in them?
The username in this case is HzliSysAdm
hertzli
@hertzli
Anything I can do in the meantime, except sit on my hands?
hertzli
@hertzli
nsenter: stat of /proc/17224/ns/user failed: No such file or directory. True, /proc does not have such a subdir, even if the container is restarted (yes, with a new pid...).
Mike
@sixhills
@hertzli I've created a username on a Raspberry Pi containing upper-case letters and ssh user@SSHID works fine, so no problem with mixed-case usernames.
hertzli
@hertzli
Phew... :-)
Definitely a problem with /proc; it might very well be some DSM access restriction. I tried adding -v /proc:/proc \ to the install, just in case, but no cigar.
hertzli
@hertzli

So, the --pid=host enables shellhub to see all processes on the host. From the docs:
"By default, all containers have the PID namespace enabled.

PID namespace provides separation of processes. The PID Namespace removes the view of the system processes, and allows process ids to be reused including pid 1.

In certain cases you want your container to share the host’s process namespace, basically allowing processes within the container to see all of the processes on the system. "

However, it seems like the host does not know a PID matching that of the shellhub container, so there is no subdirectory /proc/<what_shellhub_thinks_its_PID_is>, and shellhub cannot stat the /proc/<process_id>/ns/user file.

In one test run, the system sees a shellhub PID of 31823. However, when trying to connect via the shellhub cloud, I get "nsenter: stat of /proc/32187/ns/user failed: No such file or directory". But 31823 is not the same as 32187...
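The nsenter failures make sense given how nsenter works: it stats /proc/<pid>/ns/* for the target PID as seen in the caller's PID namespace, so a PID reported from a different namespace simply has no entry there. A quick illustration on any Linux box (999999 is an assumed-unused PID, not from the original logs):

```shell
# Every live process exposes namespace handles under /proc/<pid>/ns;
# these are the files nsenter stats before joining a namespace:
ls /proc/$$/ns/

# A PID unknown to this PID namespace reproduces the error hertzli saw
# (999999 is just an assumed-unused PID):
stat /proc/999999/ns/user 2>&1 | tail -n 1
```

If the PID the agent reports and the PID the host sees disagree, as in the 31823 vs 32187 example, the stat fails in exactly this way.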

steschuser
@steschuser

documentation looks broken for registering-device/

curl http://docs.shellhub.io/guides/registering-device/ -I
HTTP/1.1 404 Not Found

Otavio Salvador
@otavio
@steschuser Ohh, sorry for that. Is it possible for you to report this?
pawanks
@pawanks

Hi all, I am trying to set up a local shellhub server, and I'm getting a 502 from the api server for /api/login.
Logs:
gateway_1 | 139.0.62.69 - - [12/Aug/2020:13:00:35 +0000] "POST /api/login HTTP/1.1" 502 565 "http://jarvis.parkplus.io/login?redirect=%2F" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36"
gateway_1 | 2020/08/12 13:00:35 [error] 11#11: *1 upstream prematurely closed connection while reading response header from upstream, client: 139.0.62.69, server: , request: "POST /api/login HTTP/1.1", upstream: "http://172.19.0.4:8080/api/login", host: "jarvis.parkplus.io", referrer: "http://jarvis.parkplus.io/login?redirect=%2F"

I followed offical doc: http://docs.shellhub.io/getting-started/creating-account/

Otavio Salvador
@otavio
@pawanks are you using 0.3.7 release?
pawanks
@pawanks
Yes @otavio, I think the problem is with the api server.
⇨ http server started on [::]:8080
echo: http: panic serving 172.19.0.6:39804: read /run/secrets/api_private_key: is a directory
goroutine 14 [running]:
So I logged into the container; both private and public keys are missing.
root@ip-10-0-1-92:~# docker exec -it bf6b40b4354a sh
/ # ls -l /run/
total 4
drwxr-xr-x 4 root root 4096 Aug 12 13:00 secrets
/ # ls -l /run/secrets/
total 8
drwxr-xr-x 2 root root 4096 Aug 12 10:11 api_private_key
drwxr-xr-x 2 root root 4096 Aug 12 10:11 api_public_key
/ # ls -l /run/secrets/api_private_key/
total 0
/ #
Otavio Salvador
@otavio
@pawanks I pinged @gustavosbarreto so he can look at this with you, please wait a few minutes...
pawanks
@pawanks
Thanks @otavio, I am new to ShellHub, just came across it today :-). Seems like the problem is with providing the secrets to the api docker container. I will wait for @gustavosbarreto.
Luis Gustavo S. Barreto
@gustavosbarreto
@pawanks Did you follow the instructions in the docs exactly?
It seems that you have missed the "Generate keys" step
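The "is a directory" symptom matches standard Docker behavior: when the source of a bind mount does not exist, Docker creates it as an empty directory, which is exactly how the missing key files surfaced as api_private_key and api_public_key directories inside the container. A hypothetical pre-flight check (the file names come from the log above; their location in the current directory is an assumption for the sketch, and the actual fix is the docs' "Generate keys" step):

```shell
# Warn before starting the stack if the key files that would be
# bind-mounted as secrets are absent (./ is an assumed location).
for key in api_private_key api_public_key; do
  if [ ! -f "./$key" ]; then
    echo "missing $key: run the 'Generate keys' step from the docs first"
  fi
done
```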