hertzli
@hertzli

ls -la /host/var/services/homes

lrwxrwxrwx 1 root root 14 Jul 24 06:47 /host/var/services/homes -> /volume1/homes
root@93fdbcd347b3:/#
The NAS runs on a RAID 6 setup
However, there's only one volume group, /volume1
hertzli
@hertzli
So, a container, mystifying_lamport, was created and activated... I suppose that was for a purpose?
@gustavosbarreto ... or was it just an example to be deleted afterwards?
Luis Gustavo S. Barreto
@gustavosbarreto

@gustavosbarreto ... or was it just an example to be deleted afterwards?

With --rm, Docker automatically cleans up the container and removes it when it exits

lrwxrwxrwx 1 root root 14 Jul 24 06:47 homes -> /volume1/homes here is the problem
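The symlink problem can be reproduced without Docker or a Synology box. This is a minimal sketch (all paths under /tmp are made up for illustration): a symlink stored with an absolute target only resolves against the root it is followed from, so inside the agent container homes -> /volume1/homes dangles unless /volume1 is also bind-mounted.

```shell
# Recreate the Synology layout under a scratch directory.
mkdir -p /tmp/linkdemo/volume1/homes
mkdir -p /tmp/linkdemo/host/var/services

# The link records the absolute target /volume1/homes, just like on the NAS.
ln -sfn /volume1/homes /tmp/linkdemo/host/var/services/homes

# Reading the link always works...
readlink /tmp/linkdemo/host/var/services/homes   # prints /volume1/homes

# ...but *following* it resolves against the real root, where /volume1 may
# not exist -- which is exactly what the extra bind mount fixes.
```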
hertzli
@hertzli
DSM has a "terminal" feature, catching return codes & errors. When trying to connect via the hub, this is the output:
time="2020-08-11T20:32:42Z" level=warning msg="exit status 1"
time="2020-08-11T20:32:42Z" level=warning msg="read /dev/ptmx: input/output error"
Luis Gustavo S. Barreto
@gustavosbarreto

Since you need to run the install as root, I used this to install ShellHub:
docker run -d \
--name=shellhub \
--restart=on-failure \
--privileged \
--net=host \
--pid=host \
-v /:/host \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/passwd:/etc/passwd \
-v /etc/group:/etc/group \
-e SERVER_ADDRESS=https://cloud.shellhub.io \
-e PRIVATE_KEY=/host/etc/shellhub.key \
-e TENANT_ID=3b1....myID.....efbf \
shellhubio/agent:v0.3.5

When I then start the container via the Synology package web interface, it flips on - and off right away with this message "Docker-containeren shellhub standsede uventet." [-> "The Docker container shellhub stopped unexpectedly"].

I'm too new to ShellHub to know where to look for a more precise log.

Run this command again without the "-d" for detailed output from docker

@hertzli Append "-v /volume1:/volume1 \" after the last "-v" argument and replace v0.3.5 with v0.3.7

It should work for now until a proper fix is found
hertzli
@hertzli
--rm conflicts with --restart=on-failure. If I run the docker command from the command line, the command keeps running in the foreground. If interrupted, the container disappears if using the --rm option.
Oki - will test
Luis Gustavo S. Barreto
@gustavosbarreto
docker run -d \
--name=shellhub \
--restart=on-failure \
--privileged \
--net=host \
--pid=host \
-v /:/host \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/passwd:/etc/passwd \
-v /etc/group:/etc/group \
-v /volume1:/volume1 \
-e SERVER_ADDRESS=https://cloud.shellhub.io \
-e PRIVATE_KEY=/host/etc/shellhub.key \
-e TENANT_ID=3b1....myID.....efbf \
shellhubio/agent:v0.3.7
ensure that you have removed the older container: docker stop shellhub and docker rm shellhub
Mike
@sixhills
I'm not sure if this is a minor bug or an undocumented restriction, but device names containing upper-case letters don't work. For example, if I rename a device in the web UI from its default MAC address to something more readable, like "Pi41", and attempt to ssh to it using "ssh pi@SSHID", the connection fails with "Invalid session target". Renaming the device to "pi41" works.
Otavio Salvador
@otavio
@sixhills do you mind opening an issue about this?
hertzli
@hertzli
time="2020-08-11T20:50:19Z" level=warning msg="exit status 1"
time="2020-08-11T20:50:19Z" level=warning msg=EOF
@sixhills My device name was changed to HzliSyn01, i.e., with cap letters in it. Will test after renaming
Hmm - "nsenter: stat of /proc/15774/ns/user failed: No such file or directory"
Could the same be the case with usernames with capital letters in them?
The username in question is HzliSysAdm
Mike
@sixhills
hertzli
@hertzli
Anything I can do in the meantime, except sit on my hands?
hertzli
@hertzli
nsenter: stat of /proc/17224/ns/user failed: No such file or directory. True, /proc does not have such a subdir, even if the container is restarted (yes, with new pid..).
Mike
@sixhills
@hertzli I've created a username on a Raspberry Pi containing upper-case letters and ssh user@SSHID works fine, so no problem with mixed-case usernames.
hertzli
@hertzli
Phew... :-)
Definitely a prob with /proc - might very well be some DSM access restriction. I tried to add -v /proc:/proc \ to the install, just in case, but no cigar.
hertzli
@hertzli

So, the --pid=host enables shellhub to see all processes on the host. From the docs:
"By default, all containers have the PID namespace enabled.

PID namespace provides separation of processes. The PID Namespace removes the view of the system processes, and allows process ids to be reused including pid 1.

In certain cases you want your container to share the host’s process namespace, basically allowing processes within the container to see all of the processes on the system. "

However, it seems the host does not know a PID matching that of the shellhub container, hence there is no subdirectory /proc/<what_shellhub_thinks_its_PID_is>. So shellhub cannot stat the /proc/<process_id>/ns/user file.

In one test run, the system sees a shellhub PID of 31823. However, when trying to connect via the shellhub cloud, I get "nsenter: stat of /proc/32187/ns/user failed: No such file or directory". But 31823 is not the same as 32187...
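A small sketch of what nsenter is actually stat-ing (this is standard Linux /proc layout, nothing ShellHub-specific): every live process exposes namespace handles under /proc/<pid>/ns, and nsenter fails exactly like the error above when the PID it was handed does not exist in the /proc it can see.

```shell
# Each process has namespace handle files under /proc/<pid>/ns.
ls /proc/$$/ns

# readlink on one of them shows the namespace identity, e.g. pid:[4026531836].
readlink /proc/self/ns/pid

# With --pid=host the container shares the host's PID namespace, so host
# PIDs are visible inside it; a PID taken from a *different* namespace
# still won't match, and nsenter's stat of /proc/<pid>/ns/user fails.
```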


steschuser
@steschuser

documentation looks broken for registering-device/

curl http://docs.shellhub.io/guides/registering-device/ -I
HTTP/1.1 404 Not Found

Otavio Salvador
@otavio
@steschuser Ohh, sorry for that. Is it possible for you to report this?
steschuser
@steschuser
pawanks
@pawanks

Hi all, I am trying to set up a local ShellHub server; I seem to be getting a 502 from the API server for /api/login.
Logs:
gateway_1 | 139.0.62.69 - - [12/Aug/2020:13:00:35 +0000] "POST /api/login HTTP/1.1" 502 565 "http://jarvis.parkplus.io/login?redirect=%2F" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36"
gateway_1 | 2020/08/12 13:00:35 [error] 11#11: *1 upstream prematurely closed connection while reading response header from upstream, client: 139.0.62.69, server: , request: "POST /api/login HTTP/1.1", upstream: "http://172.19.0.4:8080/api/login", host: "jarvis.parkplus.io", referrer: "http://jarvis.parkplus.io/login?redirect=%2F"

I followed the official doc: http://docs.shellhub.io/getting-started/creating-account/

Otavio Salvador
@otavio
@pawanks are you using 0.3.7 release?
pawanks
@pawanks
yes @otavio, I think the problem is with the API server:
⇨ http server started on [::]:8080
echo: http: panic serving 172.19.0.6:39804: read /run/secrets/api_private_key: is a directory
goroutine 14 [running]:
So I logged in to the container; both private and public keys are missing.
root@ip-10-0-1-92:~# docker exec -it bf6b40b4354a sh
/ # ls -l /run/
total 4
drwxr-xr-x 4 root root 4096 Aug 12 13:00 secrets
/ # ls -l /run/secrets/
total 8
drwxr-xr-x 2 root root 4096 Aug 12 10:11 api_private_key
drwxr-xr-x 2 root root 4096 Aug 12 10:11 api_public_key
/ # ls -l /run/secrets/api_private_key/
total 0
/ #
Otavio Salvador
@otavio
@pawanks I pinged @gustavosbarreto so he can look at this with you; please wait a few minutes...
pawanks
@pawanks
Thanks @otavio, I am new to ShellHub, just came across it today :-). It seems the problem is with mounting the secret into the API Docker container. I will wait for @gustavosbarreto.
Luis Gustavo S. Barreto
@gustavosbarreto
@pawanks Did you follow the instructions in the docs exactly?
It seems that you have missed the "Generate keys" step
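Hedged sketch of the missing step: ./bin/keygen from the ShellHub repo is the supported way to create the key pair, and the point is only that api_private_key and api_public_key must end up as regular key *files* (the panic above came from Docker creating them as directories because the files were absent). An equivalent RSA pair can be produced with openssl:

```shell
# Work in a scratch directory so nothing in the repo is touched.
cd "$(mktemp -d)"

# Generate a 2048-bit RSA private key and derive its public key --
# illustrative stand-ins for what the keygen step produces.
openssl genrsa -out api_private_key 2048
openssl rsa -in api_private_key -pubout -out api_public_key

# Both must exist as regular files, not directories, before docker-compose up.
ls -l api_private_key api_public_key
```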
pawanks
@pawanks
I did. For some reason I had directories in there earlier; I cleaned up, pulled the repo again, and now ./bin/keygen writes the *.key files at
"/". It's fixed now. Thanks
Thanks @otavio @gustavosbarreto . Appreciate the help.
Mike
@sixhills
While connected to a device via ShellHub, running "who", "w", "users" or "last" on the target device doesn't list the ssh session and nothing is written to /var/log/auth.log or "docker logs shellhub" when the connection is made. Is there any way of detecting inbound connections made via the ShellHub agent? [ShellHub is great but I'm a little cautious about invisible sessions!]
Otavio Salvador
@otavio
@sixhills it can certainly be improved. As you likely noticed, the project is new and we are still adding features and smoothing out rough edges to make it more polished.
@sixhills logging is one area we need to improve a lot, and it is likely an easy task for someone interested in starting to contribute. As for registering sessions in who and other host session managers, we need to investigate how it could be done.
@sixhills so, please open issues about those aspects
and if you are in the mood of working on any of those, it'd be awesome.
Mike
@sixhills
Thanks! I have a programming background but I'm really more of a systems integrator. I'd love to help but my skills may be lacking. I'll have a look at the source and see if I can understand it.
Otavio Salvador
@otavio
@sixhills start small; logging might be a nice way to start