Jean Lucas
@jeanlst
I was looking at the smarttools issue, has anyone been able to make a role for it? davestephens/ansible-nas#2
I have my own domain and I'm using protonmail for mail with that domain
allthestairs
@allthestairs
This looked like an interesting option: https://hub.docker.com/r/analogj/scrutiny
allthestairs
@allthestairs
I integrated https://hub.docker.com/r/analogj/scrutiny into the stats role (which I also added) so it runs alongside grafana
it can handle all sorts of smartd notification tools using https://containrrr.dev/shoutrrr/services/overview/
allthestairs
@allthestairs
If anyone wants to try it you can find my branch here: https://github.com/allthestairs/ansible-nas/tree/scrutiny It does include a commit that transitions the stats task to a role
it should be configurable to allow notifications without manually editing the template file
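A role task for it might look roughly like the sketch below. This is an assumption-laden illustration, not the contents of the linked branch: container name, port, and device list are guesses you would adapt to your own disks.

```yaml
# Hypothetical sketch of a Scrutiny container task; verify ports/devices
# against the image's own documentation before using.
- name: Scrutiny Docker Container
  community.docker.docker_container:
    name: scrutiny
    image: analogj/scrutiny
    ports:
      - "8080:8080"
    volumes:
      - /run/udev:/run/udev:ro
    capabilities:
      - SYS_RAWIO           # needed for S.M.A.R.T. queries against raw disks
    devices:
      - /dev/sda            # expose each disk you want monitored
    restart_policy: unless-stopped
```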
allthestairs
@allthestairs

I think the migration to roles hasn't been finished. There are open PRs for Calibre (davestephens/ansible-nas#415) and a bunch of others, but no commits have been made since Apr 2.

I went through and created a branch for each remaining task that replaces the task with a role, and then created another streamlined single-commit branch that has everything as a role, with not a single bare task remaining: https://github.com/allthestairs/ansible-nas/tree/all_roles

Andrew DiLosa
@adilosa

pulled the commit, seemed to install wireguard fine. ended up moving to wg-quick settings from systemd as part of debugging. turned out there was a bug in the Unifi controller that made port forwarding settings silently not take effect, and I ended up in a whole Unifi upgrade hell where DHCP was hosed all day.

anyways... tl;dr once I rebuilt my network I ended up configuring wireguard manually on the host. it seems to give me access to all my containers that are locally accessible anyway, without mucking around with docker networking.

allthestairs
@allthestairs
I seem to have run into an issue with using it and docker port forwarding but I probably fucked something up manually while experimenting.
It definitely needs more testing before I'd suggest anyone do more than experiment with it.
Also I realized my routing manipulation in the containers isn't a real solution since it doesn't survive a container restart.
I'm not sure if Docker actually supports a way to do what I'm doing, but the current ansible plugin definitely seems not to
Well, actually let's say I'm pretty sure I could make it work if I was willing to go into the docker_container setup for every container and modify it there, but I don't really want to make such a sweeping set of changes just to enable VPN
allthestairs
@allthestairs
I think if you added it at container creation time with network_mode as <container_name>:ansible_wireguard it would actually work but I'd need to add a jinja template conditional to every task to make that work
I suppose one could write a wrapper task for ansible-nas in general that replaces docker_container tasks in all the roles that you could use to automatically handle things like traefik labels and vpn routing in some sort of bizarro-inheritance but that would be a big change to the overall project
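The network_mode idea above might be sketched roughly like this. This is a hypothetical example, not the actual project code: the flag name `example_app_use_vpn` and the app details are illustrative, and it assumes a running wireguard container named `ansible_wireguard`.

```yaml
# Hedged sketch: route an app container through the wireguard container's
# network namespace when a (hypothetical) per-app flag is set.
- name: Example App Docker Container
  community.docker.docker_container:
    name: example_app
    image: example/app
    network_mode: "{{ 'container:ansible_wireguard' if example_app_use_vpn else 'bridge' }}"
    restart_policy: unless-stopped
```

With `network_mode: container:<name>` the app shares the wireguard container's network stack, so published ports would have to move to the wireguard container itself.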
Jean Lucas
@jeanlst
Is davestephens still maintaining the project or has he set it aside?
georgejung
@georgejung
hey guys, I'm running Home Assistant on my ansible-nas and my traefik reverse proxy is now throwing 400 errors. Apparently this is an HA thing, as my other services behind traefik are working. Anyone know what entries we need in HA for a traefik proxy to work? FWIW I'm using duckdns...
georgejung
@georgejung

so I had to modify my HA config file to specify the traefik IP for my external access to work with my reverse proxy, as of the July HA release. To get it I went into HTTP services and then Home Assistant and got the IP there that traefik uses. That's the one I put in my config file:

http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 172.30.1.1

https://www.reddit.com/r/homeassistant/comments/og1hao/400_bad_request_error_behind_nginx_proxy_manager/
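Since container IPs can change across restarts, it may be safer to trust the whole Docker network rather than a single address. The subnet below is an assumption — check yours with `docker network inspect`:

```yaml
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 172.30.0.0/16   # whole docker network, survives traefik IP changes
```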

Mindbuilder1
@Mindbuilder1
hello, I have a problem with the installation. At the point of starting ansible-nas I get this error: "Failed to connect to the host via ssh: ssh: Could not resolve hostname ansible-nas: Temporary failure in name resolution"
grafik.png
basavarajbhavi
@basavarajbhavi:matrix.org
[m]
hi, I am running an ansible-playbook for creating groups and subgroups. I am new to Ansible; can anyone help me with it? It is giving me an error here:
task:
  - name: "Create GitLab Group"
    ^ here

this is my playbook file:

- hosts: all
  tasks:

    - name: "Delete GitLab Group"
      community.general.gitlab_group:
        api_url: http://localhost:8080/
        api_token: "{{CUivwY2io91d-cFjyvAt}}"
        validate_certs: False
        name: my_first_group
        state: absent

    - name: "Create GitLab Group"
      community.general.gitlab_group:
        api_url: http://localhost:8080/
        api_token: "{{CUivwY2io91d-cFjyvAt}}"
        validate_certs: True
        api_username: root
        api_password: "password@123"
        name: my_first_group
        path: my_first_group
        state: present

    # The group will be created at https://gitlab.dj-wasabi.local/super_parent/parent/my_first_group

    - name: "Create GitLab SubGroup"
      community.general.gitlab_group:
        api_url: http://localhost.com/
        validate_certs: True
        api_username: root
        api_password: "password@123"
        name: my_first_subgroup
        path: my_first_subgroup
        state: present
        parent: "super_parent/parent"
zorkol
@zorkol
hey all, i am VERY new to all this can i ask a question here?
jniens1979
@jniens1979
Hi all, I recently installed ansible nas on my supermicro SYS-5028D-TN4T server. Everything works properly, thanks!
I would like to expand the installation with an IP camera application (with ONVIF support), and it would be nice if this could be added to ansible-nas.
Luca Candela
@CaliLuke
hey y'all, I'm struggling to install the script on ZFS and I was wondering if anyone else has encountered this so I don't have to bug Dave unnecessarily
I described the issue pretty clearly there if anyone can throw me a hand I'll be very grateful
Luca Candela
@CaliLuke
ok I think I got it
the script installed docker with the overlay option, then it couldn't run it
and for some reason it wouldn't overwrite the options when I replaced the storage driver with ZFS
anyways, I edited /etc/docker/daemon.json manually and now docker runs
let's see if I can re-run the ansible script to set up the server now
crossing fingers!
nope, still doesn't work
damn it
open to suggestions :)
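For reference, the `/etc/docker/daemon.json` edit mentioned above typically amounts to setting the storage driver explicitly (this is the documented Docker daemon option; your daemon.json may contain other keys that should be preserved):

```json
{
  "storage-driver": "zfs"
}
```

Docker needs a full restart (`systemctl restart docker`) after changing this, and the zfs driver requires `/var/lib/docker` to live on a ZFS dataset.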
georgejung
@georgejung
Had a few problems with Transmission after a recent reboot. They updated the image, so using latest caused an issue with NordVPN that was throwing DNS resolution errors. I changed the image to 3.7.1 instead of latest and it's working again. When I have more time I'll try to dig into it. Also my Flood UI stopped working; turns out I needed to update my env variable to flood-for-transmission.
jakobjs
@jakobjs_twitter
First time user, currently trying to get things up-n-running. Just wanted to say thanks for spending your time on this project, it is awesome! :D
Jon Gibbins
@dotjay
Hello! I was wondering whether it's possible to back up two computers to Time Machine with Ansible-NAS. I have a single instance of Time Machine running in Docker, but I'm wondering if I'd need to run a second instance to back up a second computer? Not sure it'd play nice with me backing up two computers to the one instance.
fofx
@fofx
Hi there. Hope someone can help me out here. I've upgraded my zfs pool and migrated my original pool to the new one. I would like my Ansible-NAS instance to start referencing the new pool, and I figured the easiest way was to rename the new pool to the original pool's name. I can't run zpool export on my original pool because it is currently in use. How can I "shut down" all of the Ansible-NAS services so they are no longer using the pool? Am I going about this in a way that makes sense, or is there a better solution?
allthestairs
@allthestairs
You can stop all the docker containers if that is what you mean by services.
If the docker daemon itself is accessing the pool then you'd need to stop the docker service itself.
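Concretely, stopping everything that might hold the pool open can look like this (a sketch for systemd-based hosts; adjust to your setup):

```shell
# Stop all running containers, then the docker daemon itself.
# docker.socket must also be stopped or socket activation restarts dockerd.
sudo docker stop $(sudo docker ps -q)
sudo systemctl stop docker docker.socket
```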
fofx
@fofx

thanks @allthestairs that did the trick. I should have thought of that earlier. However, it appears there are missing datasets in my new pool even though I see all of the files there

TASK [Portainer Docker Container] *
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error removing container 0ec0a4054a8dcd44c8e93ee32ba5ed8a86f0bc0cf9605ea0138003240b617ac8: 500 Server Error: Internal Server Error (\"b'{\"message\":\"container 0ec0a4054a8dcd44c8e93ee32ba5ed8a86f0bc0cf9605ea0138003240b617ac8: driver \\\"zfs\\\" failed to remove root filesystem: exit status 1: \\\"/sbin/zfs fs destroy -r pool1/ce0be2ade0ab135fe5de3499905a265f2cb01ec8394d43fc622176ceff58464c\\\" => cannot open \'pool1/ce0be2ade0ab135fe5de3499905a265f2cb01ec8394d43fc622176ceff58464c\': dataset does not exist\\n\"}'\")"}

I'm not sure why the dataset is missing. Here is the command I used to send a snapshot from pool1 to pool2:
sudo zfs send -R pool1@now | zfs recv -F pool2
allthestairs
@allthestairs
Sorry, you're beyond me, my setup uses btrfs.
fofx
@fofx
I reattempted migrating my pool. This time I made sure to use the -r option when creating the snapshot, which included all of the datasets. I then just updated my all.yml to use the new zfs pool mount point, and everything is working as expected.
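For anyone following along, the fix amounts to making the snapshot itself recursive before sending it (pool and snapshot names here are illustrative, not the originals):

```shell
# -r snapshots every child dataset; -R on send then replicates them all,
# so the receiving pool ends up with the full dataset hierarchy.
sudo zfs snapshot -r pool1@migrate
sudo zfs send -R pool1@migrate | sudo zfs recv -F pool2
```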
Benedikt Strasser
@Shoggomo
grafik.png
Hello! I'm currently trying out ansible-nas in a fresh Ubuntu 20.04 VM and somehow my networking broke, showing "network unreachable" when pinging 8.8.8.8. I solved it by reinstalling Ubuntu, but I wonder what caused it. It was a fresh system with only ansible-nas installed. Here is my nas.yml.