    amentee
    @amentee
      # module.client.module.client.module.host.libvirt_domain.domain[0] will be created
      + resource "libvirt_domain" "domain" {
          + arch        = (known after apply)
          + cloudinit   = (known after apply)
          + cpu         = {
              + "mode" = "custom"
            }
          + disk        = [
              + {
                  + block_device = null
                  + file         = null
                  + scsi         = null
                  + url          = null
                  + volume_id    = (known after apply)
                  + wwn          = null
                },
            ]
          + emulator    = (known after apply)
          + fw_cfg_name = "opt/com.coreos/config"
          + id          = (known after apply)
          + machine     = (known after apply)
          + memory      = 1024
          + name        = "client"
          + qemu_agent  = true
          + running     = true
          + vcpu        = 1
    
          + console {
              + source_host    = "127.0.0.1"
              + source_service = "0"
              + target_port    = "0"
              + target_type    = "serial"
              + type           = "pty"
            }
          + console {
              + source_host    = "127.0.0.1"
              + source_service = "0"
              + target_port    = "1"
              + target_type    = "virtio"
              + type           = "pty"
            }
    
          + graphics {
              + autoport       = true
              + listen_address = "0.0.0.0"
              + listen_type    = "address"
              + type           = "spice"
            }
    
          + network_interface {
              + addresses      = (known after apply)
              + bridge         = "br0"
              + hostname       = (known after apply)
              + mac            = (known after apply)
              + network_id     = (known after apply)
              + network_name   = (known after apply)
              + wait_for_lease = true
            }
    
          + xml {}
        }
    egotthold
    @egotthold:matrix.org
    [m]
    Yeah, if I interpret this output correctly, the VMs are not actually created; it fails while downloading the images. You need to use the images key in main.tf to exclude the SLES images.
    ^ @amentee:
    amentee
    @amentee
    I shared only the error lines; there were 6-7 "VM created" messages as well.
    I can see the images under /vmdisks
    egotthold
    @egotthold:matrix.org
    [m]
    Well, can you see the images for the OS versions, or also the images for the hosts?
    egotthold
    @egotthold:matrix.org
    [m]
    I am asking because normally the machine images are not visible before all images are successfully downloaded.
    amentee
    @amentee
    No, I can see only the images for the OS versions.
    So I need to put images = ["sles12sp5o", "sles12sp4o", "sles15sp4o", "sles15sp3o"] in main.tf?
    Shall I put this under the base module in main.tf?
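    For reference, a minimal sketch of that placement, assuming the sumaform layout used later in this conversation (the module path and the images list on the base module are taken from the discussion; treat it as illustrative, not authoritative):

    ```
    # main.tf -- restrict which distribution images the base module downloads
    module "base" {
      source = "./modules/base"

      # only the listed images are fetched; everything else is skipped
      images = ["sles12sp5o", "sles12sp4o", "sles15sp4o", "sles15sp3o"]
    }
    ```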
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    [m]
    @amentee: There are several issues in your main.tf:
    • you should leave the cc_username and cc_password values as "" unless you have SUSE Customer Center credentials.
    • you don't need the centos7 image. Just the opensuse154o image should be enough: use it for your minion
    • you can remove the client machine: since we are removing the traditional stack, it won't be useful to you for development
    • as mentioned earlier, use uyuni-released, uyuni-nightly or uyuni-master
    • Using the br0 bridge means that you need to set up DNS and DHCP correctly in your network. DNS is crucial for Uyuni to operate correctly. You can handle this at the libvirt network level if needed. I can show you my libvirt network definition to give more details if you want
    @amentee: you can remove the image configuration on the server, but I would leave image = "opensuse154o" on the minion. And don't remove the images line in the base if you don't want to download all distro images we have 😉
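    Putting those suggestions together, a main.tf could look roughly like the sketch below. Image names, product versions and module paths come from the messages above; everything else (argument placement, omitted settings) is an assumption:

    ```
    module "base" {
      source = "./modules/base"

      # leave empty unless you have SUSE Customer Center credentials
      cc_username = ""
      cc_password = ""

      # only download the distro image actually needed
      images = ["opensuse154o"]
    }

    module "server" {
      source             = "./modules/server"
      base_configuration = module.base.configuration

      name            = "server"
      product_version = "uyuni-master"   # or uyuni-released / uyuni-nightly
    }

    # no "client" module: the traditional stack is being removed

    module "minion" {
      source               = "./modules/minion"
      base_configuration   = module.base.configuration
      server_configuration = module.server.configuration

      name  = "minion"
      image = "opensuse154o"
    }
    ```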
    amentee
    @amentee
    @cbosdonnat:matrix.org: please share your libvirt network configuration, and your main.tf as well if possible
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    [m]
    Here it is stripped to the minimum viable parts:
    <network>
      <name>default</name>
      <forward mode='nat' />
      <bridge name='virbr0' stp='on' delay='0'/>
      <domain name='mgr.lab' localOnly='yes'/>
      <ip address='192.168.122.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.122.2' end='192.168.122.254'/>
          <host mac='2A:C3:A7:A6:01:00' name='dev-srv' ip='192.168.122.110'/>
          <host mac='2A:C3:A7:A6:01:01' name='dev-min-kvm' ip='192.168.122.111'/>
          <host mac='2A:C3:A7:A6:01:02' name='dev-cli-kvm' ip='192.168.122.112'/>
        </dhcp>
      </ip>
    </network>
    With this, you need to set the MAC addresses in main.tf and add entries in your /etc/hosts for your VMs. The libvirt dnsmasq only answers the VMs, not the virtualization host.
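    A sketch of how those two pieces fit together, assuming the libvirt backend accepts a per-host mac key in provider_settings (MACs, names and addresses are the ones from the network definition above):

    ```
    # main.tf -- pin the server to its DHCP reservation
    module "server" {
      source             = "./modules/server"
      base_configuration = module.base.configuration
      name               = "dev-srv"

      provider_settings = {
        mac = "2A:C3:A7:A6:01:00"   # matches the <host> entry for dev-srv
      }
    }
    ```

    ```
    # /etc/hosts on the virtualization host -- libvirt's dnsmasq does not answer it
    192.168.122.110  dev-srv.mgr.lab      dev-srv
    192.168.122.111  dev-min-kvm.mgr.lab  dev-min-kvm
    192.168.122.112  dev-cli-kvm.mgr.lab  dev-cli-kvm
    ```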
    amentee
    @amentee
    @cbosdonnat:matrix.org -- Below is my modified main.tf
    terraform {
      required_version = "1.0.10"
      required_providers {
        libvirt = {
          source  = "dmacvicar/libvirt"
          version = "0.6.3"
        }
      }
    }
    
    provider "libvirt" {
    }
    
    
    
    module "base" {
      source = "./modules/base"
    
      cc_username = ""
      cc_password = ""
    
    
      provider_settings = {
        bridge = "br0"
        pool = "vmdisks"
        images = ["centos7"]
        domain = "suse.lab"
        ssh_key_path = "/home/sachin/private"
      }
    }
    
    module "server" {
      source = "./modules/server"
      base_configuration = module.base.configuration
    
      name = "server"
      product_version = "uyuni-released"
    
    }
    
    
    
    module "minion" {
      source = "./modules/minion"
      base_configuration = module.base.configuration
    
      name = "minion"
      image = "opensuse154o"
      server_configuration = module.server.configuration
    }
    Below is the dump of the libvirt network:
    uyuni:~/sumaform # virsh net-dumpxml default
    <network>
      <name>default</name>
      <uuid>d3e5a31d-9f9d-49f2-9989-7356ba2b3d66</uuid>
      <forward mode='nat'>
        <nat>
          <port start='1024' end='65535'/>
        </nat>
      </forward>
      <bridge name='virbr0' stp='on' delay='0'/>
      <mac address='52:54:00:60:ce:c2'/>
      <domain name='suse.lab' localOnly='yes'/>
      <ip address='192.168.122.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.122.10' end='192.168.122.254'/>
          <host mac='52:54:00:09:af:bf' ip='192.168.122.2'/>
          <host mac='52:54:00:76:78:dc' ip='192.168.122.3'/>
          <host mac='52:54:00:90:15:99' ip='192.168.122.4'/>
        </dhcp>
      </ip>
    </network>
    
    uyuni:~/sumaform #
    Below is my /etc/hosts file:
    uyuni:~/sumaform # cat /etc/hosts
    #
    # hosts         This file describes a number of hostname-to-address
    #               mappings for the TCP/IP subsystem.  It is mostly
    #               used at boot time, when no name servers are running.
    #               On small systems, this file can be used instead of a
    #               "named" name server.
    # Syntax:
    #
    # IP-Address  Full-Qualified-Hostname  Short-Hostname
    #
    127.0.0.1       localhost
    # special IPv6 addresses
    ::1             localhost ipv6-localhost ipv6-loopback
    fe00::0         ipv6-localnet
    ff00::0         ipv6-mcastprefix
    ff02::1         ipv6-allnodes
    ff02::2         ipv6-allrouters
    ff02::3         ipv6-allhosts
    
    # Pre-set matadata server shortcut to speed up access image build overlay
    169.254.169.254 metadata.google.internal metadata.google.internal
    0.0.0.0 test
    
    192.168.122.2 uyuniserver.suse.lab
    192.168.122.3 leap154.suse.lab
    192.168.122.4 centos7.suse.lab
    uyuni:~/sumaform #
    Please let me know if the configs are correct or if something else needs to be modified.
    amentee
    @amentee
    I ran terraform apply again and now it's showing the error below:
    module.server.module.server.module.host.libvirt_domain.domain[0]: Creating...
    ╷
    │ Error: Error defining libvirt domain: virError(Code=8, Domain=10, Message='invalid argument: could not get preferred machine for /usr/bin/qemu-system-x86_64 type=kvm')
    │
    │   with module.server.module.server.module.host.libvirt_domain.domain[0],
    │   on backend_modules/libvirt/host/main.tf line 84, in resource "libvirt_domain" "domain":
    │   84: resource "libvirt_domain" "domain" {
    │
    ╵
    ╷
    │ Error: Error defining libvirt domain: virError(Code=8, Domain=10, Message='invalid argument: could not get preferred machine for /usr/bin/qemu-system-x86_64 type=kvm')
    │
    │   with module.minion.module.minion.module.host.libvirt_domain.domain[0],
    │   on backend_modules/libvirt/host/main.tf line 84, in resource "libvirt_domain" "domain":
    │   84: resource "libvirt_domain" "domain" {
    │
    ╵
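    A note on this error: libvirt reports "could not get preferred machine ... type=kvm" when it cannot map the KVM virtualization type to a machine type for the given QEMU binary, typically because its capabilities cache is stale after a QEMU update or because the native x86 emulator package is missing. A sketch of checks one might run (package names are openSUSE-specific assumptions):

    ```
    # is a kvm machine type advertised for qemu-system-x86_64?
    virsh domcapabilities --virttype kvm --arch x86_64 | head

    # restarting libvirtd regenerates the cached QEMU capabilities
    systemctl restart libvirtd

    # make sure the full x86 emulator is installed (assumed package names)
    zypper install qemu-kvm qemu-x86
    ```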
    amentee
    @amentee
    So here is the update: I fixed the above error, and I can see the minion and server running:
    uyuni:~/sumaform # virsh list
     Id   Name     State
    ------------------------
     3    minion   running
     4    server   running
    
    uyuni:~/sumaform #
    amentee
    @amentee
    Now I am facing a different error:
    ╷
    │ Error: Error: couldn't retrieve IP address of domain.Please check following:
    │ 1) is the domain running proplerly?
    │ 2) has the network interface an IP address?
    │ 3) Networking issues on your libvirt setup?
    │  4) is DHCP enabled on this Domain's network?
    │ 5) if you use bridge network, the domain should have the pkg qemu-agent installed
    │ IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup
    │  timeout while waiting for state to become 'all-addresses-obtained' (last state: 'waiting-addresses', timeout: 5m0s)
    │
    │   with module.server.module.server.module.host.libvirt_domain.domain[0],
    │   on backend_modules/libvirt/host/main.tf line 84, in resource "libvirt_domain" "domain":
    │   84: resource "libvirt_domain" "domain" {
    │
    ╵
    ╷
    │ Error: Error: couldn't retrieve IP address of domain.Please check following:
    │ 1) is the domain running proplerly?
    │ 2) has the network interface an IP address?
    │ 3) Networking issues on your libvirt setup?
    │  4) is DHCP enabled on this Domain's network?
    │ 5) if you use bridge network, the domain should have the pkg qemu-agent installed
    │ IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup
    │  timeout while waiting for state to become 'all-addresses-obtained' (last state: 'waiting-addresses', timeout: 5m0s)
    │
    │   with module.minion.module.minion.module.host.libvirt_domain.domain[0],
    │   on backend_modules/libvirt/host/main.tf line 84, in resource "libvirt_domain" "domain":
    │   84: resource "libvirt_domain" "domain" {
    │
    ╵
    Below is my current configuration:
    uyuni:~/sumaform # cat /etc/NetworkManager/NetworkManager.conf
    [main]
    plugins=keyfile
    dhcp=dhclient
    
    [connectivity]
    uri=http://conncheck.opensuse.org
    
    uyuni:~/sumaform #
    uyuni:~/sumaform # cat /etc/NetworkManager/conf.d/localdns.conf
    [main]
    plugins=keyfile
    dns=dnsmasq
    uyuni:~/sumaform #
    uyuni:~/sumaform # cat /etc/NetworkManager/dnsmasq.d/libvirt_dnsmasq.conf
    server=/suse.lab/192.168.122.1
    uyuni:~/sumaform #
    uyuni:~/sumaform # systemctl status dnsmasq.service
    × dnsmasq.service - DNS caching server.
         Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; disabled; vendor preset: disabled)
         Active: failed (Result: exit-code) since Wed 2022-11-30 08:19:58 UTC; 31min ago
        Process: 15951 ExecStartPre=/usr/sbin/dnsmasq --test (code=exited, status=0/SUCCESS)
        Process: 16011 ExecStart=/usr/sbin/dnsmasq --log-async --enable-dbus --keep-in-foreground (code=exited, status=2)
       Main PID: 16011 (code=exited, status=2)
    
    Nov 30 08:19:57 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: Starting DNS caching server....
    Nov 30 08:19:57 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[15951]: dnsmasq: syntax check OK.
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[16011]: dnsmasq: failed to create listening socket for port 53: Address already in use
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[16011]: failed to create listening socket for port 53: Address already in use
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[16011]: FAILED to start up
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: dnsmasq.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: dnsmasq.service: Failed with result 'exit-code'.
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: Failed to start DNS caching server..
    uyuni:~/sumaform #
    uyuni:~/sumaform #
    uyuni:~/sumaform # systemctl status NetworkManager
    ● NetworkManager.service - Network Manager
         Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: disabled)
        Drop-In: /usr/lib/systemd/system/NetworkManager.service.d
                 └─NetworkManager-ovs.conf
         Active: active (running) since Wed 2022-11-30 08:19:58 UTC; 31min ago
           Docs: man:NetworkManager(8)
       Main PID: 16197 (NetworkManager)
          Tasks: 4 (limit: 4915)
         CGroup: /system.slice/NetworkManager.service
                 ├─ 15946 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --pid-file=/run/NetworkManager/dnsmasq.pid --listen-address=127.0>
                 └─ 16197 /usr/sbin/NetworkManager --no-daemon
    
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4427] device (vnet5): Activation: starting connection 'vnet5>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4430] device (vnet5): state change: disconnected -> prepare >
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4433] device (vnet5): state change: prepare -> config (reaso>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4435] device (vnet5): state change: config -> ip-config (rea>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4436] device (br0): bridge port vnet5 was attached
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4437] device (vnet5): Activation: connection 'vnet5' enslave>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4438] device (vnet5): state change: ip-config -> ip-check (r>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4515] device (vnet5): state change: ip-check -> secondaries >
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4521] device (vnet5): state change: secondaries -> activated>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4534] device (vnet5): Activation: successful, device activat>
    uyuni:~/sumaform #
    My dnsmasq service is failing because two dnsmasq instances are already listening on port 53.
    I even tried to stop and start it, but it still fails.
    amentee
    @amentee
    uyuni:~/sumaform # netstat -plunt | grep -i dns
    tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN      15946/dnsmasq
    tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1595/dnsmasq
    udp        0      0 127.0.0.1:53            0.0.0.0:*                           15946/dnsmasq
    udp        0      0 192.168.122.1:53        0.0.0.0:*                           1595/dnsmasq
    udp        0      0 0.0.0.0:67              0.0.0.0:*                           1595/dnsmasq
    uyuni:~/sumaform #
    any ideas how to fix this?
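    The netstat output already shows who owns port 53: PID 15946 is the dnsmasq spawned by NetworkManager (dns=dnsmasq, bound to 127.0.0.1) and PID 1595 is libvirt's dnsmasq for the default network (bound to 192.168.122.1). The standalone dnsmasq.service tries to bind the same port and is not needed for this setup; a sketch of how one might verify that and leave it off:

    ```
    # confirm which processes hold port 53 and who started them
    ss -lptn 'sport = :53'
    ps -o pid,ppid,cmd -p 1595,15946

    # the system-wide caching dnsmasq is redundant here; keep it disabled
    systemctl disable --now dnsmasq.service
    ```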
    amentee
    @amentee
    Below is the output of ip addr show:
    uyuni:~/sumaform # ip addr show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 42:01:0a:be:00:04 brd ff:ff:ff:ff:ff:ff
        altname enp0s4
        altname ens4
        inet 10.190.0.4/32 scope global dynamic eth0
           valid_lft 2662sec preferred_lft 2662sec
        inet6 fe80::e659:d850:f8ad:bf2f/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
        link/ether 42:01:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
        altname enp0s5
        altname ens5
    4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 42:01:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::4001:aff:fe00:2/64 scope link
           valid_lft forever preferred_lft forever
    5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether 52:54:00:60:ce:c2 brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global noprefixroute virbr0
           valid_lft forever preferred_lft forever
    14: vnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
        link/ether fe:54:00:52:4d:2b brd ff:ff:ff:ff:ff:ff
        inet6 fe80::fc54:ff:fe52:4d2b/64 scope link
           valid_lft forever preferred_lft forever
    15: vnet9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
        link/ether fe:54:00:7e:ef:99 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::fc54:ff:fe7e:ef99/64 scope link
           valid_lft forever preferred_lft forever
    uyuni:~/sumaform #
    virbr0 is showing as down.
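    virbr0 staying DOWN is consistent with no guest being attached to the libvirt default network: the vnet interfaces above are enslaved to br0 instead. A quick way to confirm which network or bridge each domain actually uses:

    ```
    # interface type, source network/bridge and MAC per running domain
    virsh domiflist server
    virsh domiflist minion

    # the default network is defined and active, but carries no guests
    virsh net-info default
    ```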
    amentee
    @amentee
    @cbosdonnat:matrix.org: tried everything but no luck. Still the same error as above.
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    [m]
    Your main.tf still has the line bridge = "br0", so your VMs don't use the libvirt network you have defined. You need to remove that line; the default is to use the libvirt default network. Of course you will need to rerun terraform apply, and that may destroy what you already have.
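    In main.tf terms that means dropping the bridge key from provider_settings so the hosts fall back to libvirt's default NAT network; a sketch based on the configuration pasted earlier:

    ```
    module "base" {
      source = "./modules/base"

      cc_username = ""
      cc_password = ""

      provider_settings = {
        # no "bridge" key: VMs attach to the libvirt "default" network (virbr0)
        pool         = "vmdisks"
        domain       = "suse.lab"
        ssh_key_path = "/home/sachin/private"
      }
    }
    ```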
    amentee
    @amentee
    @cbosdo: thanks for the help. It worked after removing br0 from main.tf. Now it's throwing avahi error messages. Do I need to install the avahi package?
    ╷
    │ Error: remote-exec provisioner error
    │
    │   with module.minion.module.minion.module.host.null_resource.provisioning[0],
    │   on backend_modules/libvirt/host/main.tf line 251, in resource "null_resource" "provisioning":
    │  251:   provisioner "remote-exec" {
    │
    │ error executing "/tmp/terraform_1286222172.sh": Process exited with status 1
    ╵
    uyuni:~/sumaform

    The four messages below appear as errors while running terraform apply:

    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Changes:
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):           ID: avahi_pkg
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Function: pkg.latest
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):       Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Comment: An exception occurred in this state: Traceback (most recent call last):
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):           ID: avahi_change_domain
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Function: file.replace
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):         Name: /etc/avahi/avahi-daemon.conf
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):       Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Comment: /etc/avahi/avahi-daemon.conf: file not found
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Started: 09:36:34.797556
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Duration: 4.735 ms
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Changes:
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):           ID: avahi_restrict_interfaces
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Function: file.replace
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):         Name: /etc/avahi/avahi-daemon.conf
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):       Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Comment: /etc/avahi/avahi-daemon.conf: file not found
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Started: 09:36:34.802938
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Duration: 3.768 ms
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Changes:
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------

    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ID: avahi_enable_service
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Function: service.running
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Name: avahi-daemon
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Comment: The named service avahi-daemon is not available
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): S
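    These states expect the avahi package and its daemon to be present on the minion; a sketch of installing it there before re-running the provisioning, assuming root SSH access to the (openSUSE Leap 15.4) minion:

    ```
    # on the minion VM
    zypper --non-interactive install avahi
    systemctl enable --now avahi-daemon

    # back on the virtualization host
    terraform apply
    ```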
    amentee
    @amentee
    After installing the avahi package on the minion, I re-ran terraform apply.
    Now the errors are reduced to one:
    ╷
    │ Error: remote-exec provisioner error
    │
    │   with module.minion.module.minion.module.host.null_resource.provisioning[0],
    │   on backend_modules/libvirt/host/main.tf line 251, in resource "null_resource" "provisioning":
    │  251:   provisioner "remote-exec" {
    │
    │ error executing "/tmp/terraform_1041118824.sh": Process exited with status 1
    ╵