    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    [m]
    With this, you need to set the MAC addresses in the main.tf and add entries in your /etc/hosts for your VMs. The libvirt dnsmasq only answers queries from the VMs, not from the virtualization host.
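    As a hedged alternative sketch of the same idea (sumaform's own way is to pin the MACs in main.tf as described above), the static leases can also be declared on the libvirt network directly with virsh, and the names mirrored into the host's /etc/hosts; the MAC, IP and hostname below are placeholders:

    virsh net-update default add ip-dhcp-host \
      "<host mac='52:54:00:09:af:bf' ip='192.168.122.2'/>" --live --config
    echo "192.168.122.2 uyuniserver.suse.lab uyuniserver" >> /etc/hosts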
    amentee
    @amentee
    @cbosdonnat:matrix.org -- Below is my modified main.tf
    terraform {
      required_version = "1.0.10"
      required_providers {
        libvirt = {
          source  = "dmacvicar/libvirt"
          version = "0.6.3"
        }
      }
    }
    
    provider "libvirt" {
    }
    
    
    
    module "base" {
      source = "./modules/base"
    
      cc_username = ""
      cc_password = ""
    
    
      provider_settings = {
        bridge = "br0"
        pool = "vmdisks"
        images = ["centos7"]
        domain = "suse.lab"
        ssh_key_path = "/home/sachin/private"
      }
    }
    
    module "server" {
      source = "./modules/server"
      base_configuration = module.base.configuration
    
      name = "server"
      product_version = "uyuni-released"
    
    }
    
    
    
    module "minion" {
      source = "./modules/minion"
      base_configuration = module.base.configuration
    
      name = "minion"
      image = "opensuse154o"
      server_configuration = module.server.configuration
    }
    Below is the dump of the libvirt network:
    uyuni:~/sumaform # virsh net-dumpxml default
    <network>
      <name>default</name>
      <uuid>d3e5a31d-9f9d-49f2-9989-7356ba2b3d66</uuid>
      <forward mode='nat'>
        <nat>
          <port start='1024' end='65535'/>
        </nat>
      </forward>
      <bridge name='virbr0' stp='on' delay='0'/>
      <mac address='52:54:00:60:ce:c2'/>
      <domain name='suse.lab' localOnly='yes'/>
      <ip address='192.168.122.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.122.10' end='192.168.122.254'/>
          <host mac='52:54:00:09:af:bf' ip='192.168.122.2'/>
          <host mac='52:54:00:76:78:dc' ip='192.168.122.3'/>
          <host mac='52:54:00:90:15:99' ip='192.168.122.4'/>
        </dhcp>
      </ip>
    </network>
    
    uyuni:~/sumaform #
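    Once the guests are up, a quick way to check whether they actually obtained leases from this network is libvirt's own lease list (generic virsh command, nothing sumaform-specific):

    virsh net-dhcp-leases default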
    Below is my /etc/hosts file:
    uyuni:~/sumaform # cat /etc/hosts
    #
    # hosts         This file describes a number of hostname-to-address
    #               mappings for the TCP/IP subsystem.  It is mostly
    #               used at boot time, when no name servers are running.
    #               On small systems, this file can be used instead of a
    #               "named" name server.
    # Syntax:
    #
    # IP-Address  Full-Qualified-Hostname  Short-Hostname
    #
    127.0.0.1       localhost
    # special IPv6 addresses
    ::1             localhost ipv6-localhost ipv6-loopback
    fe00::0         ipv6-localnet
    ff00::0         ipv6-mcastprefix
    ff02::1         ipv6-allnodes
    ff02::2         ipv6-allrouters
    ff02::3         ipv6-allhosts
    
    # Pre-set matadata server shortcut to speed up access image build overlay
    169.254.169.254 metadata.google.internal metadata.google.internal
    0.0.0.0 test
    
    192.168.122.2 uyuniserver.suse.lab
    192.168.122.3 leap154.suse.lab
    192.168.122.4 centos7.suse.lab
    uyuni:~/sumaform #
    Please let me know if the configs are correct or whether something else needs to be modified.
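    As a quick sanity check before re-applying, the configuration can be validated locally with generic Terraform commands (nothing sumaform-specific assumed here):

    terraform fmt -check   # catch formatting and obvious syntax slips
    terraform validate     # schema-level validation of the configuration
    terraform plan         # preview what would be created or changed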
    amentee
    @amentee
    I ran terraform apply again and now it's showing the error below:
    module.server.module.server.module.host.libvirt_domain.domain[0]: Creating...
    ╷
    │ Error: Error defining libvirt domain: virError(Code=8, Domain=10, Message='invalid argument: could not get preferred machine for /usr/bin/qemu-system-x86_64 type=kvm')
    │
    │   with module.server.module.server.module.host.libvirt_domain.domain[0],
    │   on backend_modules/libvirt/host/main.tf line 84, in resource "libvirt_domain" "domain":
    │   84: resource "libvirt_domain" "domain" {
    │
    ╵
    ╷
    │ Error: Error defining libvirt domain: virError(Code=8, Domain=10, Message='invalid argument: could not get preferred machine for /usr/bin/qemu-system-x86_64 type=kvm')
    │
    │   with module.minion.module.minion.module.host.libvirt_domain.domain[0],
    │   on backend_modules/libvirt/host/main.tf line 84, in resource "libvirt_domain" "domain":
    │   84: resource "libvirt_domain" "domain" {
    │
    ╵
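    A commonly suggested remedy for this particular libvirt error (hedged: the next message only says it was fixed, not how) is to clear libvirt's cached QEMU capabilities and restart the daemon so it re-probes the emulator:

    rm -f /var/cache/libvirt/qemu/capabilities/*.xml
    systemctl restart libvirtd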
    amentee
    @amentee
    So here is the update: I fixed the above error.
    I can see the minion and server running:
    uyuni:~/sumaform # virsh list
     Id   Name     State
    ------------------------
     3    minion   running
     4    server   running
    
    uyuni:~/sumaform #
    amentee
    @amentee
    Now I am facing a different error:
    ╷
    │ Error: Error: couldn't retrieve IP address of domain.Please check following:
    │ 1) is the domain running proplerly?
    │ 2) has the network interface an IP address?
    │ 3) Networking issues on your libvirt setup?
    │  4) is DHCP enabled on this Domain's network?
    │ 5) if you use bridge network, the domain should have the pkg qemu-agent installed
    │ IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup
    │  timeout while waiting for state to become 'all-addresses-obtained' (last state: 'waiting-addresses', timeout: 5m0s)
    │
    │   with module.server.module.server.module.host.libvirt_domain.domain[0],
    │   on backend_modules/libvirt/host/main.tf line 84, in resource "libvirt_domain" "domain":
    │   84: resource "libvirt_domain" "domain" {
    │
    ╵
    ╷
    │ Error: Error: couldn't retrieve IP address of domain.Please check following:
    │ 1) is the domain running proplerly?
    │ 2) has the network interface an IP address?
    │ 3) Networking issues on your libvirt setup?
    │  4) is DHCP enabled on this Domain's network?
    │ 5) if you use bridge network, the domain should have the pkg qemu-agent installed
    │ IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup
    │  timeout while waiting for state to become 'all-addresses-obtained' (last state: 'waiting-addresses', timeout: 5m0s)
    │
    │   with module.minion.module.minion.module.host.libvirt_domain.domain[0],
    │   on backend_modules/libvirt/host/main.tf line 84, in resource "libvirt_domain" "domain":
    │   84: resource "libvirt_domain" "domain" {
    │
    ╵
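    When the provider times out waiting for addresses, it can help to ask libvirt directly which addresses it knows for each domain, either from its own DHCP leases or from the guest agent (generic virsh commands; domain names as in the virsh list output above):

    virsh domifaddr server --source lease   # addresses from libvirt's DHCP
    virsh domifaddr server --source agent   # needs qemu-guest-agent inside the VM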
    Below are the configs currently set up:
    uyuni:~/sumaform # cat /etc/NetworkManager/NetworkManager.conf
    [main]
    plugins=keyfile
    dhcp=dhclient
    
    [connectivity]
    uri=http://conncheck.opensuse.org
    
    uyuni:~/sumaform #
    uyuni:~/sumaform # cat /etc/NetworkManager/conf.d/localdns.conf
    [main]
    plugins=keyfile
    dns=dnsmasq
    uyuni:~/sumaform #
    uyuni:~/sumaform # cat /etc/NetworkManager/dnsmasq.d/libvirt_dnsmasq.conf
    server=/suse.lab/192.168.122.1
    uyuni:~/sumaform #
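    With that forwarding rule in place, split DNS for the lab domain can be spot-checked from the host by querying libvirt's dnsmasq directly (hostname taken from the /etc/hosts snippet above; assumes dig is installed):

    dig @192.168.122.1 uyuniserver.suse.lab +short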
    uyuni:~/sumaform # systemctl status dnsmasq.service
    × dnsmasq.service - DNS caching server.
         Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; disabled; vendor preset: disabled)
         Active: failed (Result: exit-code) since Wed 2022-11-30 08:19:58 UTC; 31min ago
        Process: 15951 ExecStartPre=/usr/sbin/dnsmasq --test (code=exited, status=0/SUCCESS)
        Process: 16011 ExecStart=/usr/sbin/dnsmasq --log-async --enable-dbus --keep-in-foreground (code=exited, status=2)
       Main PID: 16011 (code=exited, status=2)
    
    Nov 30 08:19:57 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: Starting DNS caching server....
    Nov 30 08:19:57 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[15951]: dnsmasq: syntax check OK.
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[16011]: dnsmasq: failed to create listening socket for port 53: Address already in use
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[16011]: failed to create listening socket for port 53: Address already in use
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[16011]: FAILED to start up
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: dnsmasq.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: dnsmasq.service: Failed with result 'exit-code'.
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: Failed to start DNS caching server..
    uyuni:~/sumaform #
    uyuni:~/sumaform #
    uyuni:~/sumaform # systemctl status NetworkManager
    ● NetworkManager.service - Network Manager
         Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: disabled)
        Drop-In: /usr/lib/systemd/system/NetworkManager.service.d
                 └─NetworkManager-ovs.conf
         Active: active (running) since Wed 2022-11-30 08:19:58 UTC; 31min ago
           Docs: man:NetworkManager(8)
       Main PID: 16197 (NetworkManager)
          Tasks: 4 (limit: 4915)
         CGroup: /system.slice/NetworkManager.service
                 ├─ 15946 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --pid-file=/run/NetworkManager/dnsmasq.pid --listen-address=127.0>
                 └─ 16197 /usr/sbin/NetworkManager --no-daemon
    
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4427] device (vnet5): Activation: starting connection 'vnet5>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4430] device (vnet5): state change: disconnected -> prepare >
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4433] device (vnet5): state change: prepare -> config (reaso>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4435] device (vnet5): state change: config -> ip-config (rea>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4436] device (br0): bridge port vnet5 was attached
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4437] device (vnet5): Activation: connection 'vnet5' enslave>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4438] device (vnet5): state change: ip-config -> ip-check (r>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4515] device (vnet5): state change: ip-check -> secondaries >
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4521] device (vnet5): state change: secondaries -> activated>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4534] device (vnet5): Activation: successful, device activat>
    uyuni:~/sumaform #
    My dnsmasq service is failing because two processes are already listening on port 53.
    I even tried stopping and starting it, but it still fails.
    amentee
    @amentee
    uyuni:~/sumaform # netstat -plunt | grep -i dns
    tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN      15946/dnsmasq
    tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1595/dnsmasq
    udp        0      0 127.0.0.1:53            0.0.0.0:*                           15946/dnsmasq
    udp        0      0 192.168.122.1:53        0.0.0.0:*                           1595/dnsmasq
    udp        0      0 0.0.0.0:67              0.0.0.0:*                           1595/dnsmasq
    uyuni:~/sumaform #
    Any ideas on how to fix this?
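    Since NetworkManager (127.0.0.1:53) and libvirt (192.168.122.1:53) each run their own scoped dnsmasq instance, the standalone dnsmasq.service looks redundant here, so one hedged option is simply to leave it disabled and verify that only the two scoped instances remain:

    systemctl disable --now dnsmasq.service
    ss -plunt | grep ':53 '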
    amentee
    @amentee
    Below is the output of ip addr show:
    uyuni:~/sumaform # ip addr show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 42:01:0a:be:00:04 brd ff:ff:ff:ff:ff:ff
        altname enp0s4
        altname ens4
        inet 10.190.0.4/32 scope global dynamic eth0
           valid_lft 2662sec preferred_lft 2662sec
        inet6 fe80::e659:d850:f8ad:bf2f/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
        link/ether 42:01:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
        altname enp0s5
        altname ens5
    4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 42:01:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::4001:aff:fe00:2/64 scope link
           valid_lft forever preferred_lft forever
    5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether 52:54:00:60:ce:c2 brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global noprefixroute virbr0
           valid_lft forever preferred_lft forever
    14: vnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
        link/ether fe:54:00:52:4d:2b brd ff:ff:ff:ff:ff:ff
        inet6 fe80::fc54:ff:fe52:4d2b/64 scope link
           valid_lft forever preferred_lft forever
    15: vnet9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
        link/ether fe:54:00:7e:ef:99 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::fc54:ff:fe7e:ef99/64 scope link
           valid_lft forever preferred_lft forever
    uyuni:~/sumaform #
    virbr0 is showing as down.
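    virbr0 showing NO-CARRIER/DOWN is consistent with no guest interface being attached to it: the vnet devices above are enslaved to br0 instead. Which bridge each VM NIC is plugged into can be confirmed with generic virsh commands:

    virsh domiflist server
    virsh domiflist minion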
    amentee
    @amentee
    @cbosdonnat:matrix.org: tried everything but no luck. Still the same error as above.
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    [m]
    your main.tf still has the line bridge = "br0", so your VMs don't use the libvirt network you have defined. You need to remove that line; the default is to use the libvirt default network. Of course you will need to rerun terraform apply, and that may destroy what you already have.
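    Concretely, that amounts to deleting the bridge line from provider_settings and re-applying (a hedged sketch; check the sed pattern against the actual main.tf before running it):

    sed -i '/bridge *= *"br0"/d' main.tf
    terraform apply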
    amentee
    @amentee
    @cbosdo: thanks for the help. It worked after removing the br0 line from main.tf. Now it's throwing avahi error messages. Do I need to install the avahi package?
    ╷
    │ Error: remote-exec provisioner error
    │
    │   with module.minion.module.minion.module.host.null_resource.provisioning[0],
    │   on backend_modules/libvirt/host/main.tf line 251, in resource "null_resource" "provisioning":
    │  251:   provisioner "remote-exec" {
    │
    │ error executing "/tmp/terraform_1286222172.sh": Process exited with status 1
    ╵
    uyuni:~/sumaform

    Below are the four message blocks appearing as errors while running terraform apply:

    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Changes:
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):           ID: avahi_pkg
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Function: pkg.latest
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):       Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Comment: An exception occurred in this state: Traceback (most recent call last):
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):           ID: avahi_change_domain
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Function: file.replace
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):         Name: /etc/avahi/avahi-daemon.conf
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):       Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Comment: /etc/avahi/avahi-daemon.conf: file not found
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Started: 09:36:34.797556
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Duration: 4.735 ms
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Changes:
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):           ID: avahi_restrict_interfaces
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Function: file.replace
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):         Name: /etc/avahi/avahi-daemon.conf
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):       Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Comment: /etc/avahi/avahi-daemon.conf: file not found
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Started: 09:36:34.802938
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Duration: 3.768 ms
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Changes:
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------

    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ID: avahi_enable_service
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Function: service.running
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Name: avahi-daemon
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): Comment: The named service avahi-daemon is not available
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): S

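    If avahi is to be kept, one hedged way forward (the minion image is openSUSE, so zypper is assumed) is to install and enable it inside the minion VM and then re-run terraform apply:

    zypper --non-interactive install avahi
    systemctl enable --now avahi-daemon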
    amentee
    @amentee
    After installing the avahi package on the minion, I re-ran terraform apply.
    Now the errors are reduced to one:
    ╷
    │ Error: remote-exec provisioner error
    │
    │   with module.minion.module.minion.module.host.null_resource.provisioning[0],
    │   on backend_modules/libvirt/host/main.tf line 251, in resource "null_resource" "provisioning":
    │  251:   provisioner "remote-exec" {
    │
    │ error executing "/tmp/terraform_1041118824.sh": Process exited with status 1
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Changes:
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec): ----------
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):           ID: avahi_pkg
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):     Function: pkg.latest
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):       Result: False
    module.minion.module.minion.module.host.null_resource.provisioning[0] (remote-exec):      Comment: An exception occurred in this state: Traceback (most recent call last):
    uyuni:~/sumaform # virsh list
     Id   Name     State
    ------------------------
     11   minion   running
     12   server   running
    
    uyuni:~/sumaform #
    The good news is that I can log in to the server and run the command:
    server:~ # spacewalk-repo-sync --help
    Usage: spacewalk-repo-sync [options]
    
    Options:
      -h, --help            show this help message and exit
      -l, --list            List the custom channels with the associated
                            repositories.
      -s, --show-packages   List all packages in a specified channel.
      -u URL, --url=URL     The url of the repository. Can be used multiple times.
      -c CHANNEL_LABEL, --channel=CHANNEL_LABEL
                            The label of the channel to sync packages to. Can be
                            used multiple times.
      -p PARENT_LABEL, --parent-channel=PARENT_LABEL
                            Synchronize the parent channel and all its child
                            channels.
      -d, --dry-run         Test run. No sync takes place.
      --latest              Sync latest packages only. Use carefully - you might
                            need to fix some dependencies on your own.
      -g CONFIG, --config=CONFIG
                            Configuration file
    eins
    @eins

    Hello there.

    Random thoughts on the ip addr show output above: you can use a few flags with the ip command to output just the IP addresses or the link status.
    For example:

    ip -br -c a
    ip -br -c -4 a
    ip -br -c -4 l
    ip -br -4 r
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    [m]
    @amentee: I usually disable avahi by adding use_avahi = false in the base module
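    In main.tf terms that is a single extra line inside the base module block (a sketch only, using the variable name from the message above):

    # in main.tf, inside module "base" { ... }:
    #   use_avahi = false
    terraform apply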
    amentee
    @amentee
    @cbosdonnat:matrix.org: I am working on this issue (uyuni-project/uyuni#6128). It's a beginner question, but if I edit some code by creating a new branch and committing it, which function in the test case actually tests running two instances of the spacewalk-repo-sync command? The test code is at https://github.com/uyuni-project/uyuni/blob/master/python/test/unit/spacewalk/satellite_tools/test_reposync.py
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    [m]
    @amentee: I think it will be hard to create a unit test for this
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    [m]
    the best thing to do is to fork the project and work in a branch in your fork
    amentee
    @amentee
    @cbosdo: okay. But how do I access the web UI? I mean, I need to generate a test case, like syncing the channel first and then running spacewalk-repo-sync --help. Or is there another way to reproduce the issue, so that once I change the code and try to reproduce it again I can see whether the amended code has fixed it or not?
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    [m]
    @amentee: the web UI is on the server VM you deployed with sumaform; just point your browser to it. The credentials for the web UI are admin/admin, and for all of the sumaform-generated VMs root/linux.
    To reproduce the issue you can add a channel with a repository in the Software > Manage menu.
    At the repository level there is a Sync page with a button to trigger the synchronization; it runs spacewalk-repo-sync under the hood. To reproduce the issue, just run spacewalk-repo-sync --help while the repo is still synchronizing.
    The reposync logs are in /var/log/rhn/reposync* on the server VM.
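    A hedged sketch of reproducing the race entirely from the server VM's shell (the channel label is a placeholder for one created under Software > Manage):

    spacewalk-repo-sync -c my-test-channel &   # long-running sync in the background
    spacewalk-repo-sync --help                 # run while the sync is still going
    tail -n 50 /var/log/rhn/reposync/*.log     # the reposync logs mentioned above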