    amentee
    @amentee
    Okay
    egotthold
    @egotthold:matrix.org
    [m]
    Replace uyuni-server.tf.local with the actual hostname of your Uyuni Server.
    You should have seen its hostname during terraform apply.
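    For reference, one way to recover that hostname afterwards is to ask libvirt what leases and addresses it knows about (a minimal sketch; the default network and the domain name "server" are assumptions):
    # DHCP lease table of the libvirt network, including registered hostnames
    sudo virsh net-dhcp-leases default
    # Addresses as reported through the qemu guest agent for a given domain
    sudo virsh domifaddr server --source agent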
    amentee
    @amentee
    Okay, let me check and get back.
    amentee
    @amentee
    I re-ran my terraform apply.
    It ended up creating the VMs, and gave the errors below:
    ╷
    │ Error: Error while determining image type for http://download.suse.de/install/SLE-12-SP5-JeOS-GM/SLES12-SP5-JeOS.x86_64-12.5-OpenStack-Cloud-GM.qcow2: Get "http://download.suse.de/install/SLE-12-SP5-JeOS-GM/SLES12-SP5-JeOS.x86_64-12.5-OpenStack-Cloud-GM.qcow2": dial tcp: lookup download.suse.de on 169.254.169.254:53: no such host
    │
    │   with module.base.module.base_backend.libvirt_volume.volumes["sles12sp5o"],
    │   on backend_modules/libvirt/base/main.tf line 56, in resource "libvirt_volume" "volumes":
    │   56: resource "libvirt_volume" "volumes" {
    │
    ╵
    ╷
    │ Error: Error while determining image type for http://schnell.suse.de/SLE12/SLE-12-SP4-JeOS-GM/SLES12-SP4-JeOS.x86_64-12.4-OpenStack-Cloud-GM.qcow2: Get "http://schnell.suse.de/SLE12/SLE-12-SP4-JeOS-GM/SLES12-SP4-JeOS.x86_64-12.4-OpenStack-Cloud-GM.qcow2": dial tcp: lookup schnell.suse.de on 169.254.169.254:53: no such host
    │
    │   with module.base.module.base_backend.libvirt_volume.volumes["sles12sp4o"],
    │   on backend_modules/libvirt/base/main.tf line 56, in resource "libvirt_volume" "volumes":
    │   56: resource "libvirt_volume" "volumes" {
    │
    ╵
    ╷
    │ Error: Error while determining image type for http://download.suse.de/install/SLE-15-SP4-Minimal-GM/SLES15-SP4-Minimal-VM.x86_64-OpenStack-Cloud-GM.qcow2: Get "http://download.suse.de/install/SLE-15-SP4-Minimal-GM/SLES15-SP4-Minimal-VM.x86_64-OpenStack-Cloud-GM.qcow2": dial tcp: lookup download.suse.de on 169.254.169.254:53: no such host
    │
    │   with module.base.module.base_backend.libvirt_volume.volumes["sles15sp4o"],
    │   on backend_modules/libvirt/base/main.tf line 56, in resource "libvirt_volume" "volumes":
    │   56: resource "libvirt_volume" "volumes" {
    │
    ╵
    ╷
    │ Error: Error while determining image type for http://download.suse.de/install/SLE-15-SP3-JeOS-GM/SLES15-SP3-JeOS.x86_64-15.3-OpenStack-Cloud-GM.qcow2: Get "http://download.suse.de/install/SLE-15-SP3-JeOS-GM/SLES15-SP3-JeOS.x86_64-15.3-OpenStack-Cloud-GM.qcow2": dial tcp: lookup download.suse.de on 169.254.169.254:53: no such host
    │
    │   with module.base.module.base_backend.libvirt_volume.volumes["sles15sp3o"],
    │   on backend_modules/libvirt/base/main.tf line 56, in resource "libvirt_volume" "volumes":
    │   56: resource "libvirt_volume" "volumes" {
    │
    ╵
    Still the same output:
    uyuni:~/sumaform # virsh list --all
     Id   Name   State
    --------------------
    
    uyuni:~/sumaform # virsh list
     Id   Name   State
    --------------------
    
    uyuni:~/sumaform #
    amentee
    @amentee
    Also, below is the output that was shown in my terraform apply:
    amentee
    @amentee
      # module.client.module.client.module.host.libvirt_domain.domain[0] will be created
      + resource "libvirt_domain" "domain" {
          + arch        = (known after apply)
          + cloudinit   = (known after apply)
          + cpu         = {
              + "mode" = "custom"
            }
          + disk        = [
              + {
                  + block_device = null
                  + file         = null
                  + scsi         = null
                  + url          = null
                  + volume_id    = (known after apply)
                  + wwn          = null
                },
            ]
          + emulator    = (known after apply)
          + fw_cfg_name = "opt/com.coreos/config"
          + id          = (known after apply)
          + machine     = (known after apply)
          + memory      = 1024
          + name        = "client"
          + qemu_agent  = true
          + running     = true
          + vcpu        = 1
    
          + console {
              + source_host    = "127.0.0.1"
              + source_service = "0"
              + target_port    = "0"
              + target_type    = "serial"
              + type           = "pty"
            }
          + console {
              + source_host    = "127.0.0.1"
              + source_service = "0"
              + target_port    = "1"
              + target_type    = "virtio"
              + type           = "pty"
            }
    
          + graphics {
              + autoport       = true
              + listen_address = "0.0.0.0"
              + listen_type    = "address"
              + type           = "spice"
            }
    
          + network_interface {
              + addresses      = (known after apply)
              + bridge         = "br0"
              + hostname       = (known after apply)
              + mac            = (known after apply)
              + network_id     = (known after apply)
              + network_name   = (known after apply)
              + wait_for_lease = true
            }
    
          + xml {}
        }
    egotthold
    @egotthold:matrix.org
    [m]
    Yeah, if I interpret this output correctly, the VMs are not actually created; it fails while downloading the images. You need to use the images key in main.tf to exclude the SLES images.
    ^ @amentee:
    amentee
    @amentee
    I shared only the error lines. There were also 6-7 messages about VMs being created.
    I can see the images under /vmdisks
    egotthold
    @egotthold:matrix.org
    [m]
    Well, can you see the images for the OS versions, or also the images for the hosts?
    egotthold
    @egotthold:matrix.org
    [m]
    I am asking because normally the machine images are not visible before all images are successfully downloaded.
    amentee
    @amentee
    No, I can see only the images for the OS versions.
    So I need to put images = ["sles12sp5o", "sles12sp4o", "sles15sp4o", "sles15sp3o"] in main.tf.
    Shall I put this under the base module in main.tf?
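    A minimal sketch of what that could look like, assuming images is a top-level argument of the base module as in sumaform's main.tf.libvirt.example (listing only the images your hosts actually use keeps sumaform from fetching the SUSE-internal SLES URLs):
    module "base" {
      source = "./modules/base"

      cc_username = ""
      cc_password = ""

      # Only the images listed here are downloaded
      images = ["opensuse154o"]

      provider_settings = {
        pool   = "vmdisks"
        bridge = "br0"
      }
    }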
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    [m]
    @amentee: There are several issues in your main.tf:
    • you should leave the cc_username and cc_password values as "" unless you have SUSE Customer Center credentials.
    • you don't need the centos7 image. Just the opensuse154o image should be enough: use it for your minion
    • you can remove the client machine: as we are removing the traditional stack, it won't be useful to you for development
    • as mentioned earlier, use uyuni-released, uyuni-nightly or uyuni-master
    • using the br0 bridge means that you need to set up DNS and DHCP correctly in your network. DNS is crucial for Uyuni to operate correctly. You can handle this at the libvirt network level if needed. I can show you my libvirt network definition to give more details if you want
    @amentee: you can remove the image configuration on the server, but I would leave image = "opensuse154o" on the minion. And don't remove the images line in the base unless you want to download all the distro images we have 😉
    amentee
    @amentee
    @cbosdonnat:matrix.org: please share your libvirt network configuration, and your main.tf as well if possible.
    Cédric Bosdonnat
    @cbosdonnat:matrix.org
    [m]
    Here it is stripped to the minimum viable parts:
    <network>
      <name>default</name>
      <forward mode='nat' />
      <bridge name='virbr0' stp='on' delay='0'/>
      <domain name='mgr.lab' localOnly='yes'/>
      <ip address='192.168.122.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.122.2' end='192.168.122.254'/>
          <host mac='2A:C3:A7:A6:01:00' name='dev-srv' ip='192.168.122.110'/>
          <host mac='2A:C3:A7:A6:01:01' name='dev-min-kvm' ip='192.168.122.111'/>
          <host mac='2A:C3:A7:A6:01:02' name='dev-cli-kvm' ip='192.168.122.112'/>
        </dhcp>
      </ip>
    </network>
    With this, you need to set the MAC addresses in main.tf and add entries in your /etc/hosts for your VMs. The libvirt dnsmasq only answers the VMs, not the virtualization host.
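    A sketch of what pinning one of those MAC addresses could look like (it assumes the libvirt backend's per-host provider_settings accepts a mac key, as in the sumaform libvirt documentation; names and addresses are taken from the XML above):
    module "server" {
      source             = "./modules/server"
      base_configuration = module.base.configuration

      name            = "dev-srv"
      product_version = "uyuni-released"

      provider_settings = {
        # must match the <host> entry for dev-srv in the libvirt network
        mac = "2A:C3:A7:A6:01:00"
      }
    }
    The matching /etc/hosts entry on the virtualization host would then be something like:
    192.168.122.110 dev-srv.mgr.lab dev-srv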
    amentee
    @amentee
    @cbosdonnat:matrix.org -- Below is my modified main.tf
    terraform {
      required_version = "1.0.10"
      required_providers {
        libvirt = {
          source = "dmacvicar/libvirt"
          version = "0.6.3"
        }
      }
    }
    
    provider "libvirt" {
    }
    
    
    
    module "base" {
      source = "./modules/base"
    
      cc_username = ""
      cc_password = ""
    
    
      provider_settings = {
        bridge = "br0"
        pool = "vmdisks"
        images = ["centos7"]
        domain = "suse.lab"
        ssh_key_path = "/home/sachin/private"
      }
    }
    
    module "server" {
      source = "./modules/server"
      base_configuration = module.base.configuration
    
      name = "server"
      product_version = "uyuni-released"
    
    }
    
    
    
    module "minion" {
      source = "./modules/minion"
      base_configuration = module.base.configuration
    
      name = "minion"
      image = "opensuse154o"
      server_configuration = module.server.configuration
    }
    Below is the dump of the libvirt default network:
    uyuni:~/sumaform # virsh net-dumpxml default
    <network>
      <name>default</name>
      <uuid>d3e5a31d-9f9d-49f2-9989-7356ba2b3d66</uuid>
      <forward mode='nat'>
        <nat>
          <port start='1024' end='65535'/>
        </nat>
      </forward>
      <bridge name='virbr0' stp='on' delay='0'/>
      <mac address='52:54:00:60:ce:c2'/>
      <domain name='suse.lab' localOnly='yes'/>
      <ip address='192.168.122.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.122.10' end='192.168.122.254'/>
          <host mac='52:54:00:09:af:bf' ip='192.168.122.2'/>
          <host mac='52:54:00:76:78:dc' ip='192.168.122.3'/>
          <host mac='52:54:00:90:15:99' ip='192.168.122.4'/>
        </dhcp>
      </ip>
    </network>
    
    uyuni:~/sumaform #
    Below is my /etc/hosts file:
    uyuni:~/sumaform # cat /etc/hosts
    #
    # hosts         This file describes a number of hostname-to-address
    #               mappings for the TCP/IP subsystem.  It is mostly
    #               used at boot time, when no name servers are running.
    #               On small systems, this file can be used instead of a
    #               "named" name server.
    # Syntax:
    #
    # IP-Address  Full-Qualified-Hostname  Short-Hostname
    #
    127.0.0.1       localhost
    # special IPv6 addresses
    ::1             localhost ipv6-localhost ipv6-loopback
    fe00::0         ipv6-localnet
    ff00::0         ipv6-mcastprefix
    ff02::1         ipv6-allnodes
    ff02::2         ipv6-allrouters
    ff02::3         ipv6-allhosts
    
    # Pre-set matadata server shortcut to speed up access image build overlay
    169.254.169.254 metadata.google.internal metadata.google.internal
    0.0.0.0 test
    
    192.168.122.2 uyuniserver.suse.lab
    192.168.122.3 leap154.suse.lab
    192.168.122.4 centos7.suse.lab
    uyuni:~/sumaform #
    Please let me know if the configs are correct or if something else needs to be modified.
    amentee
    @amentee
    I ran terraform apply again and now it's showing the error below:
    module.server.module.server.module.host.libvirt_domain.domain[0]: Creating...
    ╷
    │ Error: Error defining libvirt domain: virError(Code=8, Domain=10, Message='invalid argument: could not get preferred machine for /usr/bin/qemu-system-x86_64 type=kvm')
    │
    │   with module.server.module.server.module.host.libvirt_domain.domain[0],
    │   on backend_modules/libvirt/host/main.tf line 84, in resource "libvirt_domain" "domain":
    │   84: resource "libvirt_domain" "domain" {
    │
    ╵
    ╷
    │ Error: Error defining libvirt domain: virError(Code=8, Domain=10, Message='invalid argument: could not get preferred machine for /usr/bin/qemu-system-x86_64 type=kvm')
    │
    │   with module.minion.module.minion.module.host.libvirt_domain.domain[0],
    │   on backend_modules/libvirt/host/main.tf line 84, in resource "libvirt_domain" "domain":
    │   84: resource "libvirt_domain" "domain" {
    │
    ╵
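    This libvirt error ("could not get preferred machine for /usr/bin/qemu-system-x86_64 type=kvm") usually means libvirt's cached view of the QEMU capabilities no longer matches what is installed, or that KVM is not usable on the host. A sketch of the usual checks; whether they apply to this particular host is an assumption:
    # Is KVM available to the virtualization host?
    ls -l /dev/kvm

    # Does the emulator libvirt is asking for actually exist?
    /usr/bin/qemu-system-x86_64 --version

    # libvirt caches QEMU capabilities; clearing the cache and restarting
    # libvirtd forces it to re-probe the installed QEMU
    sudo rm -f /var/cache/libvirt/qemu/capabilities/*.xml
    sudo systemctl restart libvirtd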
    amentee
    @amentee
    So here is the update:
    I fixed the above error.
    I can see the minion and server running.
    uyuni:~/sumaform # virsh list
     Id   Name     State
    ------------------------
     3    minion   running
     4    server   running
    
    uyuni:~/sumaform #
    amentee
    @amentee
    Now I am facing a different error:
    ╷
    │ Error: Error: couldn't retrieve IP address of domain.Please check following:
    │ 1) is the domain running proplerly?
    │ 2) has the network interface an IP address?
    │ 3) Networking issues on your libvirt setup?
    │  4) is DHCP enabled on this Domain's network?
    │ 5) if you use bridge network, the domain should have the pkg qemu-agent installed
    │ IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup
    │  timeout while waiting for state to become 'all-addresses-obtained' (last state: 'waiting-addresses', timeout: 5m0s)
    │
    │   with module.server.module.server.module.host.libvirt_domain.domain[0],
    │   on backend_modules/libvirt/host/main.tf line 84, in resource "libvirt_domain" "domain":
    │   84: resource "libvirt_domain" "domain" {
    │
    ╵
    ╷
    │ Error: Error: couldn't retrieve IP address of domain.Please check following:
    │ 1) is the domain running proplerly?
    │ 2) has the network interface an IP address?
    │ 3) Networking issues on your libvirt setup?
    │  4) is DHCP enabled on this Domain's network?
    │ 5) if you use bridge network, the domain should have the pkg qemu-agent installed
    │ IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup
    │  timeout while waiting for state to become 'all-addresses-obtained' (last state: 'waiting-addresses', timeout: 5m0s)
    │
    │   with module.minion.module.minion.module.host.libvirt_domain.domain[0],
    │   on backend_modules/libvirt/host/main.tf line 84, in resource "libvirt_domain" "domain":
    │   84: resource "libvirt_domain" "domain" {
    │
    ╵
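    To narrow this down, it can help to ask libvirt directly what addresses it knows for the two domains (a sketch; since the hosts sit on the br0 bridge, the agent-based query is the relevant one and requires qemu-guest-agent inside the guests):
    # Addresses from the DHCP lease table of the libvirt network (NAT setups)
    sudo virsh net-dhcp-leases default

    # Addresses as reported by the qemu guest agent inside each VM
    sudo virsh domifaddr server --source agent
    sudo virsh domifaddr minion --source agent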
    Below are the configs I currently have set up:
    uyuni:~/sumaform # cat /etc/NetworkManager/NetworkManager.conf
    [main]
    plugins=keyfile
    dhcp=dhclient
    
    [connectivity]
    uri=http://conncheck.opensuse.org
    
    uyuni:~/sumaform #
    uyuni:~/sumaform # cat /etc/NetworkManager/conf.d/localdns.conf
    [main]
    plugins=keyfile
    dns=dnsmasq
    uyuni:~/sumaform #
    uyuni:~/sumaform # cat /etc/NetworkManager/dnsmasq.d/libvirt_dnsmasq.conf
    server=/suse.lab/192.168.122.1
    uyuni:~/sumaform #
    uyuni:~/sumaform # systemctl status dnsmasq.service
    × dnsmasq.service - DNS caching server.
         Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; disabled; vendor preset: disabled)
         Active: failed (Result: exit-code) since Wed 2022-11-30 08:19:58 UTC; 31min ago
        Process: 15951 ExecStartPre=/usr/sbin/dnsmasq --test (code=exited, status=0/SUCCESS)
        Process: 16011 ExecStart=/usr/sbin/dnsmasq --log-async --enable-dbus --keep-in-foreground (code=exited, status=2)
       Main PID: 16011 (code=exited, status=2)
    
    Nov 30 08:19:57 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: Starting DNS caching server....
    Nov 30 08:19:57 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[15951]: dnsmasq: syntax check OK.
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[16011]: dnsmasq: failed to create listening socket for port 53: Address already in use
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[16011]: failed to create listening socket for port 53: Address already in use
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal dnsmasq[16011]: FAILED to start up
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: dnsmasq.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: dnsmasq.service: Failed with result 'exit-code'.
    Nov 30 08:19:58 uyuni.asia-south2-a.c.our-bruin-361409.internal systemd[1]: Failed to start DNS caching server..
    uyuni:~/sumaform #
    uyuni:~/sumaform #
    uyuni:~/sumaform # systemctl status NetworkManager
    NetworkManager.service - Network Manager
         Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: disabled)
        Drop-In: /usr/lib/systemd/system/NetworkManager.service.d
                 └─NetworkManager-ovs.conf
         Active: active (running) since Wed 2022-11-30 08:19:58 UTC; 31min ago
           Docs: man:NetworkManager(8)
       Main PID: 16197 (NetworkManager)
          Tasks: 4 (limit: 4915)
         CGroup: /system.slice/NetworkManager.service
                 ├─ 15946 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --pid-file=/run/NetworkManager/dnsmasq.pid --listen-address=127.0>
                 └─ 16197 /usr/sbin/NetworkManager --no-daemon
    
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4427] device (vnet5): Activation: starting connection 'vnet5>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4430] device (vnet5): state change: disconnected -> prepare >
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4433] device (vnet5): state change: prepare -> config (reaso>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4435] device (vnet5): state change: config -> ip-config (rea>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4436] device (br0): bridge port vnet5 was attached
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4437] device (vnet5): Activation: connection 'vnet5' enslave>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4438] device (vnet5): state change: ip-config -> ip-check (r>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4515] device (vnet5): state change: ip-check -> secondaries >
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4521] device (vnet5): state change: secondaries -> activated>
    Nov 30 08:40:12 uyuni.asia-south2-a.c.our-bruin-361409.internal NetworkManager[16197]: <info>  [1669797612.4534] device (vnet5): Activation: successful, device activat>
    uyuni:~/sumaform #
    My dnsmasq service is failing because two dnsmasq instances are already listening on port 53.
    I even tried to stop and start it, but it still fails.
    amentee
    @amentee
    uyuni:~/sumaform # netstat -plunt | grep -i dns
    tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN      15946/dnsmasq
    tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1595/dnsmasq
    udp        0      0 127.0.0.1:53            0.0.0.0:*                           15946/dnsmasq
    udp        0      0 192.168.122.1:53        0.0.0.0:*                           1595/dnsmasq
    udp        0      0 0.0.0.0:67              0.0.0.0:*                           1595/dnsmasq
    uyuni:~/sumaform #
    Any ideas how to fix this?
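    One reading of the netstat output above: the dnsmasq on 127.0.0.1:53 (PID 15946) is the instance NetworkManager spawns because of dns=dnsmasq, and the one on 192.168.122.1:53 is libvirt's, so the standalone dnsmasq.service has nothing left to bind to. A sketch of checking this and disabling the standalone unit, on the assumption that nothing else on the host relies on it:
    # Which processes already own port 53?
    sudo ss -lptn 'sport = :53'

    # The standalone caching service is redundant when NetworkManager and
    # libvirt each run their own dnsmasq
    sudo systemctl disable --now dnsmasq.service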
    amentee
    @amentee
    Below is the output of ip addr show: