    Dan Čermák
    @dcermak
    (and whoever is behind it responded that the email was genuine)
    Ben Johnson
    @cbj4074
    Let's see the DKIM headers from the email :D
    Nico Sap
    @NicoJuicy
    What could go wrong? :p
    Dan Čermák
    @dcermak
    Well I got a T-Shirt and a voucher for HashiConf for replying, so even if it was a scam, I'd say it was pretty worth it for me so far ;-)
    Zburator
    @Zburator
    Hi guys, I have a layman's question:
    Can you start a Vagrant box with the VirtualBox provider on a CentOS 7 machine with no VT-x?
    I'm curious if I can somehow allocate one processor to that virtual machine
    Dan Čermák
    @dcermak
    Don't think that will work; AFAIK VirtualBox needs hardware virtualization support
    alexk345
    @alexk345
    My Laravel Homestead VM does not boot.
    00:00:03.412108 ** End of CPUID dump **
    00:00:03.412133 VMEmt: Halt method global1 (5)
    00:00:03.412299 VMEmt: HaltedGlobal1 config: cNsSpinBlockThresholdCfg=50000
    00:00:03.412364 Changing the VM state from 'CREATING' to 'CREATED'
    00:00:03.413548 SharedFolders host service: Adding host mapping
    00:00:03.413564 Host path '\\?\C:\Dashboard\Workspace\vagrant\homestead', map name 'vagrant', writable, automount=false, automntpnt=, create_symlinks=true, missing=false
    00:00:03.414391 Changing the VM state from 'CREATED' to 'POWERING_ON'
    00:00:03.414535 AIOMgr: Endpoints without assigned bandwidth groups:
    00:00:03.414556 AIOMgr: C:\Users\ghost\VirtualBox VMs\homestead\ubuntu-20.04-amd64-disk001.vmdk
    00:00:03.414880 Changing the VM state from 'POWERING_ON' to 'RUNNING'
    00:00:03.414905 Console: Machine state changed to 'Running'
    00:00:03.420925 VMMDev: Guest Log: BIOS: VirtualBox 6.1.10
    00:00:03.421292 PCI: Setting up resources and interrupts
    00:00:03.442243 PIT: mode=2 count=0x10000 (65536) - 18.20 Hz (ch=0)
    00:00:03.453650 ERROR [COM]: aRC=VBOX_E_VM_ERROR (0x80bb0003) aIID={4680b2de-8690-11e9-b83d-5719e53cf1de} aComponent={DisplayWrap} aText={Could not take a screenshot (VERR_NOT_SUPPORTED)}, preserve=false aResultDetail=-37
    00:00:03.465650 Display::i_handleDisplayResize: uScreenId=0 pvVRAM=0000000000000000 w=720 h=400 bpp=0 cbLine=0x0 flags=0x0 origin=0,0
    00:00:03.498725 VMMDev: Guest Log: CPUID EDX: 0x178bfbff
    00:00:03.501896 AHCI#0: Reset the HBA
    00:00:03.501918 VD#0: Cancelling all active requests
    00:00:03.502328 AHCI#0: Port 0 reset
    00:00:03.502400 VD#0: Cancelling all active requests
    00:00:03.504496 VMMDev: Guest Log: BIOS: AHCI 0-P#0: PCHS=16383/16/63 LCHS=1024/255/63 0x0000000040000000 sectors
    00:00:03.518031 PIT: mode=2 count=0x48d3 (18643) - 64.00 Hz (ch=0)
    00:00:03.519347 Display::i_handleDisplayResize: uScreenId=0 pvVRAM=0000000006b50000 w=640 h=480 bpp=32 cbLine=0xA00 flags=0x0 origin=0,0
    00:00:05.355824 NAT: Old socket recv size: 64KB
    00:00:05.355863 NAT: Old socket send size: 64KB
    00:00:05.999713 Display::i_handleDisplayResize: uScreenId=0 pvVRAM=0000000000000000 w=720 h=400 bpp=0 cbLine=0x0 flags=0x0 origin=0,0
    00:00:06.015907 PIT: mode=2 count=0x10000 (65536) - 18.20 Hz (ch=0)
    00:00:06.016702 VMMDev: Guest Log: BIOS: Boot : bseqnr=1, bootseq=0032
    00:00:06.018399 VMMDev: Guest Log: BIOS: Booting from Hard Disk...
    00:00:06.176131 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=81
    00:00:06.179167 VMMDev: Guest Log: int13_harddisk: function 00, unmapped device for ELDL=81
    00:00:06.182116 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=81
    00:00:06.185177 VMMDev: Guest Log: int13_harddisk: function 00, unmapped device for ELDL=81
    00:00:06.188234 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=81
    00:00:06.191194 VMMDev: Guest Log: int13_harddisk: function 00, unmapped device for ELDL=81
    00:00:07.181246 Display::i_handleDisplayResize: uScreenId=0 pvVRAM=0000000006b50000 w=640 h=480 bpp=32 cbLine=0xA00 flags=0x0 origin=0,0
    00:00:13.851770 Display::i_handleDisplayResize: uScreenId=0 pvVRAM=0000000000000000 w=720 h=400 bpp=0 cbLine=0x0 flags=0x0 origin=0,0
    00:00:16.030195 GIM: KVM: VCPU 0: Enabled system-time struct. at 0x0000000010001000 - u32TscScale=0x804a6fdd i8TscShift=0 uVersion=2 fFlags=0x1 uTsc=0x0 uVirtNanoTS=0x0
    00:00:16.030233 TM: Host/VM is not suitable for using TSC mode 'RealTscOffset', request to change TSC mode ignored
    00:00:16.166319 GIM: KVM: Enabled wall-clock struct. at 0x0000000010000000 - u32Sec=1594387259 u32Nano=115290300 uVersion=2
    00:00:16.249854 PIT: mode=2 count=0x12a5 (4773) - 249.98 Hz (ch=0)
    00:00:16.278282 MsrExit/0: 0010:ffffffffa44788e8/LM: RDMSR 00000140 -> 00000000 / VERR_CPUM_RAISE_GP_0
    STUCK HERE
    alexk345
    @alexk345
    tried another Laravel box...
    same issue

    λ vagrant up
    Bringing machine 'default' up with 'virtualbox' provider...
    ==> default: Box 'sternpunkt/jimmybox' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
    ==> default: Loading metadata for box 'sternpunkt/jimmybox'
    default: URL: https://vagrantcloud.com/sternpunkt/jimmybox
    ==> default: Adding box 'sternpunkt/jimmybox' (v3.0.2) for provider: virtualbox
    default: Downloading: https://vagrantcloud.com/sternpunkt/boxes/jimmybox/versions/3.0.2/providers/virtualbox.box
    Download redirected to host: vagrantcloud-files-production.s3.amazonaws.com
    default:
    ==> default: Successfully added box 'sternpunkt/jimmybox' (v3.0.2) for 'virtualbox'!
    ==> default: Importing base box 'sternpunkt/jimmybox'...
    ==> default: Matching MAC address for NAT networking...
    ==> default: Checking if box 'sternpunkt/jimmybox' version '3.0.2' is up to date...
    ==> default: Setting the name of the VM: homestead_default_1594389351893_24926
    ==> default: Clearing any previously set network interfaces...
    ==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    ==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
    ==> default: Booting VM...
    ==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    Timed out while waiting for the machine to boot. This means that
    Vagrant was unable to communicate with the guest machine within
    the configured ("config.vm.boot_timeout" value) time period.

    If you look above, you should be able to see the error(s) that
    Vagrant had when attempting to connect to the machine. These errors
    are usually good hints as to what may be wrong.

    If you're using a custom box, make sure that networking is properly
    working and you're able to connect to the machine. It is a common
    problem that networking isn't setup properly in these boxes.
    Verify that authentication configurations are also setup properly,
    as well.

    If the box appears to be booting properly, you may want to increase
    the timeout ("config.vm.boot_timeout") value.

    alexk345
    @alexk345
    It worked when I turned both Hyper-V and virtualization (BIOS) off:
    bcdedit /set hypervisorlaunchtype off
    Open PowerShell in admin mode, paste "bcdedit /set hypervisorlaunchtype off", press Enter, then reboot.
    lkthomas
    @lkthomas
    hey all
    does Vagrant support LXC remote container spawning ?
    Sophia Castellarin
    @soapy1
    heya @lkthomas it looks like there is a plugin for this https://github.com/fgrehm/vagrant-lxc however, the vagrant team is not involved in maintaining it. Also, you can find lxc boxes on vagrant cloud https://app.vagrantup.com/boxes/search?provider=lxc
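    A minimal sketch in case you want to try it (untested; the box name is just an example from that Vagrant Cloud search, and lxc.customize is taken from the plugin's README):
    vagrant plugin install vagrant-lxc
    # Vagrantfile (example only)
    Vagrant.configure("2") do |config|
      config.vm.box = "fgrehm/trusty64-lxc"   # example lxc box from Vagrant Cloud
      config.vm.provider :lxc do |lxc|
        # plugin-specific knob: cap the container's memory via cgroups
        lxc.customize "cgroup.memory.limit_in_bytes", "1024M"
      end
    end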
    lkthomas
    @lkthomas
    ok thanks @soapy1
    Dan Čermák
    @dcermak

    heya @lkthomas it looks like there is a plugin for this https://github.com/fgrehm/vagrant-lxc however, the vagrant team is not involved in maintaining it. Also, you can find lxc boxes on vagrant cloud https://app.vagrantup.com/boxes/search?provider=lxc

    Note that the plugin had its last commit 2 years ago, so your mileage may vary…

    I actually wanted to package it for openSUSE, but skipped it because upstream appears to be pretty much dead (like so many other plugins, unfortunately)
    daebenji
    @daebenji
    hello folks,
    I'm using libvirt on my machine and everything works well except for Red Hat machines, which cannot use rsync for their synced folders. The problem is that vagrant up tries to install rsync on the generic/rhel8 box, which fails because no repo/subscription management has been set up yet. Is there a way to start with shell provisioning and install rsync before Vagrant tries to install it itself?
    If I do the setup of the machine and then reload it with rsync synced folders enabled, it does work and I have access to my files inside the VM.
    Sophia Castellarin
    @soapy1
    I wonder if you could use a typed trigger to do this https://www.vagrantup.com/docs/triggers/configuration#actions, probably running before the Vagrant::Action::Builtin::SyncedFolder action. I think it would look something like
    config.trigger.before :"Vagrant::Action::Builtin::SyncedFolder", type: :action do |t|
      t.warn = "installing rsync"
      t.run_remote = {inline: "install rsync....."}
    end
    daebenji
    @daebenji
    @soapy1 thanks, will give it a shot
    daebenji
    @daebenji
    @soapy1 had no luck with the triggers. Because you have to set up subscription-manager before you can add repositories and afterwards install packages (rsync in my case), I modified the trigger to:
    config.trigger.before :"Vagrant::Action::Builtin::SyncedFolder", type: :action do |t|
      t.warn = "installing rsync"
      t.run_remote = {inline:"subscription-manager register --username=<my-username> --password=<my-password>"}
      t.run_remote = {inline:"subscription-manager attach"}
      t.run_remote = {inline:"subscription-manager release --set=8.2"}
      t.run_remote = {inline:"yum -y update"}
      t.run_remote = {inline:"yum -y -q install rsync"}
    end
    Do I have to put it into several triggers?
    Sophia Castellarin
    @soapy1
    You can use one trigger; each t.run_remote assignment overwrites the previous one, so only the last command would run. I would specify the run_remote bit like:
    t.run_remote = {inline: <<~EOF
      subscription-manager register --username=<my-username> --password=<my-password>
      subscription-manager attach
      ....
      EOF
    }
    Nik Mohamad Aizuddin
    @nikAizuddin

    Hi, how do I set the domain type in the Vagrantfile? For example, I want to change kvm to tcg.

    Bringing machine 'saferwall-box' up with 'libvirt' provider...
    ==> saferwall-box: Checking if box 'opensuse/Tumbleweed.aarch64' version '1.0.20201030' is up to date...
    WARNING: Nokogiri was built against LibXML version 2.9.10, but has dynamically loaded 2.9.7
    ==> saferwall-box: Creating image (snapshot of base box volume).
    ==> saferwall-box: Creating domain with the following settings...
    ==> saferwall-box:  -- Name:              saferwall-box_saferwall-box
    ==> saferwall-box:  -- Domain type:       kvm
    ==> saferwall-box:  -- Cpus:              2
    ==> saferwall-box:  -- CPU topology:   sockets=1, cores=2, threads=1
    ==> saferwall-box:  -- Feature:           apic
    ==> saferwall-box:  -- Memory:            4096M

    Otherwise, I got the following error:

    Error while creating domain: Error saving the server: Call to virDomainDefineXML failed: unsupported configuration: Emulator '/usr/bin/qemu-system-aarch64' does not support virt type 'kvm'
    Nik Mohamad Aizuddin
    @nikAizuddin

    This is the subpart of my Vagrantfile:

        saferwall_box.vm.provider "libvirt" do |v, override|
          override.vagrant.plugins = config.vagrant.plugins + ["vagrant-libvirt"]
          v.cpus = "2"
          v.cputopology sockets: "1", cores: "2", threads: "1"
          v.memory = "4096"
          v.disk_bus = "virtio"
          v.nic_model_type = "virtio-net-pci"
          v.nested = false
          v.cpu_model = "cortex-a72"
          v.graphics_type = "none"
          v.kvm_hidden = "false"
          v.machine_type = "virt"
          v.machine_arch = "aarch64"
          v.autostart = false
    
          salt_provision_saferwall_box override
        end

    I don't know how to set the domain type.

    daebenji
    @daebenji

    You can use one trigger; each t.run_remote assignment overwrites the previous one, so only the last command would run. I would specify the run_remote bit like:

    t.run_remote = {inline: <<~EOF
      subscription-manager register --username=<my-username> --password=<my-password>
      subscription-manager attach
      ....
      EOF
    }

    I will give it a try

    Nik Mohamad Aizuddin
    @nikAizuddin
    Never mind, I just got it working. Need to use v.driver = "qemu" and some other changes:
        saferwall_box.vm.provider "libvirt" do |v, override|
          override.vagrant.plugins = config.vagrant.plugins + ["vagrant-libvirt"]
          v.cpus = "2"
          v.cputopology sockets: "1", cores: "2", threads: "1"
          v.memory = "4096"
          v.disk_bus = "virtio"
          v.nic_model_type = "virtio-net-pci"
          v.nested = false
          v.cpu_mode = "custom"
          v.cpu_model = "cortex-a72"
          v.graphics_type = "none"
          v.machine_type = "virt"
          v.machine_arch = "aarch64"
          v.driver = "qemu"
          v.autostart = false
    
          salt_provision_saferwall_box override
        end
    daebenji
    @daebenji

    Hello, it seems that the trigger is ignored. Right after the SSH provisioning, Vagrant tries to install rsync as usual.
    Another trigger (executed on destroy) is working fine. Do you have any idea?

    vagrant up
    Bringing machine 'rhel79' up with 'libvirt' provider...
    ==> rhel79: Checking if box 'generic/rhel7' version '3.1.0' is up to date...
    ==> rhel79: Creating image (snapshot of base box volume).
    ==> rhel79: Creating domain with the following settings...
    ==> rhel79:  -- Name:              rhel7_rhel79
    ==> rhel79:  -- Domain type:       kvm
    ==> rhel79:  -- Cpus:              2
    ==> rhel79:  -- Feature:           acpi
    ==> rhel79:  -- Feature:           apic
    ==> rhel79:  -- Feature:           pae
    ==> rhel79:  -- Memory:            2048M
    ==> rhel79:  -- Management MAC:
    ==> rhel79:  -- Loader:
    ==> rhel79:  -- Nvram:
    ==> rhel79:  -- Base box:          generic/rhel7
    ==> rhel79:  -- Storage pool:      default
    ==> rhel79:  -- Image:             /var/lib/libvirt/images/rhel7_rhel79.img (128G)
    ==> rhel79:  -- Volume Cache:      default
    ==> rhel79:  -- Kernel:
    ==> rhel79:  -- Initrd:
    ==> rhel79:  -- Graphics Type:     vnc
    ==> rhel79:  -- Graphics Port:     -1
    ==> rhel79:  -- Graphics IP:       127.0.0.1
    ==> rhel79:  -- Graphics Password: Not defined
    ==> rhel79:  -- Video Type:        cirrus
    ==> rhel79:  -- Video VRAM:        256
    ==> rhel79:  -- Sound Type:
    ==> rhel79:  -- Keymap:            en-us
    ==> rhel79:  -- TPM Path:
    ==> rhel79:  -- INPUT:             type=mouse, bus=ps2
    ==> rhel79: Creating shared folders metadata...
    ==> rhel79: Starting domain.
    ==> rhel79: Waiting for domain to get an IP address...
    ==> rhel79: Waiting for SSH to become available...
        rhel79:
        rhel79: Vagrant insecure key detected. Vagrant will automatically replace
        rhel79: this with a newly generated keypair for better security.
        rhel79:
        rhel79: Inserting generated public key within guest...
        rhel79: Removing insecure key from the guest if it's present...
        rhel79: Key inserted! Disconnecting and reconnecting using new SSH key...
    ==> rhel79: Installing rsync to the VM...
    ==> rhel79: Removing domain...
    The following SSH command responded with a non-zero exit status.
    Vagrant assumes that this means the command failed!
    
    if command -v dnf; then
      dnf -y install rsync
    else
      yum -y install rsync
    fi
    
    
    Stdout from the command:
    
    Loaded plugins: product-id, search-disabled-repos
    No package rsync available.
    
    
    Stderr from the command:
    
    http://scientificlinux.physik.uni-muenchen.de/mirror/epel/7/x86_64/repodata/aac7d1bf9045f059c953fc6dd1299f734b2f8b841c94e9bea80f1ac4788fbe35-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
    Trying other mirror.
    To address this issue please refer to the below knowledge base article
    
    https://access.redhat.com/articles/1320623
    
    If above article doesn't help to resolve this issue please open a ticket with Red Hat Support.
    
    Error: Nothing to do
    This is the output when bringing it up with the trigger configured as:
    rhel79.trigger.before :all do |trigger|
      trigger.warn = "install rsync"
      trigger.run_remote = {inline: <<~EOF
        subscription-manager register --username=<my-username> --password=<my-password>
        subscription-manager attach --auto
        subscription-manager release --set=7.9
        yum -y update
        yum -y -q install rsync
        EOF
      }
    end
    Sophia Castellarin
    @soapy1

    oof, my bad. So, I missed two things:
    (1) typed triggers, like ones based on actions, are an experimental Vagrant feature (https://www.vagrantup.com/docs/experimental#typed_triggers). In order to enable them, export the env var VAGRANT_EXPERIMENTAL=typed_triggers

    (2) you are using the libvirt provider, which is likely different from the virtualbox provider that I'm using. So, the specific action that you will need to attach to is likely different. This Vagrantfile works for me (on virtualbox):

    ENV["VAGRANT_EXPERIMENTAL"] = "typed_triggers"
    Vagrant.configure("2") do |config|
      config.vm.box = "generic/rhel7"
      config.trigger.after :"Vagrant::Action::Builtin::WaitForCommunicator", type: :action do |t|
        t.warn = "installing rsync"
        t.run_remote = {inline: "install....."}
      end
      config.vm.synced_folder ".", "/vagrant", type: "rsync", disabled: false

    For the libvirt provider, I suspect the action to use is something like VagrantPlugins::ProviderLibvirt::Action::WaitTillUp

    One caveat of this method is that the trigger will run on every run of the chosen action. So, if it triggers on :"Vagrant::Action::Builtin::WaitForCommunicator", rsync will try to install on every vagrant up, reload, etc.

    daebenji
    @daebenji
    thanks, that's exactly what I wanted to do. I solved the retriggering problem by writing a file to the system after the first run; every time the trigger runs it checks whether that file exists, and only performs the action if it doesn't.
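    For reference, roughly what mine looks like now. This is a sketch only: it assumes VagrantPlugins::ProviderLibvirt::Action::WaitTillUp is the right action for libvirt (per your guess above), and /var/tmp/.rsync-installed is just a marker path I picked:
    config.trigger.after :"VagrantPlugins::ProviderLibvirt::Action::WaitTillUp", type: :action do |t|
      t.warn = "installing rsync (first run only)"
      t.run_remote = {inline: <<~EOF
        # skip if a previous run already left the marker file
        [ -f /var/tmp/.rsync-installed ] && exit 0
        subscription-manager register --username=<my-username> --password=<my-password>
        subscription-manager attach --auto
        subscription-manager release --set=7.9
        yum -y update
        yum -y -q install rsync
        touch /var/tmp/.rsync-installed
        EOF
      }
    end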
    Darragh Bailey
    @electrofelix
    Anyone know if any of the providers for Vagrant support multiple disks in the Vagrant box? Looking to see if there is an existing pattern to follow before going forward with a specific implementation for vagrant-libvirt. I'm searching around the virtualbox code to see if there is anything that looks like it does this; it might help speed things up a bit if someone can confirm either way
    Chris Roberts
    @chrisroberts
    is this in the context of multiple disks provided in a box (so they exist by default) or the disk feature?
    Darragh Bailey
    @electrofelix
    @chrisroberts the former; it's been asked for and certainly would be useful, and since there is a PR with a suggested way, I'm looking to understand what might already be adopted elsewhere
    Chris Roberts
    @chrisroberts
    ah, okay, looking at the PR I see. On the vbox/vmware side the vagrant provider doesn't have to care: the vbox/vmware metadata files keep track of the defined disks, so the vagrant providers don't have to do anything to make them available
    Darragh Bailey
    @electrofelix
    Ah, I was afraid of that. Well, I'll have a close look to see if there are any restrictions regarding the metadata JSON format before moving forward
    Chris Cranford
    @Naros
    Hi everyone, is there a way to prevent a Vagrant box on a Linux host from shutting down due to being idle, loss of network, etc.?
    Often I go to sleep with a Vagrant box running, and when I wake up the box has powered off despite not being used.
    Ben Johnson
    @cbj4074
    @Naros I can't say that I've ever seen that happen. I can't think of any reason for which the VM would actually be powered-off. Unreachable is one thing, but off is another.
    Which provider are you using, and which OS?
    Chris Cranford
    @Naros
    @cbj4074 I'm using VirtualBox on Fedora 32
    Ben Johnson
    @cbj4074
    @Naros What's the output of the vagrant status command when you wake up and find the VM to be off?
    Chris Cranford
    @Naros
    @cbj4074 poweredoff, IIRC
    It's as though Vagrant has force-halted the box when the network connection resets due to an unstable network.
    I end up having to run vagrant up and restart the box entirely when this happens.
    Ben Johnson
    @cbj4074
    @Naros Hi Chris, apologies for the late reply. That's really bizarre; I think it's worth opening an Issue in the Vagrant repo on GitHub over that behavior.
    Marc Mercer
    @Daemoen_gitlab
    hmmm, looks like the room is no longer active?
    daebenji
    @daebenji
    maybe there were no questions to answer?
    Sophia Castellarin
    @soapy1
    The best place to ask Vagrant questions is probably https://discuss.hashicorp.com/c/vagrant/24