I haven't migrated a Grommunio/openSUSE VM yet, but with Debian I had no problems.

Did you configure the virtual hardware as virtio?

Maybe check the consistency of your disk on the ESXi side first. If it's still a Btrfs system, never mind; /dev/grommunio/LVRoot points to XFS on LVM...
But if this keeps happening, you may just want to install a fresh system and transfer (backup/restore) the whole thing to the new one.
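
If it is XFS, a read-only consistency check from a rescue system might look like this (a sketch; the volume must be unmounted, and the LV path is taken from above):

    # activate the volume group first if needed
    vgchange -ay grommunio
    # "no modify" mode: reports corruption without touching the disk
    xfs_repair -n /dev/grommunio/LVRoot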

Yes, I configured it as virtio. Debian and Ubuntu machines are working fine; I only have an issue with this one.
I already started on your suggestion an hour ago. Maybe it just makes sense to do that. I had hoped to avoid it, as it takes time...

I assume the disks changed their IDs. Find the new IDs and populate /etc/fstab with them.

Unfortunately, I cannot edit fstab...

You need to find the correct disk with:
ls -l /dev/disk/by-uuid/, then mount the root partition somewhere like /mnt, edit /mnt/etc/fstab to correct the IDs, unmount the root partition, and reboot.
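
Roughly like this from a rescue shell (a sketch; /dev/sda2 is a stand-in for whatever device the by-uuid listing reveals as root):

    # list the current UUID -> device mappings
    ls -l /dev/disk/by-uuid/
    # mount the root partition (adjust the device name)
    mount /dev/sda2 /mnt
    # replace the stale UUIDs in fstab with the ones listed above
    vi /mnt/etc/fstab
    umount /mnt
    reboot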

I don't know why, but I cannot see by-uuid.
This is what is visible:

Same for me on my freshly installed environment and also on the old ESXi server... strange.

Today we migrated a Xen cluster to Proxmox and ran into a similar problem: no disks under Proxmox. Proxmox uses the KVM disk drivers, which are not in the initrd. The solution was quite simple:

  1. configure the disks in Proxmox as SATA and reboot the server
  2. now the disks are mounted correctly
  3. mount a small KVM disk to load the KVM drivers
  4. recreate the initrd with dracut --force (see the sketch below)
  5. shut down the server and switch the disks back to KVM
  6. reboot the server and all disks are correctly mounted and grommunio is running
  7. remove the small KVM disk from step 3

It is probably similar with VMware, but with different drivers.
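
For step 4, a minimal sketch of the initrd rebuild (the explicit --add-drivers list is an assumption, to make sure the virtio block drivers end up in the image):

    # rebuild the initrd for the running kernel
    dracut --force
    # or pull the virtio block drivers in explicitly
    dracut --force --add-drivers "virtio_blk virtio_scsi virtio_pci"
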
    4 months later

    I had almost the same approach and the same messages and issues as @torfkop posted, but with the minor difference that I tried to migrate the system from a VMware .vmdk to a Proxmox qcow2, like I did several times before with Ubuntu (Nextcloud) and Windows 10/11 ...

    Unfortunately, @WalterH's approach didn't work for me. Whatever I tried (ide, sata, scsi, virtio), I didn't succeed in getting the system booted and the disks mounted correctly.

    WalterH Today we migrated a Xen cluster to Proxmox and ran into a similar problem, no disks under Proxmox. [...]

    Does anyone have some hints or advice on how to deal with this situation? What I would like to avoid is setting the system up from scratch, doing the whole configuration again, and restoring like described by @Andy and/or @crbp.

    Best klaus

      klaus I tried (ide, sata, scsi, virtio), I didn't succeed in getting

      If the system does not boot with the SATA driver, you have a big problem: every Linux kernel contains the SATA driver.
      Can you post a screenshot of the failed boot on Proxmox?

      Now the system is up and running via Proxmox KVM. You are right, it should work with SATA, and it does.
      However, when you are blind and make mistakes, it doesn't. :-)
      My mistake was that I had switched on vIOMMU; without it, it works.

      @WalterH Thanks for your quick response and your guidance ...

      WalterH configure the disks in Proxmox as SATA and reboot the server

      CU
      Klaus

      I would still try to move that to virtio-scsi and not keep it on SATA:
      https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_hard_disk_bus

      e.g.:

      # qm config 109 |grep scsi
      boot: order=scsi0;ide2;net0
      scsi0: nas-zfs:vm-109-disk-1,cache=writeback,discard=on,iothread=1,size=12G,ssd=1
      scsihw: virtio-scsi-single

      https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Best_practices
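
      If the VM is still on SATA, the switch could look roughly like this (a sketch; VM ID 109 and the volume name are taken from the config above, so check your own values):

      # detach the SATA disk; it reappears as unused0
      qm set 109 --delete sata0
      # switch the controller and re-attach the disk on the virtio-scsi bus
      qm set 109 --scsihw virtio-scsi-single
      qm set 109 --scsi0 nas-zfs:vm-109-disk-1
      # make sure the boot order still points at the disk
      qm set 109 --boot 'order=scsi0;ide2;net0'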

        crpb I would still try to move that to virtio-scsi and not keep it on SATA

        Thanks for the hint. I already did it as soon as I had the system up and running with SATA.
        I just had to recreate the initrd like @WalterH mentioned in order to get the drivers loaded and running.
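
        One way to check that the rebuild worked (a sketch; lsinitrd ships with dracut):

        # confirm the virtio modules made it into the current initrd
        lsinitrd | grep -i virtio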

        It looks like this now:

          4 days later

          klaus Yes, virtio-scsi is the best solution. Same situation with the virtio LAN drivers: they are much faster than the Intel or Realtek ones.
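
          Switching the NIC is one qm command (a sketch; VM ID 109 and bridge vmbr0 are assumptions):

          # replace the emulated NIC with a virtio one on the default bridge;
          # a new MAC address is generated unless you specify one
          qm set 109 --net0 virtio,bridge=vmbr0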
