Good evening,
has anyone successfully migrated a grommunio installation from ESXi to Proxmox?
I'm struggling.
Every time I try, I get the following message while booting.
Any idea?
Grommunio ESXi to Proxmox migration
I haven't migrated a grommunio/openSUSE VM yet, but with Debian I had no problems.
Did you configure the hardware as VirtIO?
Maybe check the consistency of your disk on the ESXi host first. If it's still a Btrfs system, never mind... /dev/grommunio/LVRoot means XFS...
But maybe you just want to install a fresh system and transfer (backup/restore) the whole thing to the new one if that keeps happening.
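For the consistency check mentioned above, a minimal sketch, assuming the root LV really is /dev/grommunio/LVRoot with XFS and the filesystem is unmounted (e.g. from a rescue system):
vgchange -ay grommunio                  # activate the LVM volume group if it is not visible yet
xfs_repair -n /dev/grommunio/LVRoot     # -n only reports problems, it does not modify anything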
Yes, I configured it as VirtIO. Debian and Ubuntu machines are working fine; I only have the issue with this one.
I already started on your suggestion an hour ago. Maybe it just makes sense to do that. I had hoped to avoid it, as it takes time...
I assume the disks changed their IDs. Find the new IDs and populate /etc/fstab with them.
Unfortunately, I cannot edit fstab...
You need to find the correct disk with
ls -l /dev/disk/by-uuid/
then mount the root partition somewhere like /mnt, edit /mnt/etc/fstab and correct the IDs, unmount the root partition and reboot.
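Putting those steps together, a rough sketch from a rescue/live system (the UUID is just a placeholder; use whatever ls actually prints for your root device):
ls -l /dev/disk/by-uuid/                        # note the UUID that points at the root device
mount /dev/disk/by-uuid/<new-root-uuid> /mnt    # mount the root filesystem
vi /mnt/etc/fstab                               # replace the stale UUIDs with the new ones
umount /mnt
reboot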
I don't know why, but I cannot see by-uuid.
This is what is visible:
This is my system:
Same for me on my freshly installed environment and also on the old ESXi server... strange.
Today we migrated a Xen cluster to Proxmox and ran into a similar problem: no disks under Proxmox. Proxmox uses the KVM disk drivers, which are not in the initrd. The solution was quite simple:
- configure the disks in Proxmox as SATA and reboot the server
- now the disks are mounted correctly
- attach a small KVM (VirtIO) disk so the KVM drivers get loaded
- recreate the initrd with
dracut --force
- shut down the server and switch the disks back to KVM
- reboot the server and all disks are correctly mounted and grommunio is running
- remove the small KVM disk from step 3.
It is probably similar with VMware, but with different drivers.
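For the "recreate the initrd" step, a small sketch of what that could look like inside the guest (assuming a dracut-based system such as the grommunio/openSUSE appliance; the explicit driver list is only a suggestion):
lsinitrd | grep -i virtio                                          # check whether the virtio modules are already in the initrd
dracut --force                                                     # rebuild the initrd for the running kernel
dracut --force --add-drivers "virtio_blk virtio_scsi virtio_pci"   # or force the drivers in explicitly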
I had almost the same approach and got the same messages and issues as @torfkop posted, with the minor difference that I tried to migrate the system from a VMware .vmdk to a Proxmox qcow2, like I have done several times before with Ubuntu (Nextcloud), Windows 10/11...
Unfortunately, @WalterH 's approach didn't work for me. Whatever I tried (IDE, SATA, SCSI, VirtIO), I didn't succeed in getting the system booted and the disks mounted correctly.
Does anyone have some hints or advice on how to deal with this situation? What I would like to avoid is setting up from scratch, doing the whole configuration again and restoring as described by @Andy and/or @crbp.
Best, Klaus
Now the system is up and running via Proxmox KVM. You are right, it should work with SATA, and it does.
However, when you are blind and make mistakes, it doesn't. :-)
My mistake was that I had switched on the vIOMMU; without it, it works.
@WalterH Thanks for your quick response and your guidance...
CU
Klaus
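In case someone else trips over the same thing, a quick sketch for checking and dropping the vIOMMU setting on the Proxmox host (assuming a recent Proxmox VE where vIOMMU is a sub-option of the machine setting; replace <vmid> with your VM ID):
qm config <vmid> | grep machine        # e.g. shows machine: q35,viommu=intel
qm set <vmid> --machine q35            # set the machine type again without the viommu option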
I would still try to move that to VirtIO SCSI and not keep it on SATA (a sketch of the switch is below).
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_hard_disk_bus
e.g.:
# qm config 109 |grep scsi
boot: order=scsi0;ide2;net0
scsi0: nas-zfs:vm-109-disk-1,cache=writeback,discard=on,iothread=1,size=12G,ssd=1
scsihw: virtio-scsi-single
https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Best_practices
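If the VM currently boots from SATA, a hedged sketch of the switch on the Proxmox host, reusing the disk/storage names from the example above (adjust VM ID and volume to your setup):
qm set 109 --scsihw virtio-scsi-single
qm set 109 --delete sata0                                        # detach the disk; it reappears as unusedN
qm set 109 --scsi0 nas-zfs:vm-109-disk-1,discard=on,iothread=1   # reattach the same volume on the SCSI bus
qm set 109 --boot 'order=scsi0;ide2;net0'
Make sure the guest's initrd already contains the virtio drivers (see the dracut step above) before switching, otherwise it won't find its disks again.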