From ESXi to Proxmox — My Weekend Down the Rabbit Hole
I’ve been running VMware ESXi on my Dell PowerEdge R640 for a while now, and it’s served me well. But lately, I’ve felt the itch. You know the one — where something starts to feel a little too familiar, a little too stable. It’s not that ESXi was doing anything wrong. I was just ready to grow and challenge myself with something new.
Proxmox kept coming up — in Slack threads with coworkers, in casual mentions from my former CISO. Enough times that it started sticking in my head. And when you hear the same tool dropped into conversations by people you respect, it eventually gets a hold of you.
So I decided: this weekend, I’d move my homelab from ESXi to Proxmox.
What followed was a chaotic, frustrating, and ultimately satisfying learning experience that made me appreciate just how far I’ve come.
A Little Backstory
At my first IT job, I was Tier 1 — resetting passwords, calming users down, and not touching anything too important. My only interaction with servers back then? One morning I walked into the server room, coffee in hand, totally oblivious. The sysadmin lit me up for bringing a drink near the racks. That was the full extent of my server-side experience.
So when I realized I'd need to physically open the Dell PowerEdge to troubleshoot a BIOS issue, I felt that same "you're about to mess something up" tension. But I did it. For the first time, I popped open a server, pulled the CMOS battery to reset the BIOS, and got my USB boot media recognized.
It was a minor victory, but it felt huge.
Proxmox: First Impressions
Once I had Proxmox installed, things started getting real. I decided to migrate my FlareVM and REMnux boxes over first. This meant exporting the VMs from ESXi, SCP’ing them over to the new server, and figuring out how the hell to get Proxmox to recognize and boot them.
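On paper, the flow looks simple enough. Here's a rough sketch of the kind of commands involved, with the hostname, VM ID, storage names, and paths all stand-ins for my actual setup (FlareVM is Windows-based, hence my guess at the ostype flag):

```
# On the Proxmox host: pull the VM's disk files off the ESXi datastore
# (hostname, datastore, and VM folder here are examples)
scp 'root@esxi:/vmfs/volumes/datastore1/FlareVM/FlareVM*.vmdk' /var/tmp/

# Create an empty VM shell to import into
qm create 100 --name flarevm --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 --ostype win10

# Convert the VMDK and import it into a storage pool
qm importdisk 100 /var/tmp/FlareVM.vmdk local-lvm

# Attach the imported disk and put it first in the boot order
qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0
```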
Spoiler: it did not go smoothly.
There were boot loops. BIOS vs. UEFI misconfigurations. Drives that weren’t marked bootable. EFI disk warnings. More boot loops. A blue screen or two.
I’d switch a setting, try again. Boot into SeaBIOS? Fail. Switch back to OVMF (UEFI)? Fail again. Eventually, after enough trial and error to make me question all my life choices, FlareVM finally booted. And there it was — the login screen.
Finally seeing this after hours of troubleshooting felt like winning a CTF challenge.
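For the record, the knob I kept flipping lives in the VM config, and you can toggle it from the shell. Roughly this, reusing the made-up VM ID 100 from earlier:

```
# Legacy BIOS (SeaBIOS), the Proxmox default
qm set 100 --bios seabios

# ...or UEFI (OVMF), which also wants a small EFI vars disk on some storage
qm set 100 --bios ovmf
qm set 100 --efidisk0 local-lvm:1,format=raw
```

The lesson that eventually sank in: the firmware has to match however the guest was originally installed. A UEFI-installed Windows won't come up under SeaBIOS, and a legacy BIOS install won't come up under OVMF, no matter how many times you flip the switch.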
The REMnux Puzzle
REMnux was next. I had originally installed it on a lightweight Ubuntu base image inside ESXi. I SCP’d the QCOW2 and VMDK files over and manually mapped them to the new VM in Proxmox. Naturally, it didn’t work the first time. But eventually, after some tweaking and double-checking UUIDs, it booted too.
It lives!
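The mechanics were the same sketch as before, just with a QCOW2 in the mix; again, the IDs and paths below are made up:

```
# REMnux sits on a lightweight Ubuntu base, so default create flags do fine
qm create 101 --name remnux --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0

# importdisk converts qcow2 and vmdk alike on the way in
qm importdisk 101 /var/tmp/remnux.qcow2 local-lvm
qm set 101 --scsi0 local-lvm:vm-101-disk-0 --boot order=scsi0

# The UUID double-checking happened inside the guest: compare what the
# disks actually report against what /etc/fstab and GRUB expect
blkid
cat /etc/fstab
```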
Oh, and of course — I was nearly out of space on local. Proxmox was yelling at me with red text and low disk warnings. I hadn't yet moved everything to local-lvm, which had 850GB just sitting there untouched. Lessons learned, notes taken.
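For the notes file, two commands that would have spared me the red text, assuming a disk from the example VM had landed on the wrong pool:

```
# See how much space each storage pool actually has left
pvesm status

# Shuffle a VM disk from the nearly-full pool over to local-lvm,
# dropping the old copy once the move succeeds
qm move-disk 100 scsi0 local-lvm --delete 1
```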
Final Thoughts
Was this move necessary? Maybe not.
Was it easy? Absolutely not.
But did I learn a hell of a lot and walk away with a better understanding of my server, virtualization tech, and the pain of bootloader troubleshooting? Definitely.
If you’re running ESXi and wondering if it’s worth checking out Proxmox — especially in a homelab setting — I’d say yes. But only if you’re okay with some trial by fire. And maybe keep a screwdriver handy.
And don’t take coffee into the server room.