Like many, I’ve dreamed of having a miniature datacenter in my basement. My inspiration came from a coworker who had converted his garage into a datacenter, complete with multiple window A/C units, a tile floor, racks and cabinets full of equipment, and his own public Class C, back when individuals could still request and receive one.
I lived in a house in the mountains with a fairly high crawlspace underneath, and I dreamed of someday scooping it out level, pouring cement, and putting in a short server cabinet and a two-post rack. To that end, I had a lot of equipment that I had saved from the landfill over the previous couple of years: old second-generation ProLiants, a tape changer, a QNAP that had been discarded after a very expensive data recovery operation, and an AlphaServer 4100.
However, I learned two important lessons that changed my plans: first, that home ownership can be very expensive, and something as simple as pouring a floor was beyond both my abilities and my budget; and second, that electricity in California is ruinously expensive for individuals. While commercial customers generally get good prices regardless of their usage, it doesn’t take many spinning hard drives before the higher residential tiers kick in at 35+ cents per kWh. I think my coworker got away with it because he was doing web hosting on the side out of his garage at a time when web hosting still actually paid something.
Once I realized the sheer cost of having the equipment turned on, to say nothing of the cost of air conditioning the basement when my poorly insulated house was already costing over $300 a month to cool during the summer, I gave up on my dreams and donated most of the equipment to that coworker. My homelab ended up being a single-core Dell laptop “server” with a broken screen that ran Fedora and consumed 12 watts.
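For a sense of scale, here’s a rough back-of-the-envelope sketch of what a constant load costs at that kind of rate. The 35 cents/kWh comes from the tier mentioned above; the 300 W and 800 W loads are hypothetical examples of a modest versus a loaded rack, not measurements of my old gear:

```python
# Rough monthly electricity cost for a constant load, assuming a 35 cent/kWh
# top residential tier (the rate mentioned above; your tariff will differ).
RATE_PER_KWH = 0.35  # dollars per kWh, assumed

def monthly_cost(watts: float, hours_per_day: float = 24.0, days: int = 30) -> float:
    """Approximate monthly cost in dollars for a device drawing `watts` continuously."""
    kwh = watts / 1000 * hours_per_day * days
    return kwh * RATE_PER_KWH

print(f"12 W laptop 'server':   ${monthly_cost(12):6.2f}/month")   # roughly $3
print(f"300 W of spinning gear: ${monthly_cost(300):6.2f}/month")  # roughly $75
print(f"800 W loaded rack:      ${monthly_cost(800):6.2f}/month")  # roughly $200
```

At those prices, the 12-watt laptop was an easy call.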
Fast forward to 2016, and I realized that I needed to make a change. Working as the sole IT admin at an SMB meant that I always had zero to near-zero budget and had to assemble solutions from whatever discarded hardware and free software I could cobble together. While that does provide some valuable experience, modern companies are looking for experience with VMware or Hyper-V on SAN or virtual SAN storage, not open-source NAS software and QEMU cobbled together on an old desktop.
I looked at building that homelab I had always wanted, but electricity costs had only gone up in the intervening 15 years, and I was now in a transitional state: four people living in a two-bedroom apartment. I wasn’t willing to sacrifice what little tranquility we had at home by having a couple of R710s screaming in the living room. Thus, I decided to build my own “servers.”
I had already built my “primary” desktop specifically to move some workloads off of my laptop: an i5-4570S machine with 6GB of RAM and a discarded video card that I used for light gaming and for running one or two small VMs in VirtualBox. My goal was to build two or three compact desktops I could run trialware on to familiarize myself with vCenter, SCCM and other technologies I was interested in. By keeping the machines compact, I could fit them under my desk, and by using consumer-grade hardware, I could keep the cost, noise and power usage down.
To save space, I chose the Antec VSK2000-U3 mini/micro-ATX case. Looking back, this was a huge mistake. These cases are about the size of a modern SFF Dell case, but it is a pain finding motherboards and heatsinks that fit. However, they did the job as far as fitting into the available space under the desk. They use a TFX form factor power supply, which is common in a lot of Dell desktop and SFF machines, so used power supplies are cheap and plentiful on eBay.
When choosing motherboards, my goal was to find a mini- or micro-ATX board with four RAM slots so I could eventually upgrade each machine to 32GB using 8GB sticks, which is not as easy a task as one might think. I found the first motherboard on Craigslist: a Gigabyte mini-ATX board with an i5-3470S CPU. Due to the location of the CPU on the board, I couldn’t fit it in the case without the heatsink hitting the drive tray, so I ended up swapping it with the board in my home machine, since my nice Gigabyte motherboard and i5-4570S did fit the Antec case.
Thinking I was clever, I chose an eBay take-out board from an Optiplex 9010 SFF as my second motherboard. It had four RAM slots and was cheaper than any of the other options. However, I soon found out that Dell engineered that sucker to fit their case and no others. The proprietary motherboard wouldn’t accept a standard heatsink, so I ended up getting the correct Dell heatsink/fan combination from eBay, which fit the case perfectly and used a heat-pipe setup to eject the heat out of the back of the case. I also had to get the Dell rear fan and front panel/power button assembly to get the system to power on without complaining. Fortunately, the Dell rear fan fit the front of the Antec case where Antec had provided their own fan, so no hacking was needed. Finally, the I/O shield in the Optiplex is riveted into the Dell case and can’t be purchased separately, so I’m running it without a shield. This system runs an i5-3570K that I pulled out of another dead machine rescued from the trash.

Optiplex 9010 SFF heatsink, bottom view. The end of the fan shroud on the left stops just millimeters from the rear grille of the Antec case, like they were made for each other.
Once the homelab was up and running, I upgraded RAM when I could afford it. The two homelab machines and my desktop started out with 8GB each and now have 24GB each. To further save electricity, I keep them powered down when not in use. (My primary desktop stays on, as it runs Plex and other household services.) Right now, each system has a hard drive and a 2.5″ SSD. (They have an empty optical drive bay, so additional drives can fit with a little work.) I also picked up some quad-port Intel gigabit NICs (HP NC364T), since ESXi doesn’t pick up the onboard Realtek NICs on the Gigabyte boards.
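Since the lab machines spend most of their time powered off, a Wake-on-LAN magic packet is a convenient way to bring one up without crawling under the desk. This is just a minimal sketch of that approach, not part of my original setup; it assumes WOL is enabled in the BIOS and on the NIC, and the MAC address shown is a placeholder:

```python
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a standard WOL magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Placeholder MAC address -- substitute the actual address of the lab machine's NIC.
wake_on_lan("00:11:22:33:44:55")
```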
So the big question: will these work with ESXi 6.7? In short, no. The 3570K machine crashes when booting 6.7, possibly because the CPU doesn’t have VT-d support. However, they run 6.0 and 6.5 Update 2 just fine, which will get me through my VCP studies. For the price, power consumption, heat and noise, they do the job just fine for now.
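If you want to check what a consumer CPU and board actually expose before fighting with an ESXi install, booting any Linux live image and looking at two files gives a quick answer. This is a rough sketch, not a VMware tool: the `vmx` CPU flag indicates VT-x, and the presence of an ACPI DMAR table is a reasonable hint that the firmware is advertising VT-d:

```python
from pathlib import Path

def has_vtx() -> bool:
    """VT-x shows up as the 'vmx' CPU flag in /proc/cpuinfo (AMD-V would be 'svm')."""
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("flags"):
            return "vmx" in line.split()
    return False

def has_vtd() -> bool:
    """A firmware-provided ACPI DMAR table is a strong hint that VT-d is present and enabled."""
    return Path("/sys/firmware/acpi/tables/DMAR").exists()

if __name__ == "__main__":
    print("VT-x (vmx flag): ", "yes" if has_vtx() else "no")
    print("VT-d (ACPI DMAR):", "yes" if has_vtd() else "no")
```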