“This is no longer a vacation. It’s a quest. It’s a quest for fun.” -Clark W. Griswold
The 48U enclosures and the in-row CRACs are in place and bolted together, but there’s no noise except the shrill shriek of a chop saw in the next room. Drywall dust coats every surface despite the daily visits of the kind folks with mops and wet rags. The lights overhead are working, but the three-cabinet UPS and zero-U PDUs are all lifeless and dark.
Even in this state, the cabinets are being virtually filled. In a recently stood-up NetBox implementation back at HQ, top-of-rack switches are, contrary to their name, being placed in the middle of the enclosures. Servers are being virtually installed, while in the physical world, blanking panels are being snapped into place and patch panels are being installed by the cabling vendor, leaving gaping holes where there will soon be humming metal boxes with blinkenlights on display. Some gaps are bridged with blue painters’ tape, labeled in permanent marker with the eventual resting place of ISP-provided equipment.
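For the curious, that kind of virtual racking can be scripted against the NetBox REST API with the pynetbox client. This is only a sketch of how it could be done, not how our records were actually built: the URL, token, rack name, device model, role and U position below are all made-up placeholders, and the exact field names shift a little between NetBox versions.

```python
# Minimal sketch of "installing" a device into a rack elevation in NetBox
# via pynetbox. All names here are hypothetical placeholders.
import pynetbox

nb = pynetbox.api("https://netbox.example.com", token="YOUR_API_TOKEN")

# Look up the rack and the device type we plan to install.
rack = nb.dcim.racks.get(name="ROW1-CAB03")              # hypothetical rack
dev_type = nb.dcim.device_types.get(model="EX4300-48T")  # hypothetical model

# "Install" a top-of-rack switch mid-rack (U24) before the hardware arrives.
switch = nb.dcim.devices.create(
    name="row1-cab03-sw1",
    device_type=dev_type.id,
    role=nb.dcim.device_roles.get(name="ToR Switch").id,  # 'device_role' on older NetBox versions
    site=rack.site.id,
    rack=rack.id,
    position=24,      # U position, counted from the bottom of the rack
    face="front",
)
print(f"Created {switch.name} at U{switch.position} in {rack.name}")
```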
During the first couple of weeks after everything is bolted down, we’re pretty much limited to planning and measuring because the room is packed with contractors running Cat6, terminating fiber runs, plumbing the CRACs, putting batteries in the UPS, connecting the generator and wiring up the fire system; it’s barely controlled chaos. Within a couple of weeks, the pace slows a bit; it’s still a hardhat-required zone, but the fiber runs to the IDFs are done and being tested, patch panels are being terminated, and the fire system has long since been inspected and signed off by the city. A couple more weeks and we know it’s time to get serious when the sticky mat gets stuck down inside the door, the hardhat rules are rescinded and the first CRACs are fired up.
Thus begins the saga of a small band of intrepid SysAdmins working to turn wrinkled printouts, foam weatherstripping, hundreds of cage nuts, blue painters’ tape and a couple hundred feet of Velcro into a working data center. This marks the first time I’ve worked in a hot-aisle/cold-aisle data center, much less put one together. It’s something I’ve wanted to do for years, but there’s remarkably little detailed information on the web about the process; the nitty-gritty of data center design and construction is usually delegated to consultants who like to keep their trade a closely guarded secret, and indeed, we consulted with a company on the initial design and construction of our little box of heaven.
The concept of hot-aisle/cold-aisle containment is pretty straightforward and detailed in hundreds of white papers on the Internet: server and network equipment uses fans to pull cool air in one side of the unit and blow heat out the other. If you can divide your data center into two compartments, one that directs all of the cooled air from your A/C into the cold intake side of the equipment, and one that directs all of the heated exhaust from your equipment back into the A/C return, you increase the efficiency and reduce the cost of running your A/C, and you keep hot exhaust air from recirculating into the equipment intakes. Ultimately, if done right, you can turn up the temperature in the cold aisle, further reducing your costs, since you no longer have “hot spots” where equipment is picking up hotter exhausted air. The methods for achieving this vary greatly.
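As a rough illustration of why the separation matters, here’s a back-of-the-envelope sketch using the standard sensible-cooling rule of thumb for air, CFM ≈ BTU/hr ÷ (1.08 × ΔT°F): the better the containment keeps exhaust out of the cold aisle, the larger the return-to-supply temperature difference, and the less air the CRACs have to move for the same IT load. The 40 kW load and the ΔT values are made-up numbers, not figures from this build.

```python
# Rule-of-thumb airflow calculation for standard air:
#   heat removed (BTU/hr) = 1.08 * CFM * delta-T (°F)
# so the required CFM falls as the supply/return delta-T grows.

WATTS_TO_BTU_HR = 3.412  # 1 W of IT load is ~3.412 BTU/hr of heat

def required_cfm(it_load_watts: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove the IT heat load at a given air delta-T (°F)."""
    btu_hr = it_load_watts * WATTS_TO_BTU_HR
    return btu_hr / (1.08 * delta_t_f)

it_load = 40_000  # hypothetical 40 kW room load

for dt in (10, 20, 25):  # poorly mixed vs. well-contained delta-T, in °F
    print(f"delta-T {dt:>2} °F -> {required_cfm(it_load, dt):,.0f} CFM")

# Example output:
#   delta-T 10 °F -> 12,637 CFM
#   delta-T 20 °F -> 6,319 CFM
#   delta-T 25 °F -> 5,055 CFM
```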
More importantly, it turns out that there are some caveats that can either significantly increase the initial cash outlay or reduce overall efficiency.
Stay tuned as I dig into the details of this new project.