“The hot stays hot and the cool stays cool.” – McDonald’s McDLT commercials
Part 1 of this saga can be found here.
We were all just a bunch of starry-eyed kids with our plan for the future: rack some equipment, buy back-to-front cooled switches, snap in some filler blanks, and the world would be a beautiful place. We were soon in for a big reality check. Let’s start with a simple task: buying some switches.
It turns out that it’s actually difficult to find switches that suck air in the back and blow it out the front. In fact, it’s fairly difficult to find switches that blow front to back at all. Many (most?) switches pull air in from the left side of the switch and blow it out the right, or vice versa. In a contained data center, that means the switches are pulling in hot air from the hot aisle and blowing out even hotter air. There are even diagrams on the Internet showing rows of chassis switches mounted in 2-post racks where the leftmost switch is getting relatively cool air and blowing warmer air into the switch to its right. This continues down the line until you get to the switch at the other end of the row, which is slowly melting into a pile of slag. Needless to say, this is not good for uptime.
There are companies that make various contraptions for contained-aisle use. These devices have either passive ducts or active fan-fed ducts that pull air from the cold aisle and duct it into the intake side of the switch. Unfortunately, switch manufacturers can’t even agree on which side of the switch to pull air from or where the grilles on the chassis are located, so unless somebody makes a cooler specific to your chassis, you have to figure out which contraption comes closest to your needs. In our case, we were dealing with a Cisco 10-slot chassis with right-to-left cooling. No contraption fit it correctly, so we used an APC 2U switch cooler, which pulls air from the front and blows it up along the intake side of the switch in the hot aisle. While not as energy efficient as contraptions with custom-fitted ducts that enclose the intake side of the switch, it works well enough and includes redundant fans and power inputs.
For the top-of-rack and core switches, only the Cisco Nexus line offered back-to-front cooling options (among Cisco switches, that is). That’s fine, since we were looking at Nexus anyway, but it’s unfortunate that it’s not an option on Catalyst switches. Front-to-back cooling is an option, but then the switch ports end up in the cold aisle, meaning that cables must be passed through the rack and into the hot aisle. It can work, but it’s not as clean.
However, buying back-to-front cooled switches is only the beginning of the process. The switches are mounted at the back of the cabinet and are shorter than the cabinet is deep, which leaves an unsealed gap between the switch and the front of the cabinet. Fortunately, the contraption industry has a solution for that as well. In our case, we went with the HotLok SwitchFix line of passive coolers. These units are expandable; they use two fitted rectangles of powder-coated steel that telescope to close the gap between the switch and the cabinet. They come in different ranges of depths to fit different combinations of rack depth and switch depth, and they typically mount inside the switch rails, leading to the intake side of the switch. Nylon brush “fingers” allow power and console cables to pass between the switch and the SwitchFix and into the hot aisle.

While this sounds like an ideal solution, in reality the heavy-gauge steel was difficult to expand and fit correctly, and we ended up using a short RJ-45 extension cable to bring the console port out of the SwitchFix and into the cold aisle for easy switch configuration. The price was a little heart-stopping as well, though it was still better than cobbling together homemade plastic-and-duct-tape contraptions to do the job.
With the switches sorted, cable managers became the next issue. The contractor provided standard 2U cable managers, but they had massive gaps in the center for cables to pass through: great for a 2-post telco rack, not so great for a sealed cabinet. We ended up using some relatively flat APC 2U cable managers and placed a flat steel 2U filler plate behind them, spaced out with #14 by 1.8″ nylon spacers from Grainger. With the rails fully forward in the cabinet, the front cover of the cable manager just touched the door but didn’t scrape or significantly flex.
Once the racks are in place and the equipment is installed, the rest of the rack needs to be filled to prevent mixing of hot and cold air. There are a lot of options, from molded plastic fillers that snap into the square mounting holes to powder-coated sheets of steel with holes drilled for mounting. Although the cost was significantly higher, we opted for the APC 1U snap-in fillers. Because they didn’t need screws, cage nuts, or tools, they were easy to install and easy to remove. With the rails adjusted all the way up against the cabinet on the cold aisle side, no additional fillers were needed around the sides.
With every rack unit filled with switches, servers, cable managers, telco equipment, and snap-in fillers, sealing the remaining gaps was the final issue to tackle from an efficiency perspective. While the tops of the cabinets were enclosed by the roof system, there was still a one-inch gap underneath each cabinet that let cold air through. An inch doesn’t sound like much, but across our 18 cabinets those gaps added up to about three square feet of open space! We bought a roll of magnetic strip to attach to the bottom of the cabinets to block that airflow, reduce dust intrusion, and clean up the look.
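If you’re wondering where the three square feet comes from, here’s the back-of-the-envelope math as a quick sketch. It assumes cabinets roughly 24 inches wide, which is my assumption of a typical width rather than a measured figure.

```python
# Rough estimate of the open area under the cabinets.
# The 24-inch cabinet width is an assumption (a common size), not a measurement.
gap_height_in = 1      # one-inch gap under each cabinet
cabinet_width_in = 24  # assumed cabinet width, in inches
num_cabinets = 18

open_area_sq_in = gap_height_in * cabinet_width_in * num_cabinets
open_area_sq_ft = open_area_sq_in / 144  # 144 square inches per square foot

print(f"{open_area_sq_in} sq in is about {open_area_sq_ft:.1f} sq ft of open space")
# -> 432 sq in is about 3.0 sq ft
```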
Lessons Learned
There’s no other way of saying this: this was a lot of work. A lot of material had to be purchased after racking began, and there are a lot of gotchas to consider when planning something like this, the biggest of which is simply being able to seal everything up. Pretty much the entire compute equipment industry has standardized on front-to-back cooling, which makes using that equipment in a contained-aisle environment simple. Unfortunately, switch manufacturers are largely just not on board. I don’t know if it’s because switching equipment is typically left out of such environments, or if manufacturers just don’t see enough demand for it in the enterprise and small business markets, but cooling switches involves an awful lot of random metal and plastic accessories with high markups and slow shipping times.
However, I have to say that having equipment sitting at rock-stable temperatures is a huge plus. We were able to raise the server room temperatures, and we don’t have the hot-spot issues that cause fans to spin up and down throughout the day. Our in-row chillers run much less than the big CRAC units in the previous data center, even though there is much more equipment in there today. The extra work helped build a solid foundation for expansion.