About Tim Clevenger

I'm a 20+ year veteran of the IT industry who enjoys modern and vintage technology.

Automatically Disable Inactive Users in Active Directory

While Microsoft provides the ability to set an expiration date on an Active Directory user account, there’s no built-in facility in Group Policy or Active Directory to automatically disable a user who hasn’t logged in within a defined period of time. This is surprising, since many companies have such a policy and some information security standards, such as PCI DSS, require it.

There are software products on the market that provide this functionality, but for my homelab, my goal is to do this on the cheap. Well, not the cheap so much as the free. After reading up on the subject, I found that this is not quite as straightforward as it may seem. For instance, Active Directory doesn’t actually provide very good tools out of the box for determining when a user last logged on.

The Elusive Time Stamp

Active Directory actually provides three different timestamps for determining when a user last logged on, and none of them are awesome. Here are the three available AD attributes:

lastLogon – This provides a time stamp of the user’s last logon, with the caveat that it is not a replicated attribute. Each domain controller retains its own copy of this attribute, holding the timestamp of the last time the user logged onto that particular domain controller. This means that any script that uses this attribute will need to pull the attribute from every domain controller in the domain and then use the most recent of those timestamps to determine the actual last logon.
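
If you do want the true last logon from lastLogon, you have to ask every domain controller and keep the newest value. Here’s a rough sketch of that approach (the account name “jdoe” is a placeholder, and it assumes the ActiveDirectory module is installed):

# Sketch: find the most recent lastLogon for one user by querying every DC.
Import-Module ActiveDirectory

$latest = 0
foreach ($dc in Get-ADDomainController -Filter *) {
   $user = Get-ADUser "jdoe" -Server $dc.HostName -Properties lastLogon
   if ($user.lastLogon -gt $latest) { $latest = $user.lastLogon }
}

# lastLogon is a 64-bit FILETIME value, so convert it for display.
if ($latest -gt 0) { [DateTime]::FromFileTime($latest) }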

lastLogonTimeStamp – This is a replicated version of the lastLogon timestamp. However, it is not replicated immediately. To reduce domain replication traffic, the update frequency is governed by a domain attribute called msDS-LogonTimeSyncInterval: the value is only refreshed (and therefore replicated) when the stored timestamp is older than that interval minus a random offset of up to five days. By default the msDS-LogonTimeSyncInterval attribute is unset, which makes it default to 14 days. Therefore, by default, lastLogonTimeStamp is only updated somewhere between 9 and 14 days after the previously replicated value. Needless to say, this is not useful for our purposes. In addition, this attribute is stored as a 64-bit signed numeric value that must be converted to a proper date/time to be useful in PowerShell.
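
For example, here’s a quick sketch of that conversion (again, “jdoe” is a placeholder account name):

# Sketch: read the raw replicated value and convert it to a date/time.
$raw = (Get-ADUser "jdoe" -Properties lastLogonTimeStamp).lastLogonTimeStamp
[DateTime]::FromFileTime($raw)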

lastLogonDate – A lot of blogs will state that this is not a real replicated attribute. Technically that’s true; it’s a constructed property that the ActiveDirectory PowerShell module builds for you by converting lastLogonTimeStamp to a standard date/time. Otherwise, it’s the same as lastLogonTimeStamp and has the same 9-to-14-day replication delay.

Another caveat for all three of these attributes is that the timestamp is updated for many logon operations, including interactive and network logons and those passed from another service such as RADIUS or another Kerberos realm. Also, there’s a Kerberos operation called “Service-for-User-to-Self” (“S4U2Self”) which allows services to request Kerberos tickets for a user to perform group and access checks for that user without supplying the user’s credentials. For more information, see this blog post.

Caveats aside, this is what we have to work with. For the purposes of my relatively small domain, I’m comfortable with increasing the replication frequency of lastLogonTimeStamp. Be aware before using this in production that it will increase replication traffic, especially during periods when many users are logging in simultaneously; domain controllers will be replicating this attribute daily instead of every 9 to 14 days.

As none of these caveats applied in my homelab, I launched Active Directory Users and Computers. Under “View”, I selected “Advanced Features” to expose the attributes I needed to view or change. I then right-clicked my domain and selected “Properties.” The msDS-LogonTimeSyncInterval was “not set” as expected, so I changed it to “1” to ensure that the timestamp was replicated daily for all users.
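
If you’d rather make the change from PowerShell instead of the GUI, a minimal sketch (run as a domain admin) looks like this:

# Sketch: set msDS-LogonTimeSyncInterval to 1 day on the domain object.
Import-Module ActiveDirectory
$domainDN = (Get-ADDomain).DistinguishedName
Set-ADObject -Identity $domainDN -Replace @{'msDS-LogonTimeSyncInterval' = 1}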

I then created the PowerShell script below and saved it to a directory.

# disableUsers.ps1  
# Set msDS-LogonTimeSyncInterval (days) to a sane number.  By
# default lastLogonDate only replicates between DCs every 9-14 
# days unless this attribute is set to a shorter interval.

# Also, make sure to create the EventLog source before running, or
# comment out the Write-EventLog lines if no event logging is
# needed.  Only needed once on each machine running this script.
# New-EventLog -LogName Application -Source "DisableUsers.ps1"

# Remove "-WhatIf"s before putting into production.

Import-Module ActiveDirectory

$inactiveDays = 90
$neverLoggedInDays = 90
$disableDaysInactive=(Get-Date).AddDays(-($inactiveDays))
$disableDaysNeverLoggedIn=(Get-Date).AddDays(-($neverLoggedInDays))

# Identify and disable users who have not logged in in x days

$disableUsers1 = Get-ADUser -SearchBase "OU=Users,OU=Demo Accounts,DC=lab,DC=clev,DC=work" -Filter {Enabled -eq $TRUE} -Properties lastLogonDate, whenCreated, distinguishedName | Where-Object {($_.lastLogonDate -lt $disableDaysInactive) -and ($_.lastLogonDate -ne $NULL)}

$disableUsers1 | ForEach-Object {
   Disable-ADAccount $_ -WhatIf
   Write-EventLog -Source "DisableUsers.ps1" -EventId 9090 -LogName Application -Message "Attempted to disable user $_ because the last login was more than $inactiveDays days ago."
   }

# Identify and disable users who were created more than x days ago and have never logged in.

$disableUsers2 = Get-ADUser -SearchBase "OU=Users,OU=Demo Accounts,DC=lab,DC=clev,DC=work" -Filter {Enabled -eq $TRUE} -Properties lastLogonDate, whenCreated, distinguishedName | Where-Object {($_.whenCreated -lt $disableDaysNeverLoggedIn) -and ($_.lastLogonDate -eq $NULL)}

$disableUsers2 | ForEach-Object {
   Disable-ADAccount $_ -WhatIf
   Write-EventLog -Source "DisableUsers.ps1" -EventId 9091 -LogName Application -Message "Attempted to disable user $_ because user has never logged in and $neverLoggedInDays days have passed."
   }

You may notice two blocks of similar code. The lastLogonDate is null for newly created accounts that have never logged in. Rather than have them all handled in a single block, I created two separate handlers for accounts that have logged in and those that haven’t. This might be useful for some organizations that want to disable inactive accounts after 90 days but disable accounts that have never logged in after only 14 or 30 days.

Note also that I have included event logging in this script. This is completely optional, but if you are bringing Windows logs into a SIEM like Splunk, it’s useful to have custom scripts write to the Windows event logs so that the security team can track and act on these events. To log Windows events from a PowerShell script, you first need to register the script as an event source. This is a simple one-time command on each machine running the script. Here’s the command I used to register mine:

New-EventLog -LogName Application -Source "DisableUsers.ps1"

This gives my script the ability to write events into the Application log, and the source will show as “DisableUsers.ps1”. The LogName parameter can be used to log events to a different standard Windows log, or even to create a completely separate log. I also chose two event IDs, 9090 and 9091, for the two event types my script logs. I did a quick Google search to make sure that these weren’t already used by Windows, though duplicate IDs from different sources are fine anyway.
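
As an example, here’s what creating a dedicated log might look like (the log name “Homelab” is made up for illustration):

# One-time setup: create a custom event log.  Note that an event source can
# only be registered to one log, so use a new source name here if the script
# has already been registered against the Application log.
New-EventLog -LogName "Homelab" -Source "DisableUsers-Homelab"

# Later writes then target the custom log instead of Application.
Write-EventLog -LogName "Homelab" -Source "DisableUsers-Homelab" -EventId 9090 -EntryType Information -Message "Test event"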

This script by default has a “-WhatIf” switch on each Disable-ADAccount command. This is so the script can be tested against the domain to make sure it’s behaving in an expected manner. During the run, the script will display the accounts it would have disabled, and the event log will generate an event for each disablement regardless of the WhatIf switch. Once thoroughly tested, the “-WhatIf”s can be removed to make the script active. The SearchBase should also be changed to an appropriate OU in your domain. Note that this script does not discriminate; any users in its path are subject to being disabled, including service accounts and the Administrator account if they have not logged in during the activity period.

Finally, it’s time to put the script into play. It’s as easy as creating a daily repeating task that launches powershell.exe with the arguments “-ExecutionPolicy Bypass c:\path\to\DisableUsers.ps1”. If this is run on a domain controller, it can be run as the NT AUTHORITY\System user so that no credentials need to be stored or updated. I ran this in my homelab test domain and viewed the event log. Sure enough, several of my test accounts were disabled. Note that I used the weasel words “attempted to” in the log; this is because I don’t actually do any checking to verify that the account was disabled successfully.
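
Back to the scheduled task: if you’d rather script its creation than click through Task Scheduler, here’s a rough sketch using the ScheduledTasks cmdlets (the task name, run time and script path are placeholders):

# Sketch: register a daily task that runs the script as SYSTEM.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass c:\path\to\DisableUsers.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName "Disable Inactive AD Users" -Action $action -Trigger $trigger -User "NT AUTHORITY\SYSTEM" -RunLevel Highest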

I did see one blog post where the author added a command to update the description field of an account so that administrators could see at a glance that it was auto-disabled. I didn’t do that here, but if you wanted to add that, here’s a sample command you can add inside each ForEach-Object block:

Set-ADUser $_ -Description ("Account auto-disabled by DisableUsers.ps1 on $(Get-Date)")

Hopefully this will help others work around a glaring oversight by Microsoft. Please drop me a comment if you have any suggestions for improvement; I’m not a PowerShell coder by trade and I’m always looking for tips.

Building and Populating Active Directory for my Homelab

There are a multitude of Active Directory how-to blog posts and videos out there. This is my experience putting together a quick and dirty Active Directory server, populating it with OUs and users, and getting users randomly logged in each day. I’m going to skip the standard steps for installing AD as it’s well documented in other blogs, and it’s the populating of AD that’s more interesting to me.

Why put in all this work you ask? I’m doing some experimentation in AD, and rather than blow up a production environment, I want a working and populated AD at home. I also need users to “log in” randomly so I can use the login activity to write some scripts. So, here’s how I did it.

Step 1 was to actually install Active Directory. Again, there are numerous blogs about how to do this. Since I was running Server 2016 in evaluation mode, I wasn’t going to do a lot of customizing since it was just going to expire eventually anyway. I had already configured 2016 for DNS and DHCP for my lab network, so installing AD was as easy as adding the role and management utilities from the Server Manager. Utilizing the “Next” button repeatedly, I created a new forest and domain for the lab.

The next step was to populate the directory. For this, I went with the “CreateDemoUsers” script from the Microsoft TechNet Gallery. I downloaded the PowerShell script and accompanying CSV and executed the script. In minutes, I had a complete company including multiple OUs filled with over a thousand user accounts and groups. Each user had a complete profile, including title, location and manager.

With my sample Active Directory populated, my next goal was to record random logins for these users. I needed this initially for some scripts I was writing, but my eventual goal was to generate regular activity to bring into Splunk. For my simulated login activity, I wrote a script to pick a user at random and execute a dummy command as that user so that a login was recorded on their user record. I began by modifying the “Default Domain Controllers Policy” in Group Policy Management. I added “Domain Users” to “Allow log on locally.” (This is found in Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> User Rights Assignment.) This step isn’t necessary if the script is running from a domain member, but since I didn’t have any computers added to the domain, this would allow me to run the script directly on the domain controller.

Here is the script that I created. I put it in c:\temp so it could be run by any user account.

# Login.ps1

Import-Module ActiveDirectory

# Pick one user at random from the directory.
$aduser = Get-ADUser -Filter * | Select-Object -Property UserPrincipalName | Sort-Object {Get-Random} | Select-Object -First 1
$username = $aduser.UserPrincipalName
$password = ConvertTo-SecureString 'Password1' -AsPlainText -Force

# Run a harmless command as that user so a logon is recorded for them.
$credential = New-Object System.Management.Automation.PSCredential $username, $password
Start-Process hostname.exe -Credential $credential

I wanted my user base as a whole to log in on average every 90 days; with a bit over a thousand users, that works out to about 11 logins per day. Easy enough: I created a scheduled task to run at 12:00am every day and repeat every 2 hours, which comes to 12 runs per day. That was close enough for my purposes.

Since I was running this on a lone DC in a lab, I ran it as NT AUTHORITY\SYSTEM so I didn’t have to mess with passwords. The task runs powershell.exe with the arguments -ExecutionPolicy Bypass c:\temp\login.ps1.

After saving the task, I right-clicked and clicked “Run.” Because I chose hostname.exe as the execution target, the program opened and immediately closed. While this briefly flashed on my screen when testing, nothing appears on the screen when running this as a scheduled task. A quick look through the event log confirmed that user “LEClinton” logged in, and a look at Lisa Clinton’s AD record shows that she did indeed log in.

While my eventual goal is to script all of the installation and configuration, this was enough to let me finish the other work I needed to do, as well as provide a semi-active AD environment for further testing and development.

Homelab Again

“We’re putting the band back together.”

Five o’clock. Time to shut down the ol’ laptop and go watch some TV. Or not. I feel the urge to create again. I have a bunch of hardware lying around that’s gathering dust, and it’s time to assemble it into something I can use to learn and share.

I’m not looking for anything flashy here. Money is tight and the future is uncertain for me and millions of others, so while I’d love a few old Dell servers or some shiny new NUCs, I’m sure I can cobble together something useful out of my closet for free. Initially I was going to put together a single machine and set up nested virtualization, but my best available equipment only has four cores and frankly, I feel like it’s cheating anyway. I’m a hardware junkie, so I cobble.

With a day’s work, I have scraped together a cluster of machines. One is from my original homelab build: a Dell Optiplex 9010 motherboard and i5 processor bought off eBay and transplanted into a slim desktop case. It has four DIMM slots, so it’s going to be one of my ESXi hosts. The other VM host is a Dell Optiplex 3020 SFF i5 desktop. Alas, it only has two DIMM slots, so it maxes out at 16GB with the sticks I have. I also managed to scrape together a third system with eight gigs of RAM and a lowly Core 2 Duo. Transplanted into a Dell XPS tower case formerly occupied by a Lynnfield i7, it has enough drive bays that I can drop in a one terabyte SATA SSD and three 800GB SAS SSDs in RAID-0 on an LSI HBA. This system takes the role of an NFS-based NAS running on CentOS 8 and has no problems saturating a gigabit link.

It’s not all roses–my networking is all gigabit–but it totals eight cores, 48GB of usable RAM and over 3TB of flash storage. A few hours after that, I have a two-node ESXi 6.7 cluster configured and vCenter showing all green. I toyed briefly with installing 7.0, but I want to give that a little more time to bake, and besides, if what I’m planning doesn’t directly translate then I’ve done something wrong.

What are my goals here? Well, I want to work on a “let’s automate VMware” series. Not satisfied with the kind of janky scripts I’ve cobbled together to automate some of my uglier processes, I want to learn how to do this the right way. That certainly means Terraform, some kind of configuration management (Ansible, Salt, Puppet, Chef or a combination), building deployment images with Packer, and figuring out how to safely do all of this with code that I’m storing in Github. Possible future projects might involve integrating with ServiceNow, Splunk, monitoring (Zabbix, Solarwinds) or other business tools, and using the resulting tooling to deploy resources into AWS, Azure, DigitalOcean and/or VMware Cloud.

Why do this on my own? While I do have the resources at work, I feel more comfortable doing this on my own time and at my own pace. Plus, by using personal time and resources, I feel safer sharing my findings publicly with others. While my employer has never given me any indication that it would be an issue, I’d just rather build and refine my skills on my own time.

So stay tuned; there’s more content to come from the homelab!

How to Disable Screen Resolution Auto-Resizing on ESXi Virtual Machines (for Linux this time)

It seemed like a simple request. All I wanted was for my Linux virtual machine to stay at the 1920×1080 resolution I specified. VMware Tools, however, had different ideas and resized my VM’s screen resolution every time I resized the window in the remote console. Normally this is no biggie, but users were connecting into this box with VNC and the VNC server was freaking out when the resolution changed. Google has dozens of blog posts and forum questions about resolving this issue in Windows VMs and in VMware Fusion/Workstation, but I couldn’t find anything on fixing this in ESXi 6.7 even after an hour of searching.

Finally I dug into my local filesystem and found the culprit: a dynamic library called libresolutionKMS.so. On my CentOS 7 VM, it was located in /usr/lib64/open-vm-tools/plugins/vmsvc. Like the Windows equivalent, moving or deleting this file and rebooting was sufficient to stop this issue in its tracks. In my case, I moved it to my home directory rather than deleting it so I could restore it if anything broke, but it appears that this stopped the auto-resizing without affecting any other operations, and VMware still reported VMware Tools as running.

As this is Linux, open-vm-tools is updated by my package manager, so this file will need to be moved or deleted again after each open-vm-tools update. Given the lack of any other way to disable this behavior, this seems like a workable kludge for my situation.

Working in the Hot Aisle: Sealing It Up

“The hot stays hot and the cool stays cool.” -McDonalds McDLT commercials

Part 1 of this saga can be found here.

We were all just a bunch of starry-eyed kids with our plan for the future:  Rack some equipment, buy back-to-front cooled switches, snap in some filler blanks and the world would be a beautiful place.  We soon were in for a big reality check.  Let’s start with a simple task: buying some switches.

It turns out that it’s actually difficult to find switches that suck air in the back and blow it out the front.  In fact, it’s even fairly difficult to find switches that blow front to back.  Many (most?) switches pull air from the left side of the switch and blow it out the right or vice versa.  In a contained data center, that means that the switches are pulling in hot air from the hot aisle and blowing out even hotter air.  In fact, there are diagrams on the Internet showing rows of chassis switches mounted in 2-post racks where the leftmost switch is getting relatively cool air and blowing warmer air into the switch to its right.  This continues down the line until you get to the switch at the other end of the row, which is slowly melting into a pile of slag.  Needless to say, this is not good for uptime.

There are companies that make various contraptions for contained-aisle use.  These devices have either passive ducts or active fan-fed ducts that pull air from the cold aisle and duct it into the intake side of the switch.  Unfortunately, switch manufacturers can’t even agree on which side of the switch to pull air from or where the grilles on the chassis are located.  Alas, this means that unless somebody is making a cooler specific to your chassis, you have to figure out which contraption is closest to your needs.  In our case, we were dealing with a Cisco 10-slot chassis with right-to-left cooling.  There were no contraptions that fit correctly, so an APC 2U switch cooler was used.  This cooler pulls air from the front and blows it up along the intake side of the switch in the hot aisle.  While not as energy efficient as contraptions with custom fitted ducts that enclose the input side of the switch, it works well enough and includes redundant fans and power inputs.

For the top of rack and core switches, only the Cisco Nexus line offered back-to-front cooling options (among Cisco switches, that is.)  That’s fine since we were looking at Nexus anyway, but it’s unfortunate that it’s not an option on Catalyst switches.  Front-to-back cooling is an option, but then the switch ports are in the cold aisle, meaning that cables must be passed through the rack and into the hot aisle.  It can work, but it’s not as clean.

However, buying back-to-front cooled switches is but the beginning of the process.  The switches are mounted to the back of the cabinet and are shorter than the cabinet, meaning that there is a gap around the back of the switch where it’s not sealed to the front of the cabinet.  Fortunately, the contraption industry has a solution for that as well.  In our case, we went with the HotLok SwitchFix line of passive coolers.  These units are expandable; they use two fitted rectangles of powder-coated steel that telescope to close the gap between the switch and the cabinet.  They come in different ranges of depths to fit different rack depth and switch depth combinations and typically mount inside the switch rails leading to the intake side of the switch.  Nylon brush “fingers” allow power and console cables to pass between the switch and the SwitchFix and into the hot aisle.

The rear of the switch as viewed from the cold aisle side. The SwitchFix bridges the gap between the rear of the switch and the front of the rack.

While this sounds like an ideal solution, in reality the heavy gauge steel was difficult to expand and fit correctly, and we ended up using a short RJ-45 extension cable to bring the console port out of the SwitchFix and into the cold aisle for easy switch configuration.  The price was a little heart-stopping as well, though it was still better than cobbling together homemade plastic and duct-tape contraptions to do the job.

With the switches sorted, cable managers became the next issue.  The contractor provided standard 2U cable managers, but they had massive gaps in the center for cables to pass through–great for a 2-post telco rack, but not so great for a sealed cabinet.  We ended up using some relatively flat APC 2U cable managers and placed a flat steel 2U filler plate behind them, spaced out with #14 by 1.8″ nylon spacers from Grainger.  With the rails fully forward in the cabinet, the front cover of the cable manager just touched the door, but didn’t scrape or significantly flex.

Once racks are in place and equipment is installed, the rest of the rack needs to be filled to prevent mixing of hot and cold air. There are a lot of options, from molded plastic fillers that snap into the square mounting holes to powder-coated sheets of steel with holes drilled for mounting. Although the cost was significantly higher, we opted for the APC 1U snap-in fillers. Because they didn’t need screws, cage nuts or tools, they were easy to install and easy to remove. With the rails adjusted all the way up against the cabinet on the cold aisle side, no additional fillers were needed around the sides.

With every rack unit filled with switches, servers, cable managers, telco equipment and snap-in fillers, sealing the remaining gaps was our final issue to tackle from an efficiency perspective.  While the tops of the cabinets were enclosed by the roof system, there was still a one-inch gap underneath the cabinets that let cold air through.  Even though the gap under the cabinet was only an inch high, our 18 cabinets had gaps equivalent to about three square feet of open space!  We bought a roll of magnetic strip to attach to the bottom of the cabinets to block that airflow, reduce dust intrusion and clean up the look.

Lessons Learned

There’s no other way of saying this: this was a lot of work. A lot of stuff had to be purchased after racking began. There are a lot of gotchas that have to be considered when planning this, and the biggest one is just being able to seal everything up. Pretty much the entire compute equipment industry has standardized on planning their equipment around front-to-back cooling, which makes using them in a contained-aisle environment simple. Unfortunately, switch manufacturers are largely just not on board. I don’t know if it’s because switching equipment is typically left out of such environments, or if they just don’t see enough of it in the enterprise and small business markets, but cooling switches involves an awful lot of random metal and plastic accessories that have high markups and slow shipping times.

However, I have to say that having equipment sitting at rock-stable temperatures is a huge plus. We were able to raise the server room temperatures, and we don’t have the hot-spot issues that cause fans to spin up and down throughout the day. Our in-row chillers run much less than the big CRAC units in the previous data center, even though there is much more equipment in there today. The extra work helped build a solid foundation for expansion.

Working in the Hot Aisle: Power and Cooling

Part 1 of this saga can be found here.

An hour of battery backup is plenty of time to shut down a dozen servers… until it isn’t.  The last thing you want to see is that clock ticking down while a couple thousand Windows virtual machines decide to install updates before shutting down.

We were fortunate that one of the offices we were consolidating was subleased from a manufacturer who had not only APC in-row chillers that they were willing to sell, but a lightly used generator.  Between those and a new Symmetra PX UPS, we were on our way to breathing easy when the lights went out.  The PX provides several hours of power and is backed by the generator, which also backs up the chillers in the event of a power outage.  The PX is a marvel of engineering, but is also a single point of failure.  We witnessed this firsthand with an older Symmetra LX, which had a backplane failure a couple of years previous that took down everything.  With that in mind, we opted to go with two PDUs in each server cabinet: one that fed from the UPS and generator, and one that fed from city power with a massive power conditioner in front of it.  These circuits also extend into the IDFs so that building-wide network connectivity stays up in the event of a power issue.

Most IT equipment comes with redundant power supplies, so splitting the load is easy: one power supply goes to each PDU.  For the miscellaneous equipment with a single power supply, an APC 110V transfer switch handles the switching duties.  A 1U rackmount unit, it is basically a UPS with two inputs and no batteries, and it seamlessly switches from one source circuit to the other when a voltage drop is detected.

As mentioned, cooling duties are handled by APC In-Row chillers, two in each aisle.  They are plumbed to rooftop units and are backed by the generator in case of power failure.  Temperature sensors on adjacent cabinets provide readings that help them work as a group to optimize cooling, and network connectivity allows monitoring using SNMP and/or dedicated software.  Since we don’t yet need the cooling power from all four units, we will be programming the units to run on a schedule to balance running hours between the units.

Cooling in the IDFs is handled by the building’s chiller, with an independent thermostat-controlled exhaust fan as backup. As each IDF basically hosts just one chassis switch, cooling needs are easily handled in this manner. As users are issued laptops that can ride out most outages, we were able to sidestep having to provide UPS power to work areas.

Next time:  Keeping the hot hot and the cool cool.

Working In the Hot Aisle: Choosing a Design

Part 1 of this saga can be found here.

As mentioned in the previous post, there are several ways to skin the airflow separation cat. Each has its advantages and disadvantages.

Initially we considered raised floor cooling. Dating back to the dawn of computing, raised floor datacenters consist of chillers that blow cold air into the space under the raised floor. Floor tiles are replaced with perforated tiles in front of or underneath the enclosures to allow cold air into the equipment. Hot air from the equipment is then returned to the A/C units above the floor. The raised floor also provides a hidden space to route cabling, although discipline is more difficult since the shame of poor cabling is hidden from view. While we liked the clean look of raised floors with the cables hidden away, the cost is high and the necessary entry ramp at the door would have taken up too much floor space in our smaller datacenter.

We also looked at a hot aisle design that uses the plenum space above a drop ceiling as the return. Separation is achieved with plastic panels above the enclosures, and the CRAC units are typically placed at one or both ends of the room. Because this was a two-row layout in a relatively tight space, it was difficult to find a location for the CRACs that would avoid creating hot spots.

The decision became a lot easier when we found out that one of the spaces we were vacating had APC in-row chillers that we could pick up at a steep discount. The in-row units are contained within a single standard rack enclosure, so they are ideal for a hot aisle/cold aisle configuration. They solved the hot spot issues, as they could be placed into both rows. They also use temperature probes integrated into the nearby cabinets to keep temperatures comfortable efficiently.

APC In-Row RD. APC makes a kit to extend the height to 48U if used with tall racks. (Photo by APC)

With the cooling situation sorted, we turned our attention to containment. We opted for the Schneider/APC EcoAisle system, which provided a roof and end-doors to our existing two-row enclosure layout to create a hot aisle and a cold aisle. The equipment fans pull in cooler air from the cold aisle and exhaust hot air into the hot aisle, while the in-row chillers pull hot air from the hot aisle and return chilled air back into the cold aisle.

There are two options for this configuration.  A central cold aisle can be used, with the rest of the exterior space used as a hot aisle.  This possibly could reduce energy consumption, as only the central aisle is cooled and therefore the room doesn’t need to be sealed as well from air leaks.

The second option, which we ended up choosing, was a central hot aisle.  In our case, the exterior cold aisles gave us more room to rack equipment, and using the entire room as the cold aisle gives us a much larger volume of cool air, meaning that in the case of cooling system failure, we have more time to shut down before temperatures become dangerous to the equipment.

The central hot aisle is covered by a roof consisting of lightweight translucent plastic insulating panels, which reduce heat loss while allowing light in. (The system includes integrated LED lighting as well.) The roof system is connected to the fire system, so in the case of activation of the fire alarm, electromagnets that hold the roof panels in place will release, causing the roof panels to fall in so the sprinklers can do their job.  We can also easily remove one or more panels to work in the cable channel above.

APC EcoAisle configuration. Ours is against the wall and has one exit door. (Photo APC).

Our final design consists of eleven equipment racks, three UPS racks and four chiller racks.  This leaves plenty of room for growth, and in the case of unexpected growth, an adjacent room, accessible by knocking out a wall, doubles the capacity.

We decided on 48U racks, both to increase the amount of equipment we can carry and to raise the EcoAisle’s roof. To accommodate networking, one enclosure in each aisle is a 750mm “network width” enclosure to provide extra room for cable management. As the in-row CRACs were only 42U high, Schneider provides a kit that bolts to the top of the units to add 6U to their height.

Next time:  Power and cooling