The Littlest Datacenter Part 1: Compute and Storage

I was tasked with building a datacenter.  Okay, not really.  The company was expanding into a low-cost strip mall, which meant limited connectivity options, no power redundancy and strict rules regarding modifications.  It also meant that I was limited to two racks in a tiny closet in the middle of an office space.  Finally, as always, there was minimal budget.

The Requirements

The COO was very SaaS-focused for business applications.  As the sole IT person (with additional ancillary duties), I was happy to oblige.  File storage, office applications, email, CRM, shipping and accounting functions were duly shipped off to folks who do that kind of thing for a living, leaving me with a relatively small build: AD/DNS/DHCP, phone system and surveillance.  The systems I was replacing were independent servers that replicated VMs between them, but failover was a decidedly more… manual process than I wanted.  Because the business was shipping-focused and penalized for missed shipping deadlines, systems needed to be redundant and self-healing to the extent possible within the thin budget.  Finally, I knew that I would eventually be handing off the environment to either a managed service provider or a junior admin, so everything needed to be as simple and self-explanatory as possible.

The infrastructure VM (AD, DNS, etc.) and ancillary VMs were pretty straightforward.  The elephant in the room was the surveillance system.  Attached to 27 high-resolution surveillance cameras, it would have to retain 90 days of video from most of the cameras for insurance reasons.  Once loaded with 90 days of video, it would consume 26TB of disk space and average about 50GB/hour of disk churn during business hours.
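A quick back-of-the-envelope check shows how those numbers hang together.  The camera count, retention period, 26TB footprint and 50GB/hour churn are the figures above; the per-day and per-camera breakdowns derived from them are my own rough estimates.

```python
# Rough sanity check of the surveillance storage numbers quoted above.
# The 27-camera count, 90-day retention, 26 TB footprint and 50 GB/hour
# churn come from the build; everything derived here is an estimate.

CAMERAS = 27
RETENTION_DAYS = 90
TOTAL_TB = 26            # steady-state footprint once 90 days are loaded
PEAK_GB_PER_HOUR = 50    # average disk churn during business hours

total_gb = TOTAL_TB * 1000               # decimal TB -> GB
per_day_gb = total_gb / RETENTION_DAYS
per_camera_per_day_gb = per_day_gb / CAMERAS

print(f"Average ingest: {per_day_gb:.0f} GB/day "
      f"(~{per_camera_per_day_gb:.1f} GB/camera/day)")

# At ~50 GB/hour of business-hours churn, the daily average corresponds
# to roughly this many hours of peak-rate recording per day:
print(f"Equivalent peak-rate hours/day: {per_day_gb / PEAK_GB_PER_HOUR:.1f}")
```

It works out to roughly 290GB of new video per day, or a bit under six hours of peak-rate recording, which is consistent with cameras that slow down outside business hours.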

The Software

Cost drove me to Hyper-V as my virtualization platform.  Since it's included with the Windows licenses I was already buying, it added nothing to the bill, and it offered live migration, storage migration, backup APIs, remote replication and failover.  Standard licensing also allows two Windows VMs to run on one license, further reducing costs.

Next to consider was the storage solution.  As I mentioned, the existing server pair consisted of two independent Hyper-V systems, one active and one passive.  Hyper-V replication kept the passive host up to date, but failing over and failing back after a failure or for maintenance was a long, arduous process.  I opted for shared storage to allow true high availability, and rather than roll my own, I decided to buy.

After talking with several vendors, I settled on Starwind vSAN.  I had used their trialware with good results, and it had good reviews from others who had deployed it.  Because it ran on two independent servers, each holding its own copy of the data, it protected against disk failure as well as host, backplane, operating system, RAID controller and motherboard failure.  Starwind sold a virtual appliance, an OEM-branded but very familiar Dell T630 tower server, so I ordered two.  The bundle was substantially cheaper than sourcing the servers and vSAN software separately, and about a sixth of the cost of an equivalent pair of Dell servers plus a separate SAN.

The Hardware

I settled on a pair of midrange 12-core Xeons in each host, for 24 cores and 48 threads per host.  This was enough to process video from all of the cameras while leaving plenty of overhead for other tasks.  The T630 is an 18-bay unit with a rack option.  Dual gigabit connections went to the dedicated camera switch, and another pair went to the core switches.  For Starwind, a dual-port 10 gigabit card was installed in each host: one port carried Starwind iSCSI traffic, the other Starwind sync traffic.  Both paths were made redundant in software and direct-connected between the hosts with TwinAx.  Storage for each host consisted of sixteen 4TB Dell drives, plus two 200GB solid state drives for Starwind's caching.
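For context, here's a rough capacity sketch.  The RAID level under Starwind isn't stated here, so the RAID 10 and RAID 6 figures below are purely illustrative assumptions; under either layout a single host's usable pool covers the 26TB video footprint plus the other VMs, and Starwind mirrors that pool to the second host rather than adding to it.

```python
# Illustrative capacity math for one host: sixteen 4 TB spindles.
# The actual RAID level isn't stated in the post; RAID 10 and RAID 6
# are shown here only as assumptions for comparison.

DRIVES = 16
DRIVE_TB = 4
raw_tb = DRIVES * DRIVE_TB                 # 64 TB raw per host

raid10_usable = raw_tb / 2                 # mirrored pairs -> ~32 TB
raid6_usable = (DRIVES - 2) * DRIVE_TB     # two drives of parity -> ~56 TB

print(f"Raw per host:      {raw_tb} TB")
print(f"RAID 10 (assumed): {raid10_usable:.0f} TB usable")
print(f"RAID 6  (assumed): {raid6_usable} TB usable")
# Either layout leaves headroom above the ~26 TB surveillance footprint;
# Starwind keeps a full synchronous copy on the second host.
```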

In an effort to reduce complexity, I went with a flat network.  Two HPe switches provided redundant gigabit links to the teamed server NICs and the other equipment in the rack.  Stacked and dual-uplinked HPe switches connected the workstations and ancillary equipment to the core.
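As a sanity check on those gigabit camera-switch links (my own back-of-the-envelope estimate, not a measured figure from the build), the 50GB/hour of recording quoted earlier is only a small fraction of a single gigabit link:

```python
# Back-of-the-envelope: does ~50 GB/hour of camera traffic fit on the
# dual gigabit links to the camera switch? (Estimate, not measured.)

GB_PER_HOUR = 50
bits_per_hour = GB_PER_HOUR * 1000**3 * 8      # decimal GB -> bits
mbps = bits_per_hour / 3600 / 1_000_000

print(f"~{mbps:.0f} Mbps average during business hours")
# ~111 Mbps -- well under a single gigabit link, so the dual connections
# are there for redundancy more than raw throughput.
```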

The Deployment

Windows Server 2012 R2 Standard provided the backbone, with Starwind vSAN running on top.  Two Windows VMs powered the AD infrastructure server and the surveillance recording server.  I later purchased an additional Windows license and built a second DC/DNS/DHCP VM on the second host.

Coming Soon:  Firewalls, backup, environmental, monitoring and security

 
