Part 1 of this saga can be found here.
I had an interesting backup conundrum. I had 26TB of utterly incompressible and deduplication-proof surveillance video data that needed to be backed up. As this video closely recorded the fulfillment operation and was used to combat fraud by “customers”, it needed to be accessible for 90 days after recording. Other workloads that needed to be backed up included the infrastructure (AD/DNS/DHCP) servers, a local legacy file server, the PBX, and a few other miscellaneous VMs. Those, however, totaled less than 2TB and were easily backed up using a multitude of options.
The video data was difficult to cloudify. Just the initial 26TB was massive, but it also changed at a rate of about 50GB per hour during shipping hours. And in case of actual hardware failure, getting that 26TB back and into production was equally difficult.
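Back-of-envelope arithmetic using only the numbers above (26TB retained over 90 days, changing about 50GB per hour during shipping hours) shows roughly how many shipping hours per day that rate implies — a rough consistency check, not measured data:

```python
# Sanity-check the churn numbers from the post: 26 TB retained over a
# 90-day window, changing ~50 GB/hour during shipping hours.
RETAINED_GB = 26_000        # ~26 TB, decimal units
RETENTION_DAYS = 90
CHANGE_GB_PER_HOUR = 50

daily_churn_gb = RETAINED_GB / RETENTION_DAYS                 # ~289 GB/day
shipping_hours_per_day = daily_churn_gb / CHANGE_GB_PER_HOUR  # ~5.8 hours

print(f"{daily_churn_gb:.0f} GB/day, ~{shipping_hours_per_day:.1f} shipping hours/day")
```

In other words, the retained set turns over at just under 300GB a day — small enough to back up incrementally each night, but far too much to push to the cloud on a modest uplink.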
This, combined with a limited budget, led me to a conclusion I didn’t want to reach: tape. I needed to get a copy of the data out of the server room, and having a 52TB array sitting under somebody’s desk wasn’t appealing. One thing I did know was that an offsite copy wasn’t necessary; management had told me that in the event of a full-site disaster (fire, earthquake, civil unrest), the least of our worries would be defending shipments that had already been made. With that in mind, I decided on a disk-to-disk-to-tape approach.
When it comes to buying server hardware with massive amounts of disk, I generally look at Super Micro. I had been quoted a few Dell servers with 4TB drives, but the cost was breathtaking. I wanted a relatively lightly powered box with a ton of big drives at a reasonable price, and I got it: a new 16-bay box with a single 8-core CPU, an LSI RAID controller and 64GB of RAM, for less than used Dell gear. For drives, I went with 6TB enterprise-class SAS drives: four from WD, four from HGST, four from Seagate and four from Toshiba. I configured them in a RAID-60, with two drives of each model in each half of the array. That way, if I got a bad batch of Seagate drives (which NEVER happens, of course), I could lose all four and still have a running, if degraded, array. This arrangement gave me about 72TB usable: enough for two full backups and a number of incrementals.
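The capacity and fault-tolerance math works out like this — a quick sketch of the layout described above (16 drives striped across two 8-drive RAID-6 groups, two drives of parity per group):

```python
# RAID-60 usable-capacity sketch for the build described above:
# 16 x 6 TB drives, striped (RAID-0) across two 8-drive RAID-6 groups.
# Each RAID-6 group gives up two drives' worth of capacity to parity.
DRIVE_TB = 6
GROUPS = 2
DRIVES_PER_GROUP = 8
PARITY_PER_GROUP = 2

data_drives = GROUPS * (DRIVES_PER_GROUP - PARITY_PER_GROUP)  # 12
usable_tb = data_drives * DRIVE_TB                            # 72 TB

# Fault tolerance: each RAID-6 group survives any two drive failures,
# so with two drives per vendor in each group, an entire four-drive
# vendor batch can die without taking the array down.
print(f"{usable_tb} TB usable from {GROUPS * DRIVES_PER_GROUP} drives")
```

That two-per-vendor-per-group placement is the whole trick: a correlated batch failure costs each RAID-6 group at most its two-drive tolerance.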
For tape duties, I picked up a new-old-stock Dell PowerVault 124T. This LTO-5 SAS autoloader was chosen back when the initial build called for only 16TB of video. Holding 16 tapes, it had a raw uncompressed capacity of about 24TB, and it eventually used all 16 tapes for a single backup.
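The 24TB figure follows directly from LTO-5's 1.5TB native capacity per cartridge — and native is the number that matters here, since surveillance video is already compressed and the drive's hardware compression buys nothing:

```python
# LTO-5 autoloader capacity sketch: 16 slots, 1.5 TB native per tape.
# The video data is incompressible, so the "3 TB compressed" marketing
# number on LTO-5 cartridges is irrelevant for this workload.
TAPES = 16
LTO5_NATIVE_TB = 1.5

raw_capacity_tb = TAPES * LTO5_NATIVE_TB   # 24 TB across a full magazine

print(f"{raw_capacity_tb} TB raw across {TAPES} tapes")
```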
Veeam was chosen to handle the backup duties because, at the time, it was the only solution I could find that could do VM-level backups AND handle SAS tape libraries. Backups were made nightly to the local proxy, with full copies to tape occurring weekly. The tapes were then stored in a fireproof safe in a steel-and-concrete vault at the other end of the building.
Coming soon: Environmental, monitoring and security.