# Summer 2020 datacenter consolidation

Ongoing work, beginning July, in our Boston data center.
Due to scheduling of resources around pandemic guidelines and rules, as well as scheduling with the data center and vendors, an end date is not yet known.

As part of cost-savings plans, FASRC and Harvard Medical School will be combining space in our Boston data center. This will allow FASRC, HMS, and HUIT to share a single Markley-Boston datacenter contract in the future. We are downsizing from 46 racks to 24.

This move requires the consolidation of some systems, as well as the decommissioning of older, out-of-warranty servers in favor of more space- and power-efficient systems. The systems targeted for decommissioning are generally six or more years old.

Due to the pandemic and the number of datacenter/FASRC/HMS/vendor staff involved, we have not been able to begin much of this work, but can soon begin in earnest. We will be working toward moving 18 racks' worth of servers to a new location and consolidating as much as possible.

The first phase, moving the home directory server on July 20th, is now complete.

We now move on to:

  • A) Decommissioning old storage boxes and moving their shares to the new, denser, and more resilient Ceph cluster. Affected labs will be contacted regarding the schedule for each. This includes rcnfs01 - rcnfs13 as well as fs2k01. All of these are small, old, and an inefficient use of our now-limited rack space.
  • B) Decommissioning old, out-of-warranty compute nodes at 1ss. The owners/labs for each of these will be contacted regarding the schedule for each. In every case, we have found that the Cannon compute already available is faster, more plentiful, and far more power-efficient than these aging machines. The installation of Cannon doubled the capacity of "shared," and utilization on older PI partitions is now below 25%. Note: these are compute nodes, not storage nodes.
  • C) Moving all 18 racks' worth of equipment to the consolidated space, then re-racking, cabling, and configuring. This will require a downtime later on. An exact date is not yet known due to the fluid nature of scheduling with our partners.
  • D) Returning to normal operation in the new space. ETA unknown at this point.

We thank you for your patience and understanding, as this is a large project with many moving parts. The amount of money the university will save is significant, and we are doing our best to meet its targets.

Thanks!
FAS Research Computing.