
Service Center & Billing FAQ

FAS Research Computing has moved some services to a cost model in order to meet the demands of our growing user base, which includes labs outside of the FAS. Having the same cost structures across all of the Schools and Institutes at Harvard allows FASRC to better align with a number of regulations and requirements.

If your school does not have a MoU (Memorandum of Understanding) with FASRC for billing, please have someone from your faculty or administration contact us to discuss.

Our services catalog is made up of three primary billable areas: Compute (cluster) use, Storage, and Virtual Machines (VMs).


FAQ – Compute

No. Cluster computing is a core-funded research service for FAS. However, these services are extended to partner Schools at Harvard, so a cost-sharing model has been developed to ensure the sustainability of this service.

Non-FAS schools that have an established MOU (Memorandum of Understanding) with FASRC will be billed quarterly. The bill will include a usage report by PI group in CPU hours. The costs of the tiers of service are defined here. Funds will be transferred at the TUB level. If you are interested in your School having an MOU, please contact the FASRC Director.

  • Harvard John A. Paulson School of Engineering and Applied Sciences
  • Harvard T.H. Chan School of Public Health
  • Harvard Business School
Yes. We will work with a handful of faculty as a pilot program that can lead up to an MOU (Memorandum of Understanding).

This service allows a large aggregate of high-performance computing resources to be shared across many research projects. A full list of the benefits of this service is provided here.

Faculty have the opportunity to purchase additional capacity by adding servers to the cluster so that they have priority use. Fully integrated servers in the cluster will not incur additional hosting costs and will have the following service expectations and limits. Please work with the FASRC Associate Director for Systems & Operations to obtain quotes for servers. Purchases will be made by the PI's purchasing department.


FAQ – Storage

Since storage has grown tenfold in the past 5 years, hosting individual small-capacity storage server deployments has become unsustainable to manage. These individual single-server systems do not easily allow for growth of data shares. Due to their small volume, many systems run above 85% utilization, which degrades performance.

Many systems are also run beyond their original maintenance contract, which causes issues in sourcing parts for repairs; older systems (>5 years) increase the risk of catastrophic data loss. Some systems were purchased by PIs without a provision for backups, which has led to confusion about which data shares are backed up. Our prior backup methodology does not scale to these larger systems with hundreds of millions of files. For these reasons, revamping our storage service offerings allows FASRC to maintain the lifecycle of equipment and to project the overall growth in data capacity, datacenter space, and professional staffing needed to maintain your research data assets safely.

Prior to the establishment of a Storage Service Center, we offered only a single NFS filesystem for your lab share; you now have a choice of four storage offerings to meet your technology needs. The tiers of service clearly define what type of backup your data will have. You only pay for the allocation capacity you need, rather than having to guess at the time of a server purchase and have the excess go unused.

Over time, you can request an increase to your allocation size. You will receive monthly reports on utilization from each tier to help you plan for future data needs. Some of our tiers will also have web-based data management tools that allow you to query different aspects of your data, tag your data, and see visual representations of your data.

Each PI will be contacted directly over the summer of 2021 about the migration of their data. Over FY22 we will be migrating whole filesystems at a time into the Storage Service Center. All new space requests will be allocated on newly deployed storage in one of the tiers.

Unlike the compute cluster, where resources are reserved and released, data is allocated to storage long-term. In addition, storage needs across research domains are drastically different. Therefore, in the FY19 federal rate setting, FAS decided to remove the portion of FASRC dedicated to maintaining storage from the facilities part of the F&A. This allows FAS to run a Storage Service Center whose costs are allowable on federal awards.

Information about the storage offerings can be found on our Storage Services page. Requests for storage allocations can be made through our portal. We ask that you limit your requests to at most once a month. Please keep in mind that large requests (>100 TB) might not all be available at the time of request; a smaller increase will be applied as we add more capacity in the coming months.

Yes, you can have allocations in different storage tiers to meet your needs and budget.

Billing will be handled by Science Operations Core Facilities. You will be billed monthly for the TB allocation of space in each tier. By default, we will also provide you a usage report by user. A usage report per project is available by request and is best set up for new projects with new allocations.

We have worked with RAS on two allocation methods for charging data storage to your grants: (1) a per-user allocation method, and (2) a per-project allocation method.

Per-user allocation method: You will be supplied a usage report by user for each tier. You can treat the percentage of data associated with an individual as their share of the cost, and apply the same cost distribution as their % effort on grants.

Example 1: A PI has a 10 TB allocation on Tier 1 that researchers John and Jill use. The monthly bill for 10 TB of Tier 1 is $208.30 (at $20.83/TB/mo). The usage report shows 8 TB of total usage, of which John's usage is 60% and Jill's is 40%. So the data charges associated with John are $124.98 and with Jill $83.32. John is funded 50% on an NSF project and 50% on an NIH project, thus $62.49 should be allocated to each grant. Jill is funded 100% on an NSF project, thus $83.32 should be allocated to her NSF grant.
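The per-user arithmetic above can be sketched in a few lines of Python. This is an illustrative calculation only, using the rate and figures from Example 1; the function and variable names are hypothetical, not part of any FASRC tool.

```python
# Illustrative sketch of the per-user allocation method (Example 1).
# TIER1_RATE and the shares below come from the example; everything
# else (names, structure) is a hypothetical illustration.

TIER1_RATE = 20.83  # $/TB/month, Tier 1 rate from the example


def allocate_to_grants(allocation_tb, usage_share, grant_split):
    """Split a monthly storage bill across users by their share of
    usage, then across each user's grants by their % effort."""
    monthly_bill = allocation_tb * TIER1_RATE
    charges = {}
    for user, share in usage_share.items():
        user_cost = monthly_bill * share
        for grant, effort in grant_split[user].items():
            charges[(user, grant)] = round(user_cost * effort, 2)
    return charges


# Example 1: 10 TB on Tier 1; John uses 60% and Jill 40% of the data.
charges = allocate_to_grants(
    10,
    {"John": 0.60, "Jill": 0.40},
    {"John": {"NSF": 0.50, "NIH": 0.50}, "Jill": {"NSF": 1.00}},
)
print(charges)
# {('John', 'NSF'): 62.49, ('John', 'NIH'): 62.49, ('Jill', 'NSF'): 83.32}
```

The same split can be reproduced each month from the usage report as usage shares drift.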

This method allows faculty to manage their data structures independently of specific projects, since multiple projects will be using some of the same data. Keep in mind that as researchers leave, there needs to be a plan for their data, as it will continue to appear in the usage reports.

Per-project allocation method: If you request a project-specific report, you will have a direct mapping of the data used by the project and can allocate this full cost according to the project's grant cost distribution.

Example 2: A PI requests a new 5 TB allocation on Tier 1 for an NSF-funded project. 10 users share this data. The monthly bill would include a Tier 1 charge of $104.15 (at $20.83/TB/mo). The entire $104.15 would be charged to the NSF grant.

This allows for a very straightforward assignment between data and funding source. Reuse of the active parts of this data will need to be assigned to future projects.

Example 3: The above PI also has a 100 TB allocation on Tier 0 used for multiple projects with multiple funding sources. The usage report for Tier 0 would be provided per user as in Example 1, and the % effort allocation method would be used for Tier 0, while the method of Example 2 would be used for the new project on Tier 1.
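Combining the two methods, as Example 3 does, can be sketched as follows. Only the $20.83/TB/mo Tier 1 rate comes from the examples above; the Tier 0 rate here is a made-up placeholder, as are the function and variable names.

```python
# Sketch of the mixed approach in Example 3: a project-specific Tier 1
# allocation is charged wholly to its grant, while a shared Tier 0
# allocation is split per user. TIER0_RATE is a hypothetical
# placeholder, NOT a published FASRC rate.

TIER1_RATE = 20.83  # $/TB/month (from the examples above)
TIER0_RATE = 4.16   # hypothetical placeholder rate


def monthly_charges(project_alloc_tb, shared_alloc_tb, usage_share):
    """Return (project_charge, per_user_shared_charges).

    project_charge  -> whole amount goes to the project's grant
    per_user shares -> allocated by each user's % effort, as in Example 1
    """
    project_charge = round(project_alloc_tb * TIER1_RATE, 2)
    shared_bill = shared_alloc_tb * TIER0_RATE
    shared = {u: round(shared_bill * s, 2) for u, s in usage_share.items()}
    return project_charge, shared


# 5 TB Tier 1 project allocation plus a 100 TB shared Tier 0 allocation.
project_charge, shared = monthly_charges(5, 100, {"John": 0.6, "Jill": 0.4})
# project_charge (5 TB x $20.83) is billed entirely to the NSF grant;
# shared holds each user's slice of the Tier 0 bill.
```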

As is common with other Science Operations Core Facilities, once funding sources have been established for bills, we will continue to direct-bill those funds until the PI updates the distributions. For the first few months, billing will be manual via email until the new Science Operations LIMS billing system is complete.

We suggest that a data management plan be established at the beginning of a project, so that a full data lifecycle can be mapped to the phases of your data. This helps identify data that will need to be kept long-term from the start, and helps prevent data from being orphaned when students and postdocs move on. If research data is being used again in a subsequent project, you should allocate funds to carry this data forward to the new project. Per federal regulations, you cannot pay for storage in advance. The Tier 3 tape service provides a location to deposit data longer term (7 years), which can meet many funding requirements.

Few exceptions will be made. If circumstances warrant one, the request will be reviewed by the University Research Computing Officer, the Sr. Director of Science Operations, and the Administrative Dean of Science. One possible exception is when storage must be adjacent to an instrument whose data collection rates exceed the capacity of 1 Gbps Ethernet (100 MB/s) for extended periods (days).
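The exception criterion above is a simple rate comparison; the check below sketches it, assuming the FAQ's ~100 MB/s usable throughput for 1 Gbps Ethernet. The detector figures in the example are made up for illustration.

```python
# Back-of-the-envelope check for the instrument-adjacent storage
# exception: does a detector's sustained output exceed what 1 Gbps
# Ethernet (~100 MB/s usable, per the FAQ) can carry?

GIGE_USABLE_MB_S = 100.0  # usable throughput of 1 GbE, per the FAQ


def exceeds_gige(frame_mb, frames_per_sec):
    """True if the sustained data rate is beyond 1 GbE capacity."""
    rate_mb_s = frame_mb * frames_per_sec
    return rate_mb_s > GIGE_USABLE_MB_S


# A hypothetical camera writing 8 MB frames at 25 fps sustains 200 MB/s,
# which would be a candidate for instrument-adjacent storage.
print(exceeds_gige(8, 25))  # True
```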
We will maintain existing physical servers while under warranty, which is typically 5-6 years from their purchase date. We will need a data migration plan to the appropriate tiers a few months prior to decommissioning the server.

For billing inquiries or issues, please email billing@rc.fas.harvard.edu

For general storage issues, questions, or tier changes, please contact rchelp@rc.fas.harvard.edu


FAQ – Virtual Machine
