
Getting Started

Introduction to Odyssey — Online

Overview: This online training covers various aspects of accessing and using Odyssey, our 60K+ core compute cluster:

- Access and login
- Placing files, filesystems, and storage
- Selecting and/or installing software
- Determining resource requirements (time, RAM, partition)
- Writing SLURM submission scripts
- Troubleshooting, pitfalls, and acceptable use
- Methods for obtaining help
- IT, Data Security, and Data Usage Agreements (DUA)

Although we will touch…

Account Qualifications and Affiliations

If you are unsure whether you qualify for an RC account, or what arrangements your school or department has with FAS Research Computing, this document can help you sort out those questions before requesting an account. If you then wish to proceed, follow this link to the account request tool: Request an FAS RC Account…

Office Hours

FAS/SEAS/Main Campus Research Computing holds regular main campus office hours on Wednesdays, 12PM-3PM, at 38 Oxford Street (click for map), room 206. Research Computing staff are on hand to answer questions and troubleshoot problems. HCSPH: new schedule coming soon.

Introduction to Unix

We do not routinely teach introductory Unix classes, though we may offer hands-on sessions as part of multi-topic workshops. However, we do expect (and strongly encourage) you to be comfortable with the following Unix skills before submitting jobs on Odyssey: know how to traverse directories; know how to create and remove files and directories; know how to copy/move…
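For reference, these basic operations look like the following in a shell session. This is a minimal sketch using standard Unix commands, run in a throwaway temporary directory so nothing else on the system is touched:

```shell
cd "$(mktemp -d)"            # work inside a fresh temporary directory

mkdir -p project/data        # create a directory (and its parent) in one step
cd project                   # traverse into a directory
touch notes.txt              # create an empty file
cp notes.txt data/notes.bak  # copy a file into a subdirectory
mv notes.txt data/           # move a file into a subdirectory
ls data                      # list the directory's contents
rm data/notes.bak            # remove a file
cd ..                        # traverse back up one level
```

Being fluent with these commands (and with an editor such as nano, vim, or emacs) will make everything else on the cluster far easier.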

Modules HowTo

For a more robust listing of modules, please use the module search on our Portal: https://portal.rc.fas.harvard.edu/apps/modules NOTE: This documentation describes the new Lmod module system introduced on the cluster in Summer 2014; see this page for usage instructions. About modules: On the Odyssey cluster we want a variety of apps available, including different versions of the same app…
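As a sketch of typical Lmod usage, a session on the cluster might look like the following. The module names and version strings here are illustrative assumptions, not guaranteed to match what is currently installed; use the Portal module search or `module avail` to find real names. This fragment only runs on a system with Lmod installed:

```shell
# list modules whose names match a pattern
module avail gcc

# deeper keyword search across all module descriptions
module spider gcc

# load a module; a specific version can be requested explicitly
# (gcc/4.8.2-fasrc01 is a hypothetical example version)
module load gcc/4.8.2-fasrc01

# show what is currently loaded in this shell
module list

# unload one module, or clear everything loaded so far
module unload gcc
module purge
```

Loads only affect the current shell session, so module commands are typically placed in your SLURM batch scripts so each job sets up its own environment.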

Running Jobs

The Odyssey cluster uses SLURM to manage jobs. SLURM (Simple Linux Utility for Resource Management) is a queue management system developed at Lawrence Livermore National Laboratory, and it currently runs some of the largest compute clusters in the world. SLURM is similar in many ways to most other queue systems: you write a batch script…
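A minimal SLURM batch script might look like the following sketch. The partition name, time limit, and memory value are illustrative assumptions; the right values depend on your job and on the partitions available to you (see "Determining resource requirements" above). The `#SBATCH` lines are directives read by SLURM when the script is submitted with `sbatch`:

```shell
#!/bin/bash
#SBATCH -J my_test_job        # a job name of your choosing
#SBATCH -p serial_requeue     # partition (assumed example; check which partitions you may use)
#SBATCH -n 1                  # number of tasks
#SBATCH -t 0-00:10            # runtime limit in D-HH:MM
#SBATCH --mem=4000            # memory in MB
#SBATCH -o myjob_%j.out       # file for standard output (%j expands to the job ID)
#SBATCH -e myjob_%j.err       # file for standard error

# the commands the job actually runs
RESULT="hello from $(hostname)"
echo "$RESULT"
```

You would submit this with `sbatch myscript.sh` and monitor it with `squeue`; run outside of SLURM, the `#SBATCH` lines are plain comments and only the commands at the bottom execute.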