Running Jobs

MPI for Python (mpi4py) on Odyssey

Introduction: This page is intended to help you run MPI Python applications on the Odyssey cluster using mpi4py. To use mpi4py you need to load an appropriate Python software module. We provide the Anaconda Python distribution from Continuum Analytics; in addition to mpi4py, it includes hundreds of the most popular packages for large-scale data processing and scientific computing. You can…
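A minimal submission sketch for an mpi4py job. The module name (`python`), partition name (`shared`), and program name (`hello_mpi.py`) are assumptions for illustration; check `module avail` on the cluster for the actual Anaconda module name:

```bash
#!/bin/bash
#SBATCH -J mpi4py_test       # job name
#SBATCH -n 8                 # total number of MPI tasks
#SBATCH -p shared            # partition name (an assumption; use your own)
#SBATCH -t 0-00:30           # runtime in D-HH:MM
#SBATCH --mem-per-cpu=1000   # memory per core in MB

# Load an Anaconda Python module (exact name is an assumption;
# run `module avail python` to find the current version)
module load python

# Launch the Python program across all allocated MPI tasks
srun -n $SLURM_NTASKS --mpi=pmi2 python hello_mpi.py
```

The hypothetical `hello_mpi.py` would typically import `MPI` from `mpi4py` and report its rank via `MPI.COMM_WORLD.Get_rank()`.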

Transferring files to DATA.RC

Users of the data.rc.fas.harvard.edu server have three options for transferring files. Please note that if you choose option 2, the FTP/S settings are not the same as the regular SFTP settings you might use to transfer files to other servers. The connection methods, in order of preference: Via Web browser - This is the default means of accessing data.rc…

Convenient SLURM Commands

This page gives a list of commonly used SLURM commands. Although a few advanced ones are included, you'll find them essential as you start making significant use of the cluster! A good comparison of SLURM, LSF, PBS/Torque, and SGE commands can be found here. Also useful: What is FairShare and…
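As a quick orientation, these are among the most common SLURM commands (a sketch only; placeholders like `<jobid>` and `<username>` must be replaced with real values):

```bash
sbatch runscript.sh        # submit a batch job script to the scheduler
squeue -u <username>       # list your pending and running jobs
scancel <jobid>            # cancel a job
sacct -j <jobid>           # accounting information for a completed job
sinfo                      # show partition and node status
scontrol show job <jobid>  # detailed information about one job
```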

SFTP file transfer using Filezilla – Filtering

There may be times when you wish to filter the file listing in the local or remote pane. If you need to do this often, you may want to set up a filter. Unlike the search feature (binoculars icon), filters modify what is shown in the Remote Site: or Local Site: pane. If you simply need to see files grouped together…

SFTP file transfer using Filezilla (Mac/Windows/Linux)

Filezilla is a free and open-source SFTP client built on modern standards. It is available cross-platform (Mac, Windows, and Linux) and is actively maintained. As such, Research Computing recommends it over previous clients, especially as it does not have some of the quirks of clients like Cyberduck or SecureFX. This document will outline setting up a…

Submitting Large Numbers of Jobs to Odyssey

Introduction: Often one needs to submit a large number of jobs to the cluster -- to get high throughput for a big analysis, to vary parameters within one analysis, etc. This document aims to help you become more efficient and to take advantage of shell and SLURM resources. This will improve your work…
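One common pattern for parameter sweeps is to generate one batch script per parameter value and submit each with `sbatch`. A minimal sketch in Python, assuming a hypothetical analysis program `./my_analysis` with a `--param` flag (both placeholders -- substitute your own program and partition name):

```python
import os

def make_job_script(param, jobname="sweep", partition="shared"):
    """Build the text of a SLURM batch script for one parameter value.

    The partition name and program path are placeholders; adjust them
    for your own cluster and analysis program.
    """
    return "\n".join([
        "#!/bin/bash",
        "#SBATCH -J %s_%s" % (jobname, param),   # job name
        "#SBATCH -p %s" % partition,             # partition (assumption)
        "#SBATCH -n 1",                          # one task per job
        "#SBATCH -t 0-01:00",                    # runtime D-HH:MM
        "#SBATCH --mem=1000",                    # memory in MB
        "",
        "./my_analysis --param %s" % param,      # hypothetical program
        "",
    ])

def write_job_scripts(params, outdir="jobs"):
    """Write one script per parameter value; return the file paths.

    Each file could then be submitted in a shell loop, e.g.
    `for f in jobs/*.sh; do sbatch $f; done`.
    """
    os.makedirs(outdir, exist_ok=True)
    paths = []
    for p in params:
        path = os.path.join(outdir, "job_%s.sh" % p)
        with open(path, "w") as f:
            f.write(make_job_script(p))
        paths.append(path)
    return paths
```

For very large sweeps over a simple integer index, a SLURM job array (`#SBATCH --array=1-100` with `$SLURM_ARRAY_TASK_ID`) is usually lighter-weight than many separate scripts.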

Modules HowTo

For a more robust listing of modules, please use the module search on our Portal: https://portal.rc.fas.harvard.edu/apps/modules NOTE: This documentation describes the new lmod module system introduced on the cluster in Summer 2014. See this page for usage instructions. About modules: On the Odyssey cluster we want a variety of apps available, including different versions of the same app…
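The day-to-day module workflow can be sketched as follows (the module name `python` is illustrative; the exact names and versions available depend on the cluster):

```bash
module avail               # list modules available in the current hierarchy
module spider python       # lmod: search the full hierarchy for a module
module load python         # load a module into your environment
module list                # show currently loaded modules
module unload python       # remove one module from your environment
module purge               # unload all modules and start clean
```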