Name: Paul Edmon
Job Title: ITC Research Computing Associate
How long have you worked for RC?
I've been working at Research Computing (RC) and the Institute for Theory and Computation (ITC) at the Center for Astrophysics (CfA) since November of 2011. So I've been here just over 3 years.
What led you to a career in HPC?
Generational curse :-).
In all seriousness, my dad did. He earned his Ph.D. in Atmospheric Sciences, and I've always looked up to him. Plus, I've wanted to be an astronaut since I was a kid. So I pursued a B.S. in Physics from the University of Washington and a Ph.D. in Astrophysics from the University of Minnesota.
Turns out I'm too tall to be an astronaut, so I've had to give up on that dream for now. However, I discovered as an undergrad that I really enjoyed plasma physics. Then, when I got to graduate school, I took a course from my future advisor Tom Jones on Numerical Methods for Astronomy and Physics. We had a choice between Fortran and C; I wisely took Fortran. I found I had a knack for programming in that class, and I signed up with Tom to be one of his graduate students.
Tom dealt with magnetohydrodynamics (MHD), cosmic ray acceleration, and high-energy events (such as active galactic nuclei and supernova remnants) in astronomy. That sort of research requires High Performance Computing (HPC). Fortunately, the University of Minnesota has a long history with supercomputing and is home to the Minnesota Supercomputing Institute. I cut my teeth on the supercomputers there along with my office mate Pete Mendygral, now at Cray. He ended up rewriting our entire MHD code from scratch, and since I shared the office with him we would talk all the time about how to optimize the code and get the most speed out of the machine. I eventually used this overhauled code for my Ph.D. thesis on particle acceleration in the winds from massive stars.
Given my background I was hired as a postdoc by Samar Safi-Harb at the University of Manitoba to work with her on simulations of pulsar wind nebulae. This required me to rewrite the code again for relativistic MHD. It was as I was finishing up the initial part of this project that I was hired by RC and the ITC.
When I got the job with RC, I realized that in an odd way I had followed the same path as my father before me. He had done simulations of hurricanes in graduate school and was then hired as a support scientist after a postdoc at the University of Washington. So the apple doesn't fall far from the tree: we both have a knack for computers, thinking at scale, and fluid dynamics. Funnily enough, my brother has his degree in Aeronautical Engineering; it seems fluid dynamics runs in the family. I owe a great debt to my parents and to my graduate and postdoc advisors for leading me to where I am today. I wouldn't be here without them.
What’s the best part of your job?
Playing with a machine as large as Odyssey. Truly, it is a sight to behold. We may not be as large as some of the major computing centers, but what we lack in size we more than make up for with the complexity of our systems and the velocity at which we operate. It's a lot of fun to think at scale and try to leverage all the great resources we have access to. Plus, the team here is awesome to work with.
What’s the hardest part of your job?
The complexity of our system. We have an exceedingly heterogeneous cluster, not to mention a wide range of responsibilities: we support a vast amount of research, from supercomputing to instrument support. Since I'm the resident astronomer and am funded by the ITC, I have my hands in every aspect of Odyssey, as the astronomers at the CfA use all of it and are one of our largest user bases.
Keeping all of that straight in my head is a challenge. There are a variety of dependencies that make servicing any part of the infrastructure here difficult. Not only that, but besides running the bare metal, we also have to support the specific research that scientists are doing. Having a wonderfully complex machine is no good to anyone if no one uses it. The job may be complex, frustrating, and difficult, but it is also very rewarding and a lot of fun.
What’s the biggest misconception about RC or HPC in general?
That if you just put your code on the supercomputer, it will run faster. As it turns out, the processors we use on the cluster are not much better than what you have in your desktop; at times your desktop may even be faster. What makes HPC work is that we have a vast number of these processors networked together with a high-speed interconnect. Not even sending your code to the cloud will get you that.
In order to get the most out of your code and leverage any HPC resource (whether it be ours, the cloud, or XSEDE), you need to optimize your code and workflow. This takes time and effort. You need to learn about the hardware you are running on, the code you are running, and the science you need to get done, and marry all of that together to make sure you get things done as quickly and accurately as possible. Supercomputing isn't a black box, and the more you understand, the better you can engineer your workflow and code to take advantage of the great resources we have available. We at RC are here to help people achieve that.
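To make the parallelism point concrete, here is a minimal sketch (not RC's code, just an illustration) of the idea that HPC speed comes from many ordinary processors splitting one problem. The prime-counting task, chunk sizes, and worker count below are arbitrary choices for the example:

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by simple trial division."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    N = 1000
    # Split the range into four independent chunks, one per worker.
    chunks = [(i, i + N // 4) for i in range(0, N, N // 4)]
    with Pool(4) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # 168 primes below 1000
```

The same pattern scales from one desktop to a cluster: the work divides cleanly, so adding processors helps, but only up to the point where communication and imbalance between chunks start to dominate, which is exactly why code and workflow still need tuning.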
Given all the research conducted on RC’s Odyssey cluster, is there any one project that stands out for you?
I would have to say the BICEP (Background Imaging of Cosmic Extragalactic Polarization) project, by far. John Kovac and his group are a delight to work with, and they are doing fantastic science. As an astronomer, it's fascinating to see what they are achieving, and the way they leverage our resources is breathtaking. Plus, it's fun to think of data from a microwave polarization telescope at the South Pole being beamed via satellite to our cluster for analysis. Very cool stuff.
If you could give RC users one piece of advice what would it be?
Educate yourselves. Take advantage of all the educational opportunities that Harvard and RC have to offer. Don't treat your code as a black box; learn how it works and the algorithms behind it. We at RC have a ton of experience with HPC, and we would love to talk to you about optimizing your code and helping you leverage the cluster. You may discover a better way to run your code that saves you a ton of time later on, or you may find a bug whose fix improves the code for everyone. Always be ready to learn, and don't automatically trust the results of your code: test and scrutinize them. That's part of being at a university and of being a good scientist. Our primary focus is to learn, and we at RC are here to help.
South Park, The Simpsons, or Archer?
The Simpsons was good until about 10 years ago, when it jumped the shark; frankly, I'm surprised it's still on the air. These days I prefer South Park. It may be particularly vulgar at times, but the satire is usually spot on. The Mormon and Scientology episodes are by far my favorites. I was a bit disappointed by the last season; hopefully they haven't lost their edge.
You can learn more about Paul by following him on Twitter at @pauledmon.
Copyright © 2015. All Rights Reserved.