What is it?
Originally, Craigslist donated a number of computers to BSOE in the summer of 2009. We built a compute cluster out of them and opened it to students, staff, and faculty at UCSC. That original hardware is now retired.
The new cluster consists of one head node and 7 compute nodes, with a total of 304 cores for executing processes. There are two queues: all.q for long-running jobs, and small.q for shorter jobs. Individuals can run 50 jobs at any time, but may queue up as many as they need.
The new cluster's name is campusrocks2.soe.ucsc.edu.
How do I log in?
Campusrocks uses CruzID Blue for login. You must have set your password since 10-15-11 for this to work. Visit the CruzID web page to set your password.
Once you have a password, you can log in to the system via SSH at campusrocks2.soe.ucsc.edu.
How long will it be available?
It has been given a new 5-year life cycle and will be unplugged at the end, unless more funding appears to rejuvenate it. Its new drop-dead date is January 1, 2018.
If you wish to purchase more nodes for the cluster, please put in a ticket at email@example.com
What are some of the limitations (privacy, disk space)?
The cluster came about from an ISSDM idea to build a campus cluster and allow others to use it. The idea is to have a shared resource while allowing those doing computer systems research to look at real file systems and see how people actually use them. File system data will therefore be available to ISSDM, the SSRC, and other units that need access to it.
How do I use the cluster (submit, cancel, review jobs)?
For job queuing we use Sun Grid Engine (SGE); its documentation is also available on the cluster.
The documentation URL is NOT accessible outside of the UCSC IP space.
qsub --> Submits a job (create a shell script, then run qsub shellscript)
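For example, a minimal SGE job script might look like the sketch below. The script name (myjob.sh) and job name are hypothetical; the `#$` lines are SGE directives read by qsub.

```shell
# Write a minimal SGE job script (script and job names are hypothetical examples)
cat > myjob.sh <<'EOF'
#!/bin/bash
#$ -N myjob    # job name shown by qstat
#$ -cwd        # run the job from the directory it was submitted in
#$ -j y        # merge stderr into stdout
echo "Running on $(hostname)"
EOF

# On the cluster, submit it with:
#   qsub myjob.sh
```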
How can I run MPI jobs on it?
Compile the code with mpicc
Sample C code is in /opt/mpi-tests/src
And then submit it with a shell script (the cluster's example script was called mpitest16).
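A sketch of what a 16-slot MPI submission script like mpitest16 typically looks like is below. The parallel environment name (mpi) and the binary name (mpitest) are assumptions, not the cluster's actual configuration; check the available parallel environments with qconf -spl.

```shell
# Sketch of an SGE submission script for a 16-slot MPI job.
# The PE name "mpi" and binary name "mpitest" are assumptions; see qconf -spl.
cat > mpitest16 <<'EOF'
#!/bin/bash
#$ -N mpitest16
#$ -cwd
#$ -pe mpi 16                   # request 16 MPI slots
mpirun -np $NSLOTS ./mpitest    # SGE sets $NSLOTS to the granted slot count
EOF

# On the cluster:
#   mpicc -o mpitest your_source.c   # e.g. one of the samples in /opt/mpi-tests/src
#   qsub mpitest16
```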
How can I see how busy the cluster is?
qhost --> shows the load averages of each of the exec hosts
qstat -g c --> gives a count of the jobs running on each queue
What is the Small Queue, what are the limitations?
The Small queue is for jobs that will not run for a long time: there is a 72-hour wall clock limit and an 800-hour CPU limit (relevant if you do multi-threaded operations). You can see the queue configuration with the command qconf -sq small.q
The small.q currently has 2 dedicated boxes with 48 processors each.
(The all.q has 4 computers with a total of 144 processors)
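To target the small queue explicitly, submit with `-q small.q`. A minimal sketch (the script and program names are hypothetical):

```shell
# Sketch: a short job intended for small.q (script/program names are hypothetical)
cat > short_job.sh <<'EOF'
#!/bin/bash
#$ -cwd
#$ -l h_rt=01:00:00   # request 1 hour of wall clock, well under the 72-hour cap
./my_short_analysis
EOF

# On the cluster:
#   qsub -q small.q short_job.sh
#   qconf -sq small.q    # view the full queue configuration, including limits
```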
How do I load Software on it?
We will load RPMs that are in the yum repository for the OS we are running, or you can compile code yourself in your home directory.
Put in an ITRequest ticket for known RPMs.
Are there Backups?
The /campus directory (your home directory) is rsynced daily into the /backups directory; look there to find your files. We also run ZFS snapshots of /backups: cd /backups/.zfs/snapshots to browse them (we keep 4 daily snapshots and 3 monthly snapshots).
How was it funded?
The original donation by Craigslist is almost all gone. The only pieces still in place are:
1.) Head node
2.) The file server boxes
For the rest:
SOE purchased new network switches
SOE purchased the 2 48-core boxes
PBSCI purchased the 3 64-core boxes
The 2 8-core boxes were old campus VM servers.
If you wish to contribute new hardware, please contact us.
Campusrocks has been invaluable for my bioinformatics research with marine metagenomics data. The cluster has enabled me to investigate new ways of assembling and annotating 40-50 of these large datasets, with great speed (due to fast cores, lots of memory, and parallelization) and reliable backup of scripts and results. I could not have done the same experiments in a reasonable time on my laptop, which would have been unusable for other research tasks had I tried. Finally, the cluster computing skills I have developed by working on campusrocks (my first such experience) will be essential for my bioinformatics work after graduate school. Thanks for maintaining such an important resource!
Adam Millard-Ball Assistant Professor, Environmental Studies Department
I use Campus Rocks to use the multi-core version of Stata that is installed, and for computational-intensive work in Python (usually estimation of statistical models). Let me know if you need more details.
I use the cluster for my research in medical imaging. I run simulations in a program called Geant4, which simulates particle interactions in an imaging system. Each simulation requires modeling the behavior and interactions of approximately 200 million proton events and needs at least 90 cores; there is no other resource on campus that allows me to do these simulations in a timely manner. Likewise, it is essential to my work that the cluster function efficiently, since time is of the essence. Some of the machines (namely 02 and 04) function at 1/3 to 1/5 the speed of some of the other machines, which is extremely frustrating.
I use this cluster for parallelized Monte Carlo simulations of high energy particle physics processes, especially related to dark matter. I also utilize the cluster for simulating and processing large sets of gamma-ray data in order to search for astrophysical signatures of dark matter. A significant expansion of these resources would be of great value to UCSC's research programs.
I am an undergraduate working with Dr. Camps in METX. My cluster utilization involved analyzing co-variation in cancer databases, namely cbioportal, to provide functional context clues for an orphan gene. Proper analysis requires using the entire genome as a query set, which can be computationally intensive. CampusRocks is a great resource, thanks for your work.
I am a graduate student in Scott Lokey's lab. We use the Campus Rocks cluster for running molecular dynamics simulations on virtual libraries consisting of thousands of members. While each individual simulation is fairly brief and computationally inexpensive, the numbers mandate parallelism. The campus rocks cluster provides a wonderful and free resource for running these simulations. We greatly value its functionality and will continue to use it in whatever capacity we can.
Campusrocks has been a tremendously helpful resource for me this year (I only recently joined the UCSC faculty). I have used the system to run DFT calculations and model molecular reactivity, photophysical properties, and NMR chemical shifts. In the future, as my lab continues to grow, we will also be conducting bioinformatics analyses (ChIP-seq and RNA-seq). One of my research interests is the transcription factor NF kappa B. Continued access to the cluster will play a highly important role in my research.
Hello Ted. I use the cluster for assembling genome sequence data from bacteria we isolate in extreme environments high in arsenic. Part of the research we do in my lab involves isolating and characterizing new bacterial species that can grow on the toxic metal arsenic, which occurs naturally at high levels in places like Mono Lake, CA, and other soda lakes in Nevada. The cluster is essential to the genome assembly process because I need a computer system with a lot of power. The programs I use work so much better on the cluster. It's been really nice to have this service available.