Condo computing: UAB IT offers unique matching grant program for research computing cluster

Would a share in UAB’s supercomputer help you with groundbreaking research?

UAB IT is offering UAB faculty and researchers a unique opportunity to invest in UAB's research computing infrastructure: UAB IT will provide a dollar-for-dollar match toward the purchase of compute resources. UAB's research computing cluster is one of the fastest in the state.

The matching program effectively gives researchers priority access to twice the compute resources they pay for: a $32,000 contribution, matched by UAB IT, funds two GPU nodes rather than one.

Priority access will be implemented through scheduler policies that ensure the maximum wait time for those priority compute resources does not exceed two hours.

When these resources are not in use by their owner, they will be available to all cluster users whose individual jobs take less than two hours to execute (i.e., the resources will be part of the express queue). Priority access lasts for three years, after which it expires and the resources are added to the general compute pool.
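As a rough sketch of how such a policy can be expressed in SLURM, the configuration fragment below pairs a priority partition with a two-hour express partition on the same nodes. The partition, node, and group names are hypothetical illustrations, not UAB IT's actual settings.

    # slurm.conf fragment -- all names and values below are hypothetical examples.

    # Nodes purchased through the matching program (hypothetical node names)
    NodeName=c0101,c0102 CPUs=28 RealMemory=257000 Gres=gpu:p100:4 State=UNKNOWN

    # Priority partition for the purchasing lab: a higher PriorityTier makes the
    # scheduler start these jobs ahead of anything queued in lower-tier partitions.
    PartitionName=condo-lab1 Nodes=c0101,c0102 AllowGroups=lab1 MaxTime=UNLIMITED PriorityTier=10

    # Express partition open to all users on the same nodes; because express jobs
    # are capped at two hours, an owner's job waits at most two hours for a node.
    PartitionName=express Nodes=c0101,c0102 AllowGroups=ALL MaxTime=02:00:00 PriorityTier=1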

Because a limited amount of funding is set aside for this matching program, requests are reviewed on a first-come, first-served basis and approved based on the matching amount and the specific needs motivating the purchase.

The matching program for the 2016-2017 fiscal year has $150,000 available and is restricted to specific hardware configurations; only minor variations (e.g., additional RAM) are allowed. The currently supported configurations are:

  • Two Intel Xeon E5-2680 v4 2.4GHz CPUs (28 cores total) and four NVIDIA Tesla P100 16GB GPUs without NVLink; 256 GB RAM; EDR InfiniBand - $32,000
  • Two Intel Xeon E5-2680 v4 2.4GHz CPUs (28 cores total) and four NVIDIA Tesla P100 16GB GPUs with NVLink; 256 GB RAM; EDR InfiniBand - $38,000

The purchased resources will be operated and supported by UAB IT as a standard part of the cluster. All existing procedures and policies regarding access to and usage of the cluster remain the same. These resources will be accessed through the existing job scheduler and workload manager (SLURM). Advance reservation of these resources will be available as needed.
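To illustrate what day-to-day use might look like, the batch script below requests one of the GPU nodes through SLURM. The partition name, memory request, and executable are hypothetical placeholders; researchers would substitute whatever names UAB IT assigns.

    #!/bin/bash
    # Example SLURM batch script -- partition name and executable are hypothetical.
    #SBATCH --job-name=gpu-example
    #SBATCH --partition=condo-lab1   # priority partition for the purchased nodes
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=28       # all 28 cores of the dual E5-2680 v4 node
    #SBATCH --gres=gpu:4             # all four Tesla P100 GPUs
    #SBATCH --mem=240G               # leave headroom out of the 256 GB of RAM
    #SBATCH --time=24:00:00

    # Launch the application (hypothetical executable name)
    ./my_gpu_application

Advance reservations of the same nodes can be created by cluster administrators through SLURM's scontrol create reservation mechanism.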