The High Performance Computing (HPC) facility on the Storrs campus serves all of UConn's researchers, with a focus on tightly-coupled computational problems capable of scaling from a single computer to hundreds of compute nodes. Provided by University Information Technology Services (UITS), the Storrs HPC facility supports the institutional mission for excellence in research and initiatives for the UConn Technology Park, Next Generation Connecticut, and Bioscience Connecticut. Cluster usage has grown dramatically each year since the cluster was deployed in 2011.
Open access: All members of the UConn research community are welcome to use the cluster’s resources at no cost. Access is provided through a prioritized queuing system.
High priority access: High priority access is available under a “condo model”: faculty researchers fund the capital equipment costs of individual compute nodes, while the university funds the operating costs of running those nodes for five years. In return, faculty who purchase compute nodes receive access to equivalent resources at a higher priority than other researchers, and they can extend that same priority to designated members of their groups, such as graduate students and postdoctoral researchers. Priority jobs are moved to the front of the queuing system and are guaranteed to begin execution within twelve hours, and a priority user can utilize their resources indefinitely. All access to resources is managed through the cluster’s job scheduler; when a priority user is not using their assigned nodes, the scheduler makes those nodes available to all UConn researchers for general use.
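As a concrete illustration, a batch job under this kind of scheduler-managed priority system might be submitted with a script like the one below. This is a minimal sketch assuming a SLURM-based scheduler; the partition name, account name, and application binary are placeholders, not the cluster's actual configuration, which is assigned by the HPC administrators.

```shell
#!/bin/bash
#SBATCH --partition=priority      # hypothetical high-priority partition for condo owners
#SBATCH --account=mygroup         # hypothetical condo account name
#SBATCH --nodes=2                 # request two compute nodes
#SBATCH --ntasks-per-node=24     # one MPI rank per core (placeholder core count)
#SBATCH --time=12:00:00           # wall-clock limit for the job

# Launch a tightly-coupled MPI application across the allocated nodes
srun ./my_mpi_application
```

Such a script would be submitted with `sbatch job.sh`; the scheduler then places jobs charged to a priority account ahead of open-access jobs when contending for the owner's nodes.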
Data storage options: The Storrs HPC cluster offers several data storage options to meet various needs. There is a high-speed scratch file system, which allows parallel file writing from all compute nodes. All users also get a persistent home directory, and groups of users can request private shared folders. Once data is no longer needed for computation, it should be transferred off the cluster to a permanent data storage location. To meet this need, the university offers a data archival service with over three petabytes of capacity. Data transfer to permanent locations should be done via the web-based Globus service. For more information about data storage options, please refer to our Data Storage Guide.
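Although the document recommends the web-based Globus interface, the same transfer can be sketched with the Globus command-line client for users who prefer scripting. The endpoint UUIDs and paths below are placeholders for illustration only, not the cluster's actual endpoints.

```shell
# Authenticate with Globus (opens a browser window for login)
globus login

# Placeholder endpoint UUIDs: the cluster's scratch file system and the archive
SRC_EP="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
DST_EP="11111111-2222-3333-4444-555555555555"

# Asynchronously transfer a results directory off the scratch file system
# to permanent archival storage; Globus handles retries and verification.
globus transfer --recursive \
    "$SRC_EP:/scratch/myuser/results" \
    "$DST_EP:/archive/mygroup/results"
```

The transfer runs server-side once submitted, so the user's own machine does not need to stay connected while large data sets move to the archive.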