Proposal Support

This boilerplate language was written for researchers to use in grant proposals involving ACISS, the University of Oregon's shared HPC cluster. To request other boilerplate language not included here (e.g., network capabilities), please contact acissmanagement@uoregon.edu. (Researchers should feel free to use only the portions of this text that are relevant; it is not necessary to use the entire text.)

Letter of Support

ACISS letter of support (.doc)

Facilities, Equipment, and Other Resources

Applied Computational Instrument for Scientific Synthesis (ACISS)

The ACISS high-performance computing facility represents the University of Oregon's commitment to centralized research computing. The center consists of 195 computational nodes with a total of 2672 conventional processor cores, 156 general-purpose graphical processing units (GPUs), and 19 TB of memory. The nodes are connected by a 10 Gb Ethernet switch and supported by a 400 TB parallel NAS file system. In addition to the core ACISS facility, the UO College of Arts and Sciences provides a group of computational and graphics professionals to support UO faculty and students involved in research and educational pursuits at the university.

ACISS is a research computing cluster built as a heterogeneous platform to address requirements from a diverse set of research applications in multiple scientific disciplines. A unique feature of ACISS is the management of a portion of its resources as a cloud system for computational science, informatics, and data science. Groups that use the system configure their own virtual machine images and carry out their research work as if they had their own dedicated cluster.  In addition to the cloud resources, ACISS is configured into three separate subsystems to serve the diverse computational requirements of users of the facility.

Researchers will use ACISS, a shared high-performance computing cluster operated by the College of Arts and Sciences at the University of Oregon. ACISS is a Linux-based cluster consisting of 196 nodes and more than 2600 cores. ACISS is available to researchers in several configurations: standard ACISS nodes consist of two 2.67 GHz Intel Xeon X5650 processors and contain 72 GB of RAM per node (6 GB/core); large-memory nodes employ 2.27 GHz Intel Xeon X7560 processors and provide 384 GB of RAM per node (12 GB/core); and GPU (graphics processing unit) nodes are standard nodes that each contain three NVIDIA M2070 GPUs, with each GPU providing 448 CUDA cores and 5.5 GB of memory. An additional service, the ACISS Operating Environment, allows researchers to add their own compute equipment and house it with ACISS in order to take advantage of existing infrastructure, software, and staff resources.

Nodes are connected to each other and to 400 TB of IBRIX parallel storage via a Voltaire Vantage 8500 10 Gb Ethernet switch. A gigabit Ethernet network, used for node management, also connects all of the nodes. In addition to various compilers, debuggers, profilers, and OpenMPI, ACISS supports a wide range of application software.
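
As an illustration of this software stack, the short sketch below shows the kind of MPI "hello world" program a researcher might run across ACISS cores. It assumes Python and the mpi4py package are available alongside OpenMPI; the package and launch command are examples only, not a statement of what is installed on ACISS.

    # hello_mpi.py -- minimal MPI sketch (assumes mpi4py is available on top of OpenMPI)
    from mpi4py import MPI

    comm = MPI.COMM_WORLD              # communicator spanning all launched ranks
    rank = comm.Get_rank()             # this process's rank (0 .. size-1)
    size = comm.Get_size()             # total number of ranks
    node = MPI.Get_processor_name()    # hostname of the node running this rank

    print(f"Hello from rank {rank} of {size} on {node}")

Such a program could be launched with, for example, mpiexec -n 12 python hello_mpi.py to use all 12 cores of a standard node.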

Computing jobs on ACISS are managed through a combination of the Maui Scheduler and the Terascale Open-Source Resource and QUEue Manager (Torque).
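
For illustration only, the sketch below shows one way a researcher might generate a Torque batch script and submit it with qsub from Python. The resource request, module name, and application binary are placeholders and should be replaced according to the current ACISS usage policy.

    # submit_job.py -- sketch of writing a Torque batch script and submitting it with qsub.
    # The module name and application binary below are placeholders.
    import subprocess
    import tempfile

    # Request one standard node (12 cores) for one hour, using standard Torque/PBS directives.
    job_script = """#!/bin/bash
    #PBS -N example_job
    #PBS -l nodes=1:ppn=12
    #PBS -l walltime=01:00:00
    #PBS -j oe

    cd $PBS_O_WORKDIR
    module load openmpi            # placeholder module name
    mpiexec -n 12 ./my_mpi_app     # placeholder application binary
    """

    # Write the script to a temporary file and hand it to qsub, which prints the job id.
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(job_script)
        path = f.name

    result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    print("Submitted job:", result.stdout.strip())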

ACISS is housed in the UO Computing Center (UOCC). The high efficiency of the UOCC's cooling system has reduced energy expenses and lowered the cost of ACISS allocations. The UOCC also has generators and batteries to provide sufficient power for a graceful shutdown of systems if needed.

Basic nodes. The basic computational configuration of ACISS consists of 128 nodes, each with two Intel X5650 2.66 GHz 6-core CPUs (12 cores per node) and 72 GB of memory.

Fat nodes. To serve applications with large memory footprints, ACISS provides 16 fat nodes, each consisting of four Intel X7560 2.26 GHz 8-core CPUs (32 cores per node) and 384 GB of memory.

GPU nodes. To serve applications with higher computational requirements, ACISS provides 52 nodes, each consisting of two Intel X5650 2.66 GHz 6-core CPUs (12 cores per node) and 72 GB of memory. In addition, each node contains three NVIDIA M2070 GPUs (156 GPUs in aggregate).

CASIT Research Support Services (RSS)

The College of Arts and Sciences Research Support Services (RSS) provides scientific software development and maintenance support for researchers within the college. The RSS team consists of four professional staff members with extensive expertise in parallel and scientific computing and in visualization programming and research. Team members provide support for parallelizing applications; porting code from one language or platform to another; code optimization; data visualization; creating data collection, processing, and analysis pipelines; and general scientific application development. In addition, the RSS team provides basic training for faculty and students in scientific high-performance programming and computing and in running and managing tasks and data flow on the ACISS facility.

Budget Justifications

This example is written for four compute nodes (see the last sentence) at a budget of $40,000; modify it as needed.

The ACISS High Performance Computing Cluster is the community-based, interdisciplinary core facility for high-performance computing available to all researchers at the University of Oregon. Started in 2009 by more than 10 researchers from multiple departments and research centers, it is supported by faculty contributions, federal grants, and the University administration. More information on ACISS is available at aciss-computing.uoregon.edu. Installed in Fall 2011, ACISS is a 197-node distributed-memory cluster consisting of 128 basic compute nodes, 52 GPU nodes, 16 fat nodes, and 1 login node. Each basic and GPU node has two 6-core Intel Nehalem processors, 72 GB of memory, and a 500 GB local hard drive, whereas each fat node has four 8-core Intel Nehalem processors, 384 GB of memory, and a 500 GB local hard drive. Access to the system is via a login node with 96 GB of memory and a 500 GB hard drive. A central 400 TB parallel storage solution is provided. All components are connected by a 10 Gb Ethernet interconnect that provides high bandwidth for communication in parallel programs. The cluster offers standard scientific software, including C, C++, and FORTRAN compilers and numerical libraries, as well as MATLAB, Mathematica, and many other applications. Several implementations of the MPI libraries are available. System administration is provided by CASIT (the College of Arts and Sciences Information Technology Support Services). Users have access to consulting support provided by a dedicated full-time GRA. CASIT is committed to keeping the resources in the ACISS High Performance Computing Facility current and plans regular future upgrades. For the dedicated priority access necessary for long-term jobs, users must contribute $10,000 per basic node to the system's funding, as stipulated in the usage policy on the ACISS webpage. For this purpose, we budget the purchase of four (4) compute nodes in this proposal to give our project the needed priority.

Links