Farber (farber.hpc.udel.edu)

The Farber cluster, UD's second Community Cluster, was deployed in 2014 and is a distributed-memory Linux cluster. It consists of 100 compute nodes (2000 cores, 6.4 TB memory). The nodes are built around Intel “Ivy Bridge” 10-core processors in a dual-socket configuration, for 20 cores per node. An FDR InfiniBand network fabric supports the Lustre filesystem (approximately 256 TiB of usable space). Gigabit and 10-Gigabit Ethernet networks provide access to additional filesystems and the campus network. The cluster was purchased with a proposed four-year life, putting its retirement in the July 2018 to September 2018 time frame.
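For reference, the aggregate core and memory figures follow directly from the per-node configuration. The short Python sketch below shows the arithmetic; note that the 64 GB per-node memory value is inferred from the 6.4 TB total rather than quoted from the hardware specification.

    # Sketch: derive the aggregate core and memory figures from the
    # per-node configuration described above.
    NODES = 100
    CORES_PER_NODE = 20        # dual-socket, 10-core "Ivy Bridge" CPUs
    MEM_PER_NODE_GB = 64       # assumption: 6.4 TB total / 100 nodes

    total_cores = NODES * CORES_PER_NODE
    total_mem_tb = NODES * MEM_PER_NODE_GB / 1000.0

    print("%d cores, %.1f TB memory" % (total_cores, total_mem_tb))
    # prints: 2000 cores, 6.4 TB memory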

The cluster has been named in honor of David Farber, UD professor and Distinguished Policy Fellow in the Department of Electrical and Computer Engineering. Farber is one of the pioneers who helped develop the U.S. Department of Defense’s ARPANET into the modern Internet. His work on CSNET, a network linking computer science departments across the globe, was a key step between the ARPANET and today’s Internet. Today, Farber’s work focuses on the translation of technology and economics into policy, and on the impact of multi-terabit communications and new computer-architecture innovations on future Internet protocols and architectures. He was named to the Internet Society’s board of trustees in 2012.

Attributes

  • classified as a compute cluster
  • database has records for 193 unique nodes
  • hardware platform is x86_64
  • uses Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz processors
  • running CentOS
    • release 6.5
    • kernel 2.6.32-431.23.3.el6.x86_64
  • monitored by Ganglia, with status accessible via the web (see the sketch below for reading these attributes on a node)
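A minimal sketch of how the attributes listed above could be read on a compute node (Python, assuming a Linux host). This is illustrative only; it is not the actual inventory tooling, and ongoing monitoring is handled by Ganglia.

    # Sketch: read the platform, kernel, CPU model, and OS release on a
    # Linux node. /proc/cpuinfo and /etc/redhat-release are standard
    # locations on CentOS 6 systems.
    import platform

    def cpu_model():
        # first "model name" line from /proc/cpuinfo, e.g.
        # "Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz"
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("model name"):
                    return line.split(":", 1)[1].strip()
        return "unknown"

    print("platform: " + platform.machine())   # x86_64
    print("kernel:   " + platform.release())   # 2.6.32-431.23.3.el6.x86_64
    print("cpu:      " + cpu_model())
    with open("/etc/redhat-release") as f:     # CentOS release string
        print("os:       " + f.read().strip())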

Milestones

  • January 18, 2013: Initial planning of machine purchase begins.
  • May 1, 2014: Purchase of machine is finalized.
  • June 25, 2014: Machine ships to UD.
  • June 27, 2014: Machine arrives at UD.
  • August 5, 2014: Machine is integrated into campus network and powered on.
  • September 16, 2014: Machine is opened to end-users.

Nodes Disabled for Maintenance

The following table lists all nodes that are online but not accepting jobs. This usually happens when IT is troubleshooting a problematic node.

node name    last change of status
n181         October 20, 2017 14:00:03 EDT