Architecture

System specification for DIaL2.5 (Data Intensive at Leicester).

Hardware

  • 2 management nodes in a high-availability configuration, with management and job-scheduling services failing over between them.
  • 100 Gb/s EDR InfiniBand network with 2:1 blocking in a fat-tree topology.
  • 3.0 PB Intel Lustre file system.
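
The Lustre file system can be inspected with the standard lfs client tools. The commands below are a minimal sketch; the mount point /lustre is an assumption for illustration, since the actual path is not given here.

    # Show capacity and usage of the Lustre file system
    lfs df -h /lustre

    # Show your usage against any per-user quota (mount point is an assumption)
    lfs quota -u $USER /lustre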

400 Skylake compute nodes

  • Processor: 2 × 18-core Intel Skylake CPUs (Xeon 6140, 2.3 GHz)
  • Memory: 192 GB RAM
  • Local storage: 240 GB SSD
  • Network: 100 Gb/s EDR InfiniBand (2:1 blocking)
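
Each compute node provides 2 × 18 = 36 cores, so whole-node MPI jobs under the Torque/Moab scheduler would typically request multiples of 36 processes. The batch script below is a minimal sketch; the queue name, walltime, and executable name are illustrative assumptions rather than the actual DIaL2.5 configuration.

    #!/bin/bash
    # Illustrative Torque/Moab batch script (queue name and limits are assumptions)
    #PBS -N skylake_mpi_job
    #PBS -l nodes=2:ppn=36           # two whole compute nodes, 36 cores each
    #PBS -l walltime=01:00:00        # requested wall-clock time
    #PBS -q workq                    # hypothetical queue name

    cd $PBS_O_WORKDIR                # Torque starts jobs in the home directory by default
    mpirun -np 72 ./my_mpi_app       # 2 nodes x 36 cores = 72 MPI ranks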

3 Skylake large-memory nodes

  • Processor: 2 × 18-core Intel Skylake CPUs (Xeon 6140, 2.3 GHz)
  • Memory: 1.5 TB RAM
  • Local storage: 240 GB SSD
  • Network: 100 Gb/s EDR InfiniBand (2:1 blocking)

Superdome (UV successor) node

  • Processor: 8 × 18-core Intel Skylake CPUs (Xeon 6154, 3.7 GHz)
  • Memory: 6 TB RAM
  • Local storage: 600 GB SSD
  • Network: 100 Gb/s EDR InfiniBand (2:1 blocking)
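
With 8 × 18 = 144 cores and 6 TB of memory in a single system image, the Superdome node is aimed at large shared-memory or threaded workloads. The sketch below shows one plausible way to request it through Torque; the queue name and memory request are assumptions for illustration only.

    #!/bin/bash
    # Illustrative large shared-memory job (queue name and memory figure are assumptions)
    #PBS -N superdome_openmp_job
    #PBS -l nodes=1:ppn=144          # all 144 cores of the Superdome node
    #PBS -l mem=2000gb               # hypothetical memory request, within the 6 TB available
    #PBS -l walltime=06:00:00
    #PBS -q superdome                # hypothetical queue name

    cd $PBS_O_WORKDIR
    export OMP_NUM_THREADS=144       # one OpenMP thread per core
    ./my_openmp_app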

2 login nodes

  • Processor: 2 × 18-core Intel Skylake CPUs (Xeon 6140, 2.6 GHz)
  • Memory: 192 GB RAM
  • Local storage: 300 GB SSD
  • Network: 100 Gb/s EDR InfiniBand
  • External network connection: 10 Gb/s

Software

  • CentOS 7
  • Intel Cluster Studio XE compilers and libraries
  • Moab workload manager with the Torque resource manager for job scheduling
  • Bright Cluster Manager
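
On a software stack like this, the Intel compilers and MPI are usually exposed through environment modules, and jobs are submitted to Torque/Moab with qsub. The snippet below is a minimal sketch; the module names are assumptions and may differ on the actual system.

    # Load the Intel compiler and MPI environment (module names are assumptions)
    module load intel/compilers intel/mpi

    # Compile an MPI code with the Intel MPI wrapper around icc
    mpiicc -O2 -xCORE-AVX512 my_mpi_app.c -o my_mpi_app

    # Submit the batch script and check its status
    qsub job.pbs
    qstat -u $USER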