DANTE cluster details

  • Master nodes: SR665 (2U) with
    • 2 AMD EPYC 7252 8C@3.1GHz, 128GB RAM
    • 5 SAS 1.8TB 10K drives, 1 RAID 930-8i card (2GB cache)
    • 1 Mellanox InfiniBand HDR 100Gb/s dual-port card, 4x 1G ports
  • 2 CPU front-end (login) nodes: SR645 with
    • AMD EPYC Rome 7302 16C@3GHz, 256GB RAM
    • RAID 930-8i card (2GB cache), 2x 480GB SSDs
    • Mellanox InfiniBand HDR 100Gb/s card, 4x 1G ports, 1x 25G port
  • 1 GPU front-end (login) node: SR650 (2U) with
    • 2 Intel 6226R 16C@2.9GHz 150W, 192GB RAM
    • 1 NVIDIA A100 40GB GPU card
    • 2x 480GB SSDs, 1 RAID 930-8i card (2GB cache)
    • 1 Mellanox InfiniBand HDR dual-port 100Gb/s card, 2x 1G ports, 1x 25G port
  • 56 compute nodes: SR645 with
    • 2 AMD EPYC Rome (Zen2) 7H12 64C@2.6GHz
    • 1 Mellanox InfiniBand HDR 100Gb/s card, 4x 1G ports
    • 42 nodes with 256GB of RAM, 14 with 512GB

        298.2 Tflops peak, 7168 cores

  • 3 GPU nodes (2U): SR670 with
    • 2 Intel 6226R CPUs 16C@2.9GHz 150W, 384GB RAM
    • 4 NVIDIA A100 40GB GPU cards
    • 5x 1.92TB SSDs, 1 RAID 930-8i card (2GB cache)
    • 1 Mellanox InfiniBand HDR 100Gb/s card, 2x 1G ports
  • Parallel Storage
    • 1 PB usable, 5GB/s write and 7GB/s read
    • 2 NSD servers (SR665, 2U) running Red Hat
    • 1 DE6000H array (4U) with 80x 16TB disks (8+2P)
  • Network
    • Computing
      • 2 InfiniBand HDR switches (Mellanox QM8790), 80x 100 Gb/s ports each, interconnected by 4x 200 Gb/s fiber links
      • 16Tb/s cumulative bandwidth
      • Up to 15.8 billion messages per second
      • Latency of 130ns
    • System administration
      • Level 1: 2 Mellanox AS4610-54T switches (48x 10/100/1000BASE-T ports, 4x 10G SFP+ uplinks)
      • Level 2 (network core): 2 Mellanox SN2010 switches (18x 10G/25G ports, 4x 100G ports)
    • 18 manageable PDUs
  • Environment
    • xCAT (deployment), Slurm (scheduler), Ganglia and Nagios (monitoring)
    • RHEL 8.2
    • xClarity (remote control)
    • LeSI support (7 years)
    • Intel Parallel Studio XE Cluster Edition for Linux (5 tokens) for 3 years
    • AMD scientific libraries optimized for the EPYC architecture
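As a quick consistency check on the headline figures above, the core count, peak FP64 throughput, and usable parallel-storage capacity can be recomputed in the shell (assuming Zen2's 16 double-precision FLOPs per core per cycle, and an 8/10 usable ratio for the 8+2P scheme):

```shell
# Compute partition: 56 nodes x 2 sockets x 64 Zen2 cores each.
cores=$((56 * 2 * 64))
echo "cores: $cores"                      # matches the 7168 quoted above

# Peak FP64: each Zen2 core retires 16 DP FLOPs/cycle (2 x 256-bit FMA units).
awk -v c="$cores" 'BEGIN { printf "peak: %.1f Tflops\n", c * 2.6e9 * 16 / 1e12 }'
# 7168 cores x 2.6 GHz x 16 = 298.2 Tflops, matching the spec

# Parallel storage: 80 x 16 TB disks, 8+2P erasure coding -> 8/10 usable.
echo "usable: $((80 * 16 * 8 / 10)) TB"   # 1024 TB, i.e. the quoted ~1 PB
```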

Kept from the previous configuration:

  • 1 NetApp FAS3140 array with 221 TB usable
  • 4 PowerEdge 720 GPU nodes with
    • 2 Intel Xeon E5-2650 8C@2.0GHz, 64GB RAM
    • 2x 2TB Near-Line SAS 7.2K drives in RAID 0
    • 2 NVIDIA Tesla K20m PCI-E cards (2496 cores, 5GB)
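Since Slurm is the cluster's scheduler, a job on one of the 128-core SR645 compute nodes would be submitted with a batch script along the following lines. This is only a sketch: the job name, time limit, and application binary are hypothetical, and no partition is set because partition names are not given in the spec.

```shell
# Write a minimal Slurm batch script (illustrative only; names are hypothetical).
cat > job.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128    # 2 x 64-core EPYC 7H12 per SR645 node
#SBATCH --mem=0                  # request the node's full memory (256 or 512 GB)
#SBATCH --time=01:00:00
srun ./my_mpi_app                # hypothetical MPI application
EOF
echo "wrote job.sbatch"          # submit with: sbatch job.sbatch
```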