Hardware

(Updated January 2024)

The HPC Infrastructure 

Cineca is one of the Large Scale Facilities in Europe and is a EuroHPC-JU Tier-0 hosting site.

  • LEONARDO: the pre-exascale Tier-0 EuroHPC supercomputer, ranked 6th in the Top500 list. It is hosted by Cineca and installed at the Bologna Technopole, and it was supplied by ATOS with two main partitions: the Booster Module and the Data-centric Module. Leonardo entered production in the summer of 2023 with the Booster Module; the Data-centric Module entered production in February 2024.
  • MARCONI: the Tier-0 system that replaced FERMI in July 2016. It is based on the LENOVO NeXtScale platform and Intel Xeon processors, and it has been gradually upgraded since June 2016. The current configuration consists of Marconi-A3 with SkyLake nodes (in production since August 2017, expanded in January 2018 and completed in November 2018). In an earlier configuration phase Marconi had two additional partitions: Marconi-A1 with Intel Broadwell (in production from July 2016, closed in September 2018) and Marconi-A2 with Intel Knights Landing (in production from January 2017, closed in January 2020). Marconi has been classified among the most powerful supercomputers in the Top500 list: ranked 12th in November 2016 and 19th in November 2018.
  • DGX: an NVIDIA A100 accelerated system, available since January 2021 and particularly suitable for Deep Learning frameworks. It is a 3-node AMD-based system equipped with 8 NVIDIA A100 Tensor Core GPUs per node (see the sketch after this list).
  • GALILEO100: our Tier-1 infrastructure for scientific research, co-funded by the European ICEI (Interactive Computing e-Infrastructure) project and engineered by DELL. It has been available to Italian public and industrial researchers since August 2021 (in pre-production); full production started in October 2021.
  • ADA CLOUD: the CINECA HPC cloud service, renewed in September 2021 with Intel Xeon Platinum 8260 (CascadeLake) nodes, completing the HPC ecosystem.
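Since the DGX bullet above mentions Deep Learning frameworks, here is a minimal sketch of how a job could check which GPUs it actually sees. It assumes a CUDA-enabled PyTorch build is available in the environment, which this page does not guarantee:

```python
import torch  # assumption: a CUDA-enabled PyTorch build is installed

# On a full DGX node the table below lists 8 A100 GPUs; a job may see
# fewer, depending on the resources requested from the scheduler.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))
else:
    print("no CUDA device visible to this process")
```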

 

| System | CPU (model, clock, cores) | Total cores / Total nodes | Memory per node | Accelerator |
|---|---|---|---|---|
| LEONARDO (Booster) | Intel IceLake, Intel Xeon Platinum 8358 @ 2.6 GHz, 32 cores | 32*3456 / 3456 | 512 GB DDR4 3200 MHz | 4x NVIDIA Ampere GPUs per node, 64 GB HBM2 |
| LEONARDO (DCGP) | Intel Sapphire Rapids, 2x Intel Xeon Platinum 8480+ @ 2.0 GHz, 56 cores each | 112*1536 / 1536 | 512 GB DDR5 4800 MHz | - |
| MARCONI-A3 | Intel SkyLake, 2x Intel Xeon Platinum 8160 @ 2.1 GHz, 24 cores each | 48*3216 / 3216 | 192 GB | - |
| DGX | 2x AMD Rome 7742 @ 2.6 GHz, 64 cores each (2-way SMT) | 384 / 3 | 980 GB | 8x NVIDIA A100 Tensor Core GPUs per node, NVLink 3.0, 80 GB |
| GALILEO100 | Intel CascadeLake, 2x Intel Xeon Platinum 8260 @ 2.4 GHz, 24 cores each | 48*554 / 554 | 384 GB / 3.0 TB | 34 nodes with 2x NVIDIA V100 per node, PCIe3 |
| ADA CLOUD | Intel CascadeLake, 2x Intel Xeon Platinum 8260 @ 2.4 GHz, 24 cores each | 48*2*68 / 68 | 768 GB | - |
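To make the core-count column concrete: the "cores per node * nodes" shorthand expands to the total core count, e.g. 32*3456 = 110,592 cores for the LEONARDO Booster. A small illustrative Python sketch, with figures copied from the table above:

```python
# (cores per node, node count), copied from the table above
systems = {
    "LEONARDO (Booster)": (32, 3456),
    "LEONARDO (DCGP)": (112, 1536),
    "MARCONI-A3": (48, 3216),
    "GALILEO100": (48, 554),
}

for name, (cores_per_node, nodes) in systems.items():
    print(f"{name}: {cores_per_node} x {nodes} = {cores_per_node * nodes} cores")
```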

The Data Storage Facility

  • Scratch: each system has its own local scratch area (pointed to by the $CINECA_SCRATCH environment variable)
  • Work: each system has its own local working storage area (pointed to by the $WORK environment variable)
  • DRes: a shared storage area for Long Term Archive, mounted on the login nodes of all machines (pointed to by the $DRES environment variable); a sketch showing how to read these variables follows the table below
  • Tape: a tape library (12 PB, expandable to 16 PB) is connected to the DRES storage area as a multi-level archive (via LTFS)
| System | Scratch (local) | Work (local) | DRes (shared) | Tape (shared) |
|---|---|---|---|---|
| MARCONI | 2.2 PB | 5.9 PB | 6.5 PB | 20 PB |
| GALILEO100 | tbd | tbd | | |
| MARCONI100 | 1.8 PB | 2.3 PB | | |
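As referenced in the storage list above, a minimal sketch of how these areas can be located from a login shell, using the documented environment variables (whether each variable is set depends on the system and, for $DRES, on being on a login node):

```python
import os

# Storage-area variables documented above; values are site-defined paths.
for var in ("CINECA_SCRATCH", "WORK", "DRES"):
    path = os.environ.get(var)
    if path:
        print(f"${var} -> {path}")
    else:
        print(f"${var} is not set on this system")
```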

 

Old HPC Infrastructure

  • MARCONI100: the accelerated Marconi partition, available since April 2020. It was an IBM system equipped with NVIDIA Volta V100 GPUs, opening the way to the accelerated pre-exascale Leonardo supercomputer. Its production stopped in July 2023.
  • D.A.V.I.D.E. (Development of an Added Value Infrastructure Designed in Europe): an energy-aware, high-performance cluster based on OpenPOWER8 servers and NVIDIA Tesla P100 data center GPUs. It entered the Top500 and Green500 lists in June 2017 and was in full production from January 2018 to January 2020.
  • FERMI: a Tier-0 system from June 2012 to July 2016, named after the famous Italian physicist. It was an IBM Blue Gene/Q supercomputer, classified among the most powerful supercomputers in the Top500 list (ranked 7th in June 2012); in the same month it was also ranked 11th in the Green500 list of energy-efficient supercomputers. FERMI was taken out of production on July 18, 2016, and replaced by the MARCONI system.
  • GALILEO: the second system (Tier-1), an IBM NeXtScale cluster accelerated with Intel Xeon Phi coprocessors, which replaced the old PLX system. It is named after the Italian physicist and philosopher who played a major role in the scientific revolution during the Renaissance. Galileo was in full production from February 2015 to November 20, 2017. Starting from January 2018 it was reconfigured with Intel Xeon E5-2697 v4 (Broadwell) nodes inherited from the MARCONI system, and from March 2018 it was back in production for the Italian research community. Starting from March 2021 it was gradually turned off to make room for the more performant GALILEO100 infrastructure.
  • PICO: built on Intel NeXtScale servers designed to optimize density and performance, driving a large data repository shared among all the HPC systems in CINECA. It was used for "BigData" classes of applications, meeting their particular hardware requirements (large memory per node, massive storage equipment and sharing, fast data access and transfer, etc.) and providing the software tools and high-throughput technologies needed by data-oriented projects, such as an accelerated visualization environment, cloud computing, Hadoop, etc.
| System | CPU (model, clock, cores) | Total cores / Total nodes | Memory per node | Accelerator | Notes |
|---|---|---|---|---|---|
| FERMI | IBM PowerA2 @ 1.6 GHz, 16 cores each | 163,840 / 10,240 | 16 GB | - | - |
| GALILEO | Intel Haswell, 2x Intel Xeon E5-2630 v3 @ 2.4 GHz, 8 cores each | 8,384 / 524 | 128 GB | 768 Intel Phi 7120P + 20 NVIDIA K80 | 8 nodes devoted to visualization |
| GALILEO2 (from 2018) | Intel Broadwell, 2x Intel Xeon E5-2697 v4 @ 2.3 GHz, 18 cores each | 36,792 / 1,022 | 128 GB | 60 nodes with NVIDIA K80 GPUs + 2 nodes with NVIDIA V100 GPUs | - |
| D.A.V.I.D.E. | OpenPOWER8 @ 2 GHz, 16 cores each | 720 / 45 | - | NVIDIA Tesla P100 SXM2 | 2 visualization nodes, 2 BigMem nodes and 4 BigInsight nodes |
| PICO | Intel Xeon E5-2670 v2 @ 2.5 GHz | 1,456 / 74 | - | NVIDIA K20 | - |
| MARCONI-A1 | Intel Broadwell, 2x Intel Xeon E5-2697 v4 @ 2.3 GHz, 18 cores each | - | 128 GB | - | switched off September 2018 |
| MARCONI-A2 | Intel Knights Landing, 1x Intel Xeon Phi 7250 @ 1.4 GHz, 68 cores each | 244,800 / 3,600 | 96 GB | - | switched off January 2020 |
| MARCONI100 | IBM Power9 AC922 @ 3.1 GHz, 32 cores each (4-way SMT) | 32*980 / 980 | 256 GB | 4x NVIDIA Volta V100 GPUs per node, NVLink 2.0, 16 GB | switched off July 2023 |