(Updated April 2020)

The HPC Infrastructure 

Cineca is currently one of the Large Scale Facilities in Europe and it is a PRACE Tier-0 hosting site.

  • MARCONI: the Tier-0 system that replaced FERMI in July 2016. It is based on the LENOVO NeXtScale platform and has been gradually upgraded since June 2016. The current configuration consists of Marconi-A3 with Intel SkyLake processors (in production since August 2017, upgraded in January 2018 and completed in November 2018). In previous configuration phases, Marconi had two additional partitions: Marconi-A1 with Intel Broadwell (in production from July 2016 to September 2018) and Marconi-A2 with Intel Knights Landing, of the Intel Xeon Phi product family (in production from January 2017 to January 2020). Marconi is classified in the Top500 list of the most powerful supercomputers: 12th in November 2016 and 19th in November 2018.
  • MARCONI100: the new accelerated, non-conventional Marconi partition, available since April 2020. It is an IBM system equipped with NVIDIA Volta V100 GPUs, opening the way to the accelerated pre-exascale Leonardo supercomputer.
  • DGX: the NVIDIA A100 accelerated system, available since January 2021 and particularly suitable for Deep Learning frameworks. It is a 3-node AMD-based system equipped with 8 NVIDIA A100 Tensor Core GPUs per node.
  • GALILEO: renewed in March 2018 with Intel Xeon E5-2697 v4 (Broadwell) nodes, available to the Italian research community.
  • CLOUD.HPC: the CINECA cloud service, renewed in April 2020 with Intel Xeon E5-2697 v4 (Broadwell) nodes, completing the HPC ecosystem.
System      CPU                                 Total cores / Total nodes          Memory per node  Accelerator                                         Notes
MARCONI     Intel SkyLake                       72576+38016+43776 / 1512+792+912   192 GB           -
            2x Intel Xeon 8160, 24 cores each
MARCONI100  IBM Power9 AC922                    31360 / 980                        256 GB           4x NVIDIA Volta V100 GPUs, NVLink 2.0, 16 GB
            32 cores (4-way HT) each
DGX         AMD 2x Rome 7742                    384 / 3                            980 GB           8x NVIDIA A100 Tensor Core GPUs, NVLink 3.0, 80 GB
            64 cores (2-way SMT) each
GALILEO     Intel Broadwell                     36792 / 1022                       128 GB           -
            2x Intel Xeon E5-2697 v4 @2.3GHz,
            18 cores each
CLOUD.HPC   Intel Broadwell                     2880 / 80                          256 GB           -
            2x Intel Xeon E5-2697 v4 @2.3GHz,
            18 cores each
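
As a quick consistency check, the "Total cores" figures above follow directly from cores per node times number of nodes. The following minimal Python sketch (purely illustrative, not an official CINECA script; the cores-per-node and node counts are simply copied from the table) re-derives the totals:

    # Re-derive the "Total cores / Total nodes" column of the table above.
    systems = {
        # name: (cores_per_node, node counts per partition)
        "MARCONI (SkyLake)": (2 * 24, [1512, 792, 912]),  # 2x Xeon 8160, 24 cores each
        "MARCONI100":        (32,     [980]),             # Power9 AC922, 32 cores per node
        "DGX":               (2 * 64, [3]),               # 2x AMD Rome 7742, 64 cores each
        "GALILEO":           (2 * 18, [1022]),            # 2x Xeon E5-2697 v4, 18 cores each
        "CLOUD.HPC":         (2 * 18, [80]),
    }

    for name, (cores_per_node, nodes) in systems.items():
        totals = [cores_per_node * n for n in nodes]
        print(f"{name:18s} {'+'.join(map(str, totals)):>20s} cores / {sum(nodes)} nodes")

Running it reproduces, for example, 72576+38016+43776 cores over the 1512+792+912 SkyLake nodes of Marconi-A3.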

The Data Storage Facility

  • Scratch: each system has its own local scratch area, pointed to by the $CINECA_SCRATCH environment variable (see the short sketch after this list)
  • Work: a working storage area is mounted on the three systems, pointed to by the $WORK environment variable
  • DRes: a shared storage area is mounted on the login nodes of all machines and on all of Pico's compute nodes, pointed to by the $DRES environment variable
  • Tape: a tape library (12 PB, expandable to 16 PB) is connected to the DRES storage area as a multi-level archive (via LTFS)
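
The storage areas are normally addressed through the environment variables listed above. The following minimal Python sketch (illustrative only, not an official CINECA tool) prints which of them are defined on the current host; on a machine outside the CINECA clusters they will usually be unset:

    import os

    # Check which of the documented storage-area variables are defined here.
    for var in ("CINECA_SCRATCH", "WORK", "DRES"):
        path = os.environ.get(var)
        if path:
            print(f"${var} -> {path}")
        else:
            print(f"${var} is not defined on this host")
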
            Scratch (local)   Work (local)   DRes (shared)   Tape (shared)
MARCONI     2.2 PB            5.9 PB         6.5 PB          20 PB
GALILEO     720 TB            1800 TB



Old HPC Infrastructure

  • D.A.V.I.D.E. (Development of an Added Value Infrastructure Designed in Europe): it was an energy-aware, high-performance cluster based on OpenPOWER8 servers and NVIDIA Tesla P100 data center GPUs. It entered the Top500 and Green500 lists in June 2017. D.A.V.I.D.E. was in full production from January 2018 to January 2020.
  • FERMI: it was a Tier-0 system from June 2012 to July 2016 and got its name from the famous Italian physicist. It was an IBM BG/Q supercomputer, classified among the most powerful supercomputers in the Top500 list: 7th in June 2012. In June 2012 it was also ranked 11th in the Green500 list of energy-efficient supercomputers. FERMI was taken out of production on July 18, 2016, and replaced by the MARCONI system.
  • GALILEO: it was the second system (Tier-1), an IBM NeXtScale cluster accelerated with Intel Xeon Phi coprocessors. Galileo replaced the old PLX system. It is named after the Italian physicist and philosopher who played a major role in the scientific revolution during the Renaissance. Galileo was in full production from February 2015 to November 20, 2017.
  • PICO: it was an Intel NeXtScale cluster, designed to optimize density and performance, driving a large data repository shared among all the HPC systems in CINECA. It was used for "BigData" classes of applications, meeting their peculiar hardware requirements (large memory per node, massive storage equipment and sharing, fast data access and transfer, etc.) and providing the software tools and high-throughput technologies needed by data-oriented projects, such as an accelerated visualization environment, cloud computing, Hadoop, etc.
System        CPU                                           Total cores / Total nodes  Memory per node  Accelerator                           Notes
FERMI         PowerA2 @1.6GHz, 16 cores each                163840 / 10240             16 GB            -
GALILEO       Intel Haswell, 2x Intel Xeon 2630 v3 @2.4GHz  8384 / 524                 128 GB           768 Intel Phi 7120p + 20 NVidia K80   8 nodes devoted to visualization
              8 cores each
D.A.V.I.D.E.  OpenPOWER8, 16 cores each                     720 / 45                                    NVIDIA Tesla P100 SXM2                2 visualization nodes, 2 BigMem nodes and 4 BigInsight nodes
PICO          Intel Xeon E5 2670 v2 @2.5GHz                 1456 / 74                                   Nvidia K20
MARCONI-A1    Intel Broadwell, 2x Intel Xeon E5-2697 v4                                128 GB           -                                     switched off Sept 2018
              18 cores each
MARCONI-A2    Intel Knights Landing, 1x Intel Xeon Phi 7250 244800 / 3600              96 GB            -                                     switched off Jan 2020
              68 cores each