D.A.V.I.D.E.



D.A.V.I.D.E. was taken out of production in January 2020.

D.A.V.I.D.E. (Development of an Added Value Infrastructure Designed in Europe) is an energy-aware, petaflops-class high-performance cluster based on the Power architecture and coupled with NVIDIA Tesla Pascal GPUs via NVLink. The innovative design of D.A.V.I.D.E. was developed by E4 Computer Engineering for PRACE, with the ultimate goal of producing a leading-edge HPC cluster that combines higher performance, reduced power consumption and ease of use.

D.A.V.I.D.E. entered the TOP500 and GREEN500 lists in June 2017 in its air-cooled version, while the final production version featured liquid cooling and an innovative technology for monitoring and capping power consumption.

D.A.V.I.D.E. is based on the OpenPOWER platform and is among the harbingers of a new generation of HPC systems that deliver high performance while being environmentally conscious. It was built using best-in-class components plus custom hardware and innovative middleware and system software.

A key feature of D.A.V.I.D.E. is an innovative technology for measuring, monitoring and capping the power consumption of the node and of the whole system, through the collection of data from the relevant components (processors, memory, GPUs, fans) to further improve energy efficiency. The technology has been developed in collaboration with the University of Bologna.
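The capping side of such a technology can be pictured as a control loop: compare synchronized power samples against the node's budget and adjust a processor frequency level accordingly. The sketch below is purely illustrative; the function name, thresholds and frequency steps are assumptions, not the actual E4/University of Bologna implementation.

```python
# Hypothetical sketch of node-level power capping (illustrative only;
# not the actual D.A.V.I.D.E. middleware).

def cap_node_power(samples_w, cap_w, freq_levels_mhz, current_level):
    """Pick a CPU frequency level from synchronized power samples.

    samples_w:       recent per-node power readings in watts
    cap_w:           power budget assigned to this node
    freq_levels_mhz: available frequency steps, ascending
    current_level:   index into freq_levels_mhz
    """
    avg_w = sum(samples_w) / len(samples_w)
    if avg_w > cap_w and current_level > 0:
        return current_level - 1          # over budget: step frequency down
    if avg_w < 0.9 * cap_w and current_level < len(freq_levels_mhz) - 1:
        return current_level + 1          # headroom: step frequency back up
    return current_level                  # within band: hold
```

For example, with levels `[2061, 2861, 3491]` MHz and a 400 W cap, readings averaging 425 W at the top level would step the node down one level.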


FEATURES

  • Off-the-shelf components
  • High speed and accurate per-node power sensing synchronized among the nodes
  • Data accessible out-of-band and without processor intervention
  • Out-of-Band and synchronized fine grain performance sensing
  • Dedicated data-collection subsystem running on management nodes
  • Predictive Power Aware job scheduler and power manager
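A power-aware scheduler of the kind listed above can be reduced to a simple admission rule: start queued jobs only while the predicted system draw stays under the cluster power budget. The sketch below is a minimal illustration under that assumption; the job power predictions, budget and function name are hypothetical, not the actual D.A.V.I.D.E. scheduler.

```python
# Illustrative power-aware admission: admit queued jobs in FIFO order
# while the predicted total draw stays within the cluster budget.
# (Hypothetical sketch, not the actual D.A.V.I.D.E. job scheduler.)

def schedule_within_budget(queued_jobs, running_power_w, budget_w):
    """queued_jobs: list of (job_id, predicted_power_w), FIFO order."""
    admitted = []
    draw = running_power_w
    for job_id, predicted_w in queued_jobs:
        if draw + predicted_w > budget_w:
            break  # preserve FIFO order: stop at the first job that does not fit
        admitted.append(job_id)
        draw += predicted_w
    return admitted
```

For example, with 700 W already drawn and a 1000 W budget, jobs predicted at 100 W and 200 W are admitted, while a following 50 W job waits because the earlier ones exhaust the budget first.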

Model: E4 Cluster Open Rack

Architecture: OpenPOWER with NVIDIA NVLink
Nodes: 45 compute nodes (2x POWER8 + 4x Tesla P100 each) + 2 service and login nodes
Processors: IBM POWER8 with NVLink
GPUs: NVIDIA Tesla P100 SXM2
Internal network: 2x InfiniBand EDR, 2x 1GbE
Cooling: SoC and GPU with direct hot water
Cooling capacity: 40 kW
Heat exchanger: liquid-liquid, redundant pumps
Storage: 1x SATA SSD
Max performance per node: 22 TFLOPS (double precision), 44 TFLOPS (single precision)

Peak Performance: ~1 PFlop/s
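The headline figures above are consistent with each other: 45 compute nodes at 22 TFLOPS double precision each give roughly the quoted ~1 PFlop/s. A quick cross-check:

```python
# Cross-check of the quoted peak performance figures.
nodes = 45
tflops_per_node = 22.0          # double precision, per compute node
peak_pflops = nodes * tflops_per_node / 1000.0
print(peak_pflops)              # 0.99, i.e. ~1 PFlop/s
```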