LEONARDO: new job local storage resource management


Dear Users,

As on the other CINECA clusters, a temporary storage area local to the compute nodes is available on LEONARDO for the job's execution, accessible via the environment variable $TMPDIR.

Unlike on the other CINECA clusters, on LEONARDO the temporary area of a job is managed by the Slurm job_container/tmpfs plugin, which provides a job-specific, private temporary file system, with a private instance of /tmp (and /dev/shm) in the job's user space. These tmpfs areas are removed at the end of the job, and all data in them is lost. Please note that, thanks to this plugin:

  • the local storage is considered a “resource” on LEONARDO and can be explicitly requested on the diskful nodes only (DCGP and serial nodes) via the sbatch directive or srun option --gres=tmpfs:XX (for instance --gres=tmpfs:200GB); see the example batch script after this list. If not requested, /tmp has a default size of 10 GB
  • on the booster (diskless) partition the gres/tmpfs resource cannot be requested, and the /tmp area is created with a fixed quota of 10 GB
  • the requested amount of the gres/tmpfs resource contributes to the consumed budget, changing the number of accounted equivalent core hours; see the dedicated section on Accounting on CINECA clusters. The accounting of this new resource will be active starting from tomorrow.
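
As an illustration, a minimal batch script requesting a larger job-local /tmp on a diskful node might look like the following sketch. The partition name, account, executable, and requested size are placeholders to adapt to your own project; only the --gres=tmpfs option and the use of $TMPDIR come from this announcement.

    #!/bin/bash
    #SBATCH --job-name=tmpfs_example
    #SBATCH --partition=<dcgp_partition>   # a diskful (DCGP or serial) partition
    #SBATCH --account=<your_account>       # your project account
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --time=00:10:00
    #SBATCH --gres=tmpfs:200GB             # request 200 GB of job-local /tmp

    # $TMPDIR points to the job-private temporary area provided by the
    # job_container/tmpfs plugin; everything written here is removed at job end.
    cp my_input_data "$TMPDIR"/
    cd "$TMPDIR"
    ./my_application my_input_data

    # Copy results back to a persistent filesystem before the job ends
    cp results.out "$SLURM_SUBMIT_DIR"/

If --gres=tmpfs is omitted (or on the booster partition, where it cannot be requested), the same script runs with the default 10 GB /tmp quota.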

For more details, please refer to the Disks and Filesystems section of Leonardo’s User Guide.

Best regards,

HPC User Support @ CINECA