Leonardo scheduled maintenance, February 26th-29th
Dear Users, This is to inform you that scheduled extraordinary maintenance operations will be carried out on Leonardo’s power system. The operations will start at 08:00 a.m. on the 26th, and we […]
Leonardo: DCGP issue on submission solved
Dear Users, we have fixed the issue preventing the submission of jobs requesting more than 32 CPUs per node on the DCGP partition. Best regards, HPC User Support @ CINECA
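As a sketch of the kind of request that was previously rejected, a batch script asking for more than 32 CPUs per node on DCGP might look like the following. The partition name, account placeholder, and application are illustrative assumptions, not taken from the announcement:

```shell
#!/bin/bash
# Hypothetical SLURM batch script. The partition name "dcgp_usr_prod",
# the account placeholder, and "./my_application" are assumptions for
# illustration only.
#SBATCH --job-name=dcgp_test
#SBATCH --partition=dcgp_usr_prod
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=64      # more than 32 CPUs per node
#SBATCH --time=00:10:00
#SBATCH --account=<your_account>

srun ./my_application
```

Submitting with `sbatch` should now succeed for such requests, whereas before the fix the scheduler rejected them at submission time.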
Problem on HPC newsletter
Dear Users, you may have received a mail from the HPC News Center saying that you have been unsubscribed. We are working to fix the issue; please ignore future similar […]
Leonardo back in production
Dear Users, Leonardo is back in production. We apologize for the unexpected extended duration of the stop. Best regards, HPC User Support – CINECA
Leonardo: unscheduled short stop tomorrow, January 25th
Dear All, due to the need to urgently update the booster nodes’ firmware and the board management software, an unscheduled short stop of the cluster will take place tomorrow morning from […]
Safari Mandana
Mandana Safari holds an MSc in Nanophysics from Razi University, Iran, where her master’s thesis focused on density functional theory (DFT) applications. She pursued a Ph.D. at SISSA, Italy, exploring charge transfer mechanisms […]
Melfi Giuseppe
Giuseppe Melfi completed a Master’s degree in Astrophysics, with a thesis on fluid dynamics simulations conducted in an HPC environment. He initially worked in Milan in the field of Data and […]
Leonardo maintenance update
Dear Users, Leonardo maintenance operations have been completed. However, the cluster is still out of production in order to resolve some issues with the water cooling system that arose when […]
Redenti Michael
Michael Redenti graduated with a degree in Applied Mathematics from the University of Stirling in 2018 and gained his Master’s by Research from the University of Edinburgh in 2023. His […]
Leonardo back in production and scheduled operation on login nodes on December 6th
Dear Users, Leonardo’s maintenance has successfully concluded and the cluster is now back in full production. As anticipated, we changed the job environment to place the job’s tmpdir in a private […]
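A per-job private tmpdir is typically exposed to job scripts through the `TMPDIR` environment variable; the sketch below shows how a job script might use it. This assumes the scheduler exports `TMPDIR` to the job (a common SLURM convention, not confirmed by the announcement), and the file and application names are placeholders:

```shell
#!/bin/bash
#SBATCH --job-name=tmpdir_demo
#SBATCH --time=00:05:00

# Assumption: the scheduler exports TMPDIR pointing at the job-private
# scratch area; files written there are cleaned up when the job ends.
echo "Job-private tmpdir: ${TMPDIR}"

# Stage input data into the private area, then run from there
# (input.dat and my_application are illustrative placeholders).
cp input.dat "${TMPDIR}/"
./my_application "${TMPDIR}/input.dat"
```

Because the directory is private to the job, concurrent jobs on the same node no longer risk colliding on shared temporary files.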