Infrastructure

Tier-2 clusters of Ghent University

The Stevin computing infrastructure consists of several Tier-2 clusters, which are hosted in the S10 datacenter of Ghent University.

This infrastructure is co-financed by FWO and the Department of Economy, Science and Innovation (EWI).

Tier-2 login nodes

Log in to the HPC-UGent Tier-2 infrastructure using SSH via login.hpc.ugent.be.
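
The usual way in is an interactive ssh session from a terminal. As a purely illustrative sketch (not taken from the official documentation), the snippet below opens the same connection programmatically with the paramiko library; the account name vsc40000 and the key path are placeholders for your own VSC account and SSH key.

    import os
    import paramiko

    # Connect to the HPC-UGent login node over SSH.
    # NOTE: vsc40000 and the key path are placeholders, not real credentials.
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(
        "login.hpc.ugent.be",
        username="vsc40000",                                # placeholder VSC account name
        key_filename=os.path.expanduser("~/.ssh/id_rsa"),   # placeholder SSH key
    )
    _, stdout, _ = client.exec_command("hostname")          # run a trivial command on the login node
    print(stdout.read().decode())
    client.close()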

Tier-2 compute clusters

CPU clusters

The HPC-UGent Tier-2 infrastructure currently includes 5 standard CPU-only clusters of different generations (listed from oldest to newest).

For basic information on using these clusters, see Chapters 4-7 in the HPC-UGent user manual.

cluster name | # nodes | Processor architecture | Usable memory/node | Local disk space/node | Interconnect | Operating system
swalot | 128 | 2 x 10-core Intel Xeon E5-2660v3 (Haswell-EP @ 2.6 GHz) | 116 GiB | 1 TB | FDR InfiniBand | CentOS 7
skitty | 72 | 2 x 18-core Intel Xeon Gold 6140 (Skylake @ 2.3 GHz) | 177 GiB | 1 TB + 240 GB SSD | EDR InfiniBand | RHEL 8
victini (*) | 96 | 2 x 18-core Intel Xeon Gold 6140 (Skylake @ 2.3 GHz) | 88 GiB | 1 TB + 240 GB SSD | 10 GbE | RHEL 8
kirlia | 16 | 2 x 18-core Intel Xeon Gold 6240 (Cascade Lake @ 2.6 GHz) | 738 GiB | 1.6 TB NVMe | HDR-100 InfiniBand | RHEL 8
doduo | 128 | 2 x 48-core AMD EPYC 7552 (Rome @ 2.2 GHz) | 250 GiB | 180 GB SSD | HDR-100 InfiniBand | RHEL 8

Interactive debug cluster

A special-purpose interactive debug cluster is available, where you should always be able to get a job running quickly, without waiting in the queue.

Intended usage is mainly for interactive work, either via an interactive job or using the HPC-UGent web portal (see also Chapter 8 in the HPC-UGent user manual).

Resources on this cluster are heavily oversubscribed (more jobs can be scheduled than there are physical cores), so jobs may run slower when the cluster is busy.

Strict per-user limits are in place: max. 5 jobs in the queue, max. 3 jobs running, and a max. of 8 cores and 27 GB of memory in total for running jobs.

For more information, see Chapter 22 in the HPC-UGent user manual.

cluster name | # nodes | Processor architecture | Usable memory/node | Local disk space/node | Interconnect | Operating system
slaking | 10 | 2 x 12-core Intel Xeon E5-2680 (Haswell @ 2.5 GHz) | 500 GiB | 1.1 TB SSD | FDR InfiniBand | RHEL 8

GPU clusters

Two GPU clusters are available, with different generations of NVIDIA GPUs.

These are well suited for specific workloads, with software that can leverage the GPU resources (like TensorFlow, PyTorch, GROMACS, AlphaFold, etc.).

For more information on using these clusters, see Chapter 21 in the HPC-UGent user manual.

cluster name | # nodes | Processor architecture & GPUs (per node) | Usable memory/node | Local disk space/node | Interconnect | Operating system
joltik | 10 | 2 x 16-core Intel Xeon Gold 6242 (Cascade Lake @ 2.8 GHz); 4 x NVIDIA Volta V100 GPUs (32 GB GPU memory) | 256 GiB | 800 GB SSD | double EDR InfiniBand | RHEL 8
accelgor | 9 | 2 x 24-core AMD EPYC 7413 (Milan @ 2.2 GHz); 4 x NVIDIA Ampere A100 GPUs (80 GB GPU memory) | 500 GiB | 180 GB SSD | HDR-100 InfiniBand | RHEL 8
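
As an illustrative sketch (not part of the official documentation), the short PyTorch check below confirms that a job on one of these clusters actually sees the GPUs it requested; it assumes a PyTorch installation (e.g. provided via a module) is available in the job environment.

    import torch

    # Report whether CUDA-capable GPUs are visible to this job, and which ones.
    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
        # Run a trivial computation on the first GPU as a sanity check.
        x = torch.randn(1024, 1024, device="cuda:0")
        print((x @ x).sum().item())
    else:
        print("No GPUs visible; check the resources requested for this job.")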


Tier-2 shared storage

Filesystem name | Intended usage | Total storage space | Personal storage space | VO storage space (*)
$VSC_HOME | Home directory, entry point to the system | 90 TB | 3 GB (fixed) | (none)
$VSC_DATA | Long-term storage of large data files | 1.9 PB | 25 GB (fixed) | 250 GB
$VSC_SCRATCH | Temporary fast storage of 'live' data for calculations | 1.7 PB | 25 GB (fixed) | 250 GB
$VSC_SCRATCH_ARCANINE | Temporary very fast storage of 'live' data for calculations (recommended for very I/O-intensive jobs) | 70 TB NVMe | (none) | upon request


(*) Storage space for a group of users (a Virtual Organisation, or VO for short) can be increased significantly on request. For more information, see our HPC-UGent tutorial.
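
Inside jobs and login sessions, the filesystems above are exposed through environment variables of the same name. As an illustrative sketch (the staging pattern, directory names, and file names are assumptions, not prescribed here), the snippet below copies input data from $VSC_DATA to $VSC_SCRATCH so the I/O-heavy part of a job runs on the fast scratch filesystem, and copies the result back afterwards.

    import os
    import shutil

    # Resolve the storage locations from the environment variables defined on the cluster.
    data_dir = os.environ["VSC_DATA"]
    scratch_dir = os.environ["VSC_SCRATCH"]

    # Hypothetical project layout and file names, for illustration only.
    input_src = os.path.join(data_dir, "project", "input.dat")
    work_dir = os.path.join(scratch_dir, "my_job")
    os.makedirs(work_dir, exist_ok=True)

    # Stage input onto scratch, run the computation there, then copy results back to $VSC_DATA.
    shutil.copy2(input_src, work_dir)
    # ... run the actual computation in work_dir ...
    shutil.copy2(os.path.join(work_dir, "output.dat"), os.path.join(data_dir, "project"))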

Infrastructure status

Check the system status